This repository contains code for reproducing the Stanford Alpaca results. Note that it currently depends on a fork of transformers, which users will need to install.
Install dependencies (including zphang's transformers fork):

```
pip install -q datasets accelerate loralib sentencepiece
pip install -q git+https://github.com/zphang/transformers@llama_push
pip install -q git+https://github.com/huggingface/peft.git
```
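The loralib and peft packages installed above implement LoRA, which freezes the base weights and trains only a low-rank update, so the trainable parameter count collapses. A minimal sketch of the arithmetic, assuming LLaMA-7B's 4096-dimensional attention projections and an illustrative rank of 8 (neither number is prescribed by this repository):

```python
def lora_param_counts(d_out: int, d_in: int, r: int) -> tuple[int, int]:
    """Compare full fine-tuning vs. LoRA for one weight matrix.

    A d_out x d_in weight W is frozen; LoRA trains B (d_out x r) and
    A (r x d_in), whose product is the low-rank update added to W.
    """
    full = d_out * d_in          # parameters updated by full fine-tuning
    lora = r * (d_out + d_in)    # parameters in the B and A factors
    return full, lora

# Illustrative numbers: a 4096x4096 projection (LLaMA-7B hidden size), rank 8.
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora, lora / full)  # LoRA trains well under 1% of the weights
```

This is why the adapter checkpoint (e.g. tloen/alpaca-lora-7b) is tiny compared to the base model: only the factorized updates are saved.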
See generate.py. This script loads the decapoda-research/llama-7b-hf model from the Hugging Face Hub along with the LoRA weights from tloen/alpaca-lora-7b, and runs inference on a specified input. Treat it as example code for using the model, and modify it as needed.
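The inference flow can be sketched as below. The prompt template is in the Alpaca instruction style (the exact wording in generate.py may differ slightly), and the class names come from zphang's transformers fork, so they may not match mainline transformers; the heavy model-loading code is kept inside a function so the sketch can be read without triggering a multi-gigabyte download.

```python
def generate_prompt(instruction: str) -> str:
    # Alpaca-style instruction prompt; wording is approximate.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def run_inference(instruction: str) -> str:
    # Imports are local because they require the forked transformers,
    # peft, and a GPU-capable torch install.
    import torch
    from peft import PeftModel
    from transformers import LLaMAForCausalLM, LLaMATokenizer  # fork-specific names

    tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf")
    model = LLaMAForCausalLM.from_pretrained(
        "decapoda-research/llama-7b-hf",
        load_in_8bit=True,     # fit 7B weights on a single consumer GPU
        device_map="auto",
    )
    # Attach the trained LoRA adapter on top of the frozen base model.
    model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b")

    inputs = tokenizer(generate_prompt(instruction), return_tensors="pt")
    with torch.no_grad():
        output = model.generate(
            input_ids=inputs["input_ids"].to(model.device),
            max_new_tokens=128,
        )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Calling `run_inference("Tell me about alpacas.")` would then print the model's completion after the `### Response:` marker.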
Under construction.

- 13b
- 30b
- 65b