@@ -21,13 +21,14 @@ pip install -q git+https://github.com/huggingface/peft.git

2. [Install bitsandbytes from source](https://github.com/TimDettmers/bitsandbytes/blob/main/compile_from_source.md)

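The linked guide is authoritative; as a rough sketch, a from-source build of bitsandbytes usually looks like the following (the `make` target and `CUDA_VERSION` value depend on your installed CUDA toolkit, so the exact commands here are an assumption):

```shell
# Hypothetical sketch only — follow compile_from_source.md for the real steps.
git clone https://github.com/TimDettmers/bitsandbytes.git
cd bitsandbytes
# Choose the make target matching your CUDA toolkit, e.g. cuda11x for CUDA 11.x
CUDA_VERSION=117 make cuda11x
python setup.py install
```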
-### Inference

+### Inference (`generate.py`)

See `generate.py`. This file reads the `decapoda-research/llama-7b-hf` model from the Hugging Face model hub and the LoRA weights from `tloen/alpaca-lora-7b`, then runs inference on a specified input. Users should treat this as example code for the use of the model and modify it as needed.

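A minimal sketch of the flow that paragraph describes — loading the 8-bit base model, attaching the LoRA adapter with `peft`, and building an Alpaca-style prompt. This assumes the `peft`/`transformers` APIs of the time; the template wording and helper names are illustrative, and `generate.py` itself is authoritative:

```python
def load_alpaca(
    base_model: str = "decapoda-research/llama-7b-hf",
    lora_weights: str = "tloen/alpaca-lora-7b",
):
    # Imports deferred so the prompt helper below works without heavy deps.
    import torch
    from peft import PeftModel
    from transformers import LlamaForCausalLM, LlamaTokenizer

    tokenizer = LlamaTokenizer.from_pretrained(base_model)
    model = LlamaForCausalLM.from_pretrained(
        base_model,
        load_in_8bit=True,          # int8 via bitsandbytes, fits on one GPU
        torch_dtype=torch.float16,
        device_map="auto",
    )
    # Attach the LoRA adapter on top of the frozen base model.
    model = PeftModel.from_pretrained(model, lora_weights)
    return tokenizer, model


def generate_prompt(instruction: str, input: str = "") -> str:
    """Alpaca-style prompt template; the exact wording in the repo may differ."""
    if input:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

The prompt is then tokenized and passed to `model.generate`, and the text after `### Response:` is the model's answer.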
-### Training

+### Training (`finetune.py`)

-Under construction.

+Under construction. If you're impatient, note that this file contains a set of hardcoded hyperparameters you should feel free to modify.

+PRs adapting this code to multi-GPU setups and larger models are always welcome.

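To make "hardcoded hyperparameters" concrete, here is an illustrative sketch of the kind of constants such a fine-tuning script defines near the top. The specific values below are placeholders chosen for illustration, not a statement of what `finetune.py` actually contains:

```python
# Illustrative training hyperparameters of the kind finetune.py hardcodes.
# Placeholder values — check the script itself before training.
MICRO_BATCH_SIZE = 4                 # per-device batch size
BATCH_SIZE = 128                     # effective batch size
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
EPOCHS = 3
LEARNING_RATE = 3e-4
CUTOFF_LEN = 256                     # max tokenized sequence length
LORA_R = 8                           # LoRA rank
LORA_ALPHA = 16                      # LoRA scaling factor
LORA_DROPOUT = 0.05
```

Raising `MICRO_BATCH_SIZE` (and lowering `GRADIENT_ACCUMULATION_STEPS` accordingly) is the usual first change on GPUs with more memory.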
### To do