# 🦙🌲🤏 Alpaca (Low-Rank Edition)

The code in this repo does not yet work. I'm still retraining the model with the outputs included.

This repository contains code for reproducing the Stanford Alpaca results using low-rank adaptation (LoRA). Users will need to be ready to fork `transformers`.

## Setup

1. Install dependencies, including zphang's `transformers` fork and Hugging Face's `peft`:

   ```bash
   pip install -q datasets accelerate loralib sentencepiece
   pip install -q git+https://github.com/zphang/transformers@llama_push
   pip install -q git+https://github.com/huggingface/peft.git
   ```

2. Install `bitsandbytes` from source, as shown in the sketch below.
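
The exact build steps depend on your CUDA toolkit. As a rough sketch, assuming a CUDA 11.x system (the `CUDA_VERSION` value and `make` target below are assumptions to adapt to your machine, not fixed requirements):

```bash
# Sketch: build bitsandbytes from source on a CUDA 11.x system.
# Adjust CUDA_VERSION and the make target to match your toolkit.
git clone https://github.com/TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=117 make cuda11x
python setup.py install
```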

## Inference

See `generate.py`. This script loads the `decapoda-research/llama-7b-hf` model from the Hugging Face model hub along with the LoRA weights from `tloen/alpaca-lora-7b`, then runs inference on a specified input. Treat it as example code for using the model, and modify it as needed.
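
For orientation, here is a minimal sketch of that loading path. It assumes the class names exposed by the zphang fork at the time (`LLaMAForCausalLM` / `LLaMATokenizer`; later `transformers` releases renamed these), an 8-bit load via `bitsandbytes`, and a placeholder prompt:

```python
# Minimal sketch of loading the base model plus LoRA weights and generating.
# Class names follow the zphang transformers fork; adjust for newer releases.
import torch
from peft import PeftModel
from transformers import LLaMAForCausalLM, LLaMATokenizer

tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LLaMAForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,          # requires bitsandbytes
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the Alpaca LoRA adapter on top of the frozen base model.
model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b")

prompt = "Below is an instruction that describes a task. ..."  # placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```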

## Training

Under construction.
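
In the meantime, the core of the approach is wrapping the base model with a PEFT LoRA adapter before fine-tuning. A minimal sketch follows; the rank, alpha, dropout, and target modules are illustrative assumptions, not this repo's tuned hyperparameters:

```python
# Sketch: attach LoRA adapters to the base model with peft.
# Hyperparameters below are illustrative assumptions only.
import torch
from peft import LoraConfig, get_peft_model
from transformers import LLaMAForCausalLM

model = LLaMAForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights train
```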

## To do

- Hyperparameter tuning
- Documentation for notebook
- Support for 13b, 30b, 65b
- Inference CLI and evaluation
- Better disclaimers about why using LLaMA without permission is very bad!