🦙🌲🤏 Alpaca (Low-Rank Edition)

The code in this repo is not yet fully tested, and I'm still retraining the model with the outputs included. The goal is for the code in generate.py to be fully functional.

This repository contains code for reproducing the Stanford Alpaca results. Users will need to install a fork of transformers to access Jason Phang's LLaMA implementation. For fine-tuning, we use PEFT to train low-rank adapters on top of the LLaMA foundation model. Also included is code to download this model from the Hugging Face model hub. (Only run this code if you have permission from Meta Platforms Inc.!) Once I've finished running the fine-tuning code myself, I'll put the LoRA weights on the Hub as well, and the code in generate.py should work as expected.

Setup

  1. Install dependencies (including zphang's transformers fork)

    pip install -q datasets accelerate loralib sentencepiece
    
    pip install -q git+https://github.com/zphang/transformers@llama_push
    pip install -q git+https://github.com/huggingface/peft.git
    
  2. Install bitsandbytes from source (required for 8-bit loading; a quick sanity check follows below)
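
To confirm the environment is set up, a quick import check can help. This is only a sketch: the LLaMA class names below follow zphang's fork (and the decapoda-research checkpoints) and may change once LLaMA support lands in mainline transformers.

    # Sanity check: these imports should succeed without CUDA errors.
    import torch
    import bitsandbytes  # 8-bit kernels built from source above
    from peft import PeftModel
    from transformers import LLaMAForCausalLM, LLaMATokenizer  # fork-specific class names

    print("CUDA available:", torch.cuda.is_available())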

Inference

See generate.py. This script loads the decapoda-research/llama-7b-hf base model from the Hugging Face model hub along with the LoRA weights from tloen/alpaca-lora-7b, and runs inference on a specified input. Users should treat this as example code for using the model and modify it as needed.
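
For reference, here is a minimal sketch of that flow using the PEFT API, assuming the fork's LLaMA class names; the prompt and generation parameters are illustrative, not necessarily the values used in generate.py.

    import torch
    from peft import PeftModel
    from transformers import LLaMAForCausalLM, LLaMATokenizer, GenerationConfig

    # Load the 8-bit base model, then attach the LoRA weights on top of it.
    tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf")
    model = LLaMAForCausalLM.from_pretrained(
        "decapoda-research/llama-7b-hf",
        load_in_8bit=True,      # requires bitsandbytes
        device_map="auto",
    )
    model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b")
    model.eval()

    # Alpaca-style prompt: a fixed header followed by the instruction.
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\nTell me about alpacas.\n\n### Response:\n"
    )
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    with torch.no_grad():
        output = model.generate(
            input_ids=input_ids,
            generation_config=GenerationConfig(temperature=0.1, top_p=0.75, num_beams=4),
            max_new_tokens=128,
        )
    print(tokenizer.decode(output[0], skip_special_tokens=True))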

Training

Under construction.
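
In the meantime, here is a minimal sketch of the fine-tuning approach described above: inject low-rank adapters into the frozen 8-bit base model with PEFT and train only the adapter weights. It assumes peft's prepare_model_for_int8_training helper, and the LoRA hyperparameters are illustrative, not necessarily the values used in finetune.py.

    from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
    from transformers import LLaMAForCausalLM, LLaMATokenizer

    tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf")
    model = LLaMAForCausalLM.from_pretrained(
        "decapoda-research/llama-7b-hf", load_in_8bit=True, device_map="auto"
    )
    model = prepare_model_for_int8_training(model)  # freeze base weights, prep for 8-bit training

    # Low-rank adapters on the attention projections; r/alpha/dropout are illustrative.
    config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # only the adapter weights are trainable

    # From here, training proceeds with a standard transformers Trainer over the
    # tokenized alpaca_data.json instruction/response pairs.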

To do

  • Hyperparameter tuning
  • Documentation for notebook
  • Support for 13b, 30b, 65b
  • Train a version that doesn't waste tokens on the prompt header
  • Inference CLI and evaluation
  • Better disclaimers about why using LLaMA without permission is very bad!