## 🦙🌲🤏 Alpaca-LoRA: Low-Rank LLaMA Instruct-Tuning
- 🤗 **Try the pretrained model out [here](https://huggingface.co/spaces/tloen/alpaca-lora), courtesy of a GPU grant from Huggingface!**
- Share custom LoRA adapters, including adapters for the larger models, [here](https://github.com/tloen/alpaca-lora/issues/52)
- Users have created a Discord server for discussion and support [here](https://discord.gg/prbq284xX5)
- `alpaca-lora-30b` can be used like ChatGPT; see [here](https://twitter.com/algo_diver/status/1637851640027041798)
This repository contains code for reproducing the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) results using [low-rank adaptation (LoRA)](https://arxiv.org/pdf/2106.09685.pdf).
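The core trick behind LoRA is to freeze the pretrained weights and train only a pair of small low-rank matrices per adapted layer. Here is a minimal NumPy sketch of that idea (illustrative only; the training code in this repository goes through Hugging Face's PEFT library, and the dimensions below are just example values):

```python
import numpy as np

# LoRA sketch: instead of updating a full weight matrix W (d_out x d_in),
# train two small matrices B (d_out x r) and A (r x d_in) with r << d,
# and use W + (alpha / r) * B @ A at inference time.
d_out, d_in, r, alpha = 4096, 4096, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)).astype(np.float32)  # frozen base weight
A = rng.standard_normal((r, d_in)).astype(np.float32)      # trainable
B = np.zeros((d_out, r), dtype=np.float32)                 # trainable, zero-init

# Because B starts at zero, the adapted weight is initially identical to W,
# so fine-tuning begins exactly at the pretrained model.
W_adapted = W + (alpha / r) * B @ A
assert np.allclose(W_adapted, W)

# Trainable-parameter savings: 2 * r * d values instead of d * d.
full_params = d_out * d_in
lora_params = A.size + B.size
print(f"trainable params: {lora_params:,} vs {full_params:,} "
      f"({lora_params / full_params:.2%})")
```

With these example dimensions the adapter trains well under 1% of the layer's parameters, which is why the resulting adapter checkpoints linked below are only a few megabytes.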
We provide an Instruct model of similar quality to `text-davinci-003` that can run [on a Raspberry Pi](https://twitter.com/miolini/status/1634982361757790209) (for research),
- We could likely improve our model performance significantly if we had a better dataset. Consider supporting the [LAION Open Assistant](https://open-assistant.io/) effort to produce a high-quality dataset for supervised fine-tuning (or bugging them to release their data).
- We're continually fixing bugs and conducting training runs, and the weights on the Hugging Face Hub are being updated accordingly. In particular, those facing issues with response lengths should make sure that they have the latest version of the weights and code.
### Resources

- [alpaca.cpp](https://github.com/antimatter15/alpaca.cpp), a native client for running Alpaca models on the CPU
- [Alpaca-LoRA-Serve](https://github.com/deep-diver/Alpaca-LoRA-Serve), a ChatGPT-style interface for Alpaca models
- [AlpacaDataCleaned](https://github.com/gururise/AlpacaDataCleaned), a project to improve the quality of the Alpaca dataset
- Various adapter weights (download at own risk):
  - 7B (decapoda-research/llama-7b-hf):
    - https://huggingface.co/tloen/alpaca-lora-7b
    - https://huggingface.co/samwit/alpaca7B-lora
    - 🇧🇷 https://huggingface.co/22h/cabrita-lora-v0-1
    - 🇨🇳 https://huggingface.co/qychen/luotuo-lora-7b-0.1
    - 🇯🇵 https://huggingface.co/kunishou/Japanese-Alapaca-LoRA-7b-v0
  - 13B (decapoda-research/llama-13b-hf):
    - https://huggingface.co/chansung/alpaca-lora-13b
    - https://huggingface.co/mattreid/alpaca-lora-13b
    - https://huggingface.co/samwit/alpaca13B-lora
    - 🇯🇵 https://huggingface.co/kunishou/Japanese-Alapaca-LoRA-13b-v0
    - 🇰🇷 https://huggingface.co/chansung/koalpaca-lora-13b
  - 30B (decapoda-research/llama-30b-hf):
    - https://huggingface.co/baseten/alpaca-30b
    - https://huggingface.co/chansung/alpaca-lora-30b
    - 🇯🇵 https://huggingface.co/kunishou/Japanese-Alapaca-LoRA-30b-v0
- [alpaca-native](https://huggingface.co/chavinlo/alpaca-native), a replication using the original Alpaca code
### Example outputs
**Instruction**: Tell me about alpacas.