Eric Wang 3 years ago
parent
commit
fb9d9832e7
Changed 3 files with 110 additions and 0 deletions
  1. README.md (+1 -0)
  2. alpaca_data_cleaned_archive.json (+0 -0)
  3. alpaca_data_gpt4.json (+109 -0)

README.md (+1 -0)

@@ -2,6 +2,7 @@
 
 - 🤗 **Try the pretrained model out [here](https://huggingface.co/spaces/tloen/alpaca-lora), courtesy of a GPU grant from Huggingface!**
 - Users have created a Discord server for discussion and support [here](https://discord.gg/prbq284xX5)
+- 4/6: Repo has been updated with Microsoft Research's [LLaMA-GPT4 dataset](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM).
 
 This repository contains code for reproducing the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) results using [low-rank adaptation (LoRA)](https://arxiv.org/pdf/2106.09685.pdf).
 We provide an Instruct model of similar quality to `text-davinci-003` that can run [on a Raspberry Pi](https://twitter.com/miolini/status/1634982361757790209) (for research),

alpaca_data_cleaned.json → alpaca_data_cleaned_archive.json (renamed, +0 -0)


File diffs are limited because there are too many changes
alpaca_data_gpt4.json (+109 -0)


Some files were not shown because too many files changed in this diff
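The commit adds `alpaca_data_gpt4.json`, which is assumed here to follow the same schema as the original `alpaca_data.json`: a JSON array of records with `instruction`, `input`, and `output` fields. A minimal sketch of loading it and building an instruction-style prompt (the template below is illustrative, not necessarily the one the repo uses for fine-tuning):

```python
import json

def load_examples(path="alpaca_data_gpt4.json"):
    # Assumes the standard Alpaca schema: a JSON array of
    # {"instruction": ..., "input": ..., "output": ...} records.
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def to_prompt(example):
    # Generic instruction-tuning prompt template (illustrative only;
    # the repo's actual template may differ).
    if example.get("input"):
        return (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{example['instruction']}\n\n### Response:\n"
```

Records with a non-empty `input` field get the three-section template; the rest use the shorter two-section form.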