lucidrains/PaLM-rlhf-pytorch

✨ 😅 Is it possible to use OpenAI's ChatGPT to train this ChatGPT?

Yonv1943 opened this issue · 8 comments

OpenAI employed 40 labelers when training their own ChatGPT, and the annotation process lasted for 3 months.

It is difficult for the open-source community (GitHub) to reproduce the Reinforcement Learning from Human Feedback (RLHF) stage of this work, since OpenAI employed 40 people to provide the human feedback.

However, we can treat OpenAI's web version of ChatGPT as the human labeler, annotating data ✨ for us when training our own ChatGPT.

Step 2: a labeler (a human, or OpenAI's ChatGPT) ranks the outputs from best to worst.

[figure: chatgpt.png]

This sounds a bit funny 😅, but I currently think it's doable.
@lucidrains

You mean that "it is forbidden by the Terms of Service (ToS) of OpenAI ChatGPT".
Thank you for your response to this issue.

Maybe the open-source community can find other ways to train ChatGPT, especially the RLHF part in Step 2.

Using GPT-NeoX for RLAIF as a substitute for RLHF may be a plausible solution. Anthropic showed promising results with synthetic data generation. The nonprofit Ought successfully trained a reward model with RLAIF for summarization using GPT-Neo (1.3B).

I am working with CarperAI and a small group to open-source a few datasets as part of a bigger project relating to this. Harrison Chase and John Nay of LangChain also offered to help. We plan to generate synthetic data for different tasks relating to SFT, RLAIF, CoT, and training the reward models.

It is possible to use the ChatGPT of OpenAI to train our own ChatGPT.

The figure below illustrates how we obtained the Alpaca model. For the data, we generated instruction-following demonstrations by building upon the self-instruct method. We started with the 175 human-written instruction-output pairs from the self-instruct seed set. We then prompted text-davinci-003 to generate more instructions using the seed set as in-context examples. We improved over the self-instruct method by simplifying the generation pipeline (see details in GitHub) and significantly reduced the cost. Our data generation process results in 52K unique instructions and the corresponding outputs, which cost less than $500 using the OpenAI API.

https://crfm.stanford.edu/2023/03/13/alpaca.html

https://github.com/tatsu-lab/stanford_alpaca

Thanks a lot for your insightful sharing :)

Could you please explain how the training method you used is compatible with the ChatGPT and LLaMA 2 ToS?