openpsi-project/ReaLHF

GRPO has no PRM

Opened this issue · 8 comments

GRPO involves step-level reward handling, also known as a process reward model (PRM), but I don't see it in the code. Can you tell me why, or explain how to use step-level rewards? Thanks.

Sorry for the late reply.

PRMs and ORMs are similar. The current code simply extracts a score at the end of each sentence. You can modify the model interface to use scores at all positions (or at the step level, e.g. scores output at every comma token), just as we use values in PPO.
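For concreteness, here is a minimal sketch of the step-level idea described above: gather the reward model's per-position scores only at delimiter tokens. The function name and the token ids are hypothetical illustrations, not part of the ReaLHF API.

```python
import torch

def extract_step_rewards(scores: torch.Tensor,
                         token_ids: torch.Tensor,
                         step_token_id: int) -> torch.Tensor:
    """Gather one reward per step from per-token reward-model scores.

    scores: reward-model output, one scalar per token position ([seq_len]).
    token_ids: generated token ids, same length as `scores`.
    step_token_id: id of the delimiter marking a step boundary
                   (e.g. a comma or newline token; tokenizer-specific).
    """
    mask = token_ids == step_token_id  # True at step boundaries
    return scores[mask]                # one score per step

# Toy usage: 8 positions, step boundaries at indices 2 and 5
# (11 stands in for a hypothetical comma token id).
scores = torch.tensor([0.1, 0.2, 0.9, 0.1, 0.3, 0.7, 0.2, 0.4])
ids = torch.tensor([5, 8, 11, 9, 3, 11, 7, 2])
step_rewards = extract_step_rewards(scores, ids, step_token_id=11)
print(step_rewards)  # tensor([0.9000, 0.7000])
```

The resulting per-step rewards can then be assigned to the token spans they cover, in the same way PPO spreads value estimates across positions.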

This example may also be helpful.

We'd like to help you if you encounter any issues during implementation.

Thanks, I will try it on this repo.

I've seen the example you linked; it uses PPO to optimize a sentiment classifier. In my experience, and from talking with colleagues, nobody has gotten a benefit from this so far. Is it just an example to learn from, or have you actually seen gains from training a classification task with RL methods like PPO?

About this case:
https://github.com/openpsi-project/ReaLHF/blob/main/examples/customized_exp/ppo_sentiment.py

It's just an example to learn from. You can customize the interface to do what you want, either training the PRM or using the PRM for PPO. We're sorry that we don't have the bandwidth to provide all reference implementations.

thanks

Could you create a WeChat group for technical discussion and add me? My WeChat ID: yiyepiaoling0715

Sorry for the late reply. I've just sent you a request on WeChat.