Transformer Reinforcement Learning X
trlX is a distributed training framework designed from the ground up for fine-tuning large language models with reinforcement learning, using either a provided reward function or a reward-labeled dataset.
Training support for 🤗 Hugging Face models is provided by Accelerate-backed trainers, allowing users to fine-tune causal and T5-based language models of up to 20B parameters, such as `facebook/opt-6.7b`, `EleutherAI/gpt-neox-20b`, and `google/flan-t5-xxl`. For models beyond 20B parameters, trlX provides NVIDIA NeMo-backed trainers that leverage efficient parallelism techniques to scale effectively.
The following RL algorithms are currently implemented:
Algorithm | Accelerate Trainer | NeMo Trainer |
---|---|---|
Proximal Policy Optimization (PPO) | ✅ | ⏳ |
Implicit Language Q-Learning (ILQL) | ✅ | ✅ |
🧀 CHEESE: Collect human annotations for your RL application with our human-in-the-loop data collection library.
Installation
```bash
git clone https://github.com/CarperAI/trlx.git
cd trlx
pip install torch --extra-index-url https://download.pytorch.org/whl/cu116  # for CUDA
pip install -e .
```
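After installing, a quick import check confirms the package is available. A minimal sketch that only uses modules referenced later in this README:

```python
# Sanity-check the editable install.
import trlx
from trlx.data.default_configs import default_ppo_config

config = default_ppo_config()
print(type(config).__name__)  # prints the default config's class name if the install succeeded
```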
Examples
For more usage, see the examples. You can also try the Colab notebooks below:
- Simulacra (GPT2, ILQL)
- Sentiment (GPT2, ILQL)
How to Train
You can train a model using a reward function or a reward-labeled dataset.
Using a reward function
```python
import trlx

trainer = trlx.train('gpt2', reward_fn=lambda samples, **kwargs: [sample.count('cats') for sample in samples])
```
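For anything beyond a toy reward, a named function reads better than a lambda. A minimal sketch, using the same `(samples, **kwargs)` signature as the example above; the length penalty is an arbitrary illustration:

```python
import trlx

def reward_fn(samples, **kwargs):
    """Reward each sample by how often it mentions cats, with a small length penalty."""
    rewards = []
    for sample in samples:
        score = sample.count('cats') - 0.01 * len(sample.split())
        rewards.append(float(score))
    return rewards

trainer = trlx.train('gpt2', reward_fn=reward_fn)
```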
Using a reward-labeled dataset
```python
import trlx

trainer = trlx.train('EleutherAI/gpt-j-6B', samples=['dolphins', 'geese'], rewards=[1.0, 100.0])
```
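In practice, the samples and rewards usually come from a labeled dataset rather than from literals. A minimal sketch, assuming a small list of (text, score) records that you supply yourself:

```python
import trlx

# Hypothetical reward-labeled records; replace with your own dataset.
labeled_data = [
    ("The movie was a delight from start to finish.", 1.0),
    ("I walked out halfway through.", -1.0),
]

samples = [text for text, _ in labeled_data]
rewards = [score for _, score in labeled_data]

trainer = trlx.train('gpt2', samples=samples, rewards=rewards)
```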
Trainers provide a wrapper over their underlying model:

```python
# Assumes `tokenizer` is the matching Hugging Face tokenizer for the trained model.
trainer.generate(**tokenizer('Q: Who rules the world? A:', return_tensors='pt'), do_sample=True)
```
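Assuming `generate` returns token ids like the underlying `model.generate`, decoding them back to text is the usual next step. A minimal sketch using a standard Hugging Face tokenizer for the same base model:

```python
from transformers import AutoTokenizer

# Assumes the trainer was built from 'gpt2'; swap in the model you trained.
tokenizer = AutoTokenizer.from_pretrained('gpt2')

output_ids = trainer.generate(
    **tokenizer('Q: Who rules the world? A:', return_tensors='pt'),
    do_sample=True,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```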
Configure Hyperparameters
```python
import trlx
from trlx.data.default_configs import default_ppo_config

config = default_ppo_config()
config.model.model_path = 'EleutherAI/gpt-neox-20b'
config.train.seq_length = 32
config.train.batch_size = 16

trainer = trlx.train(config=config, reward_fn=lambda samples, **kwargs: [float(int(sample)) for sample in samples])
```
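The lambda above assumes every sample parses as an integer, which generated text often will not. A minimal sketch of a more defensive reward function (the fallback reward of 0.0 is an arbitrary choice):

```python
import trlx
from trlx.data.default_configs import default_ppo_config

def numeric_reward(samples, **kwargs):
    rewards = []
    for sample in samples:
        try:
            rewards.append(float(int(sample.strip())))
        except ValueError:
            rewards.append(0.0)  # arbitrary fallback for samples that are not integers
    return rewards

config = default_ppo_config()
trainer = trlx.train(config=config, reward_fn=numeric_reward)
```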
Save the resulting model as a Hugging Face pretrained language model, ready to upload to the Hub:
```python
trainer.save_pretrained('/path/to/output/folder/')
```
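The saved folder can be loaded back with the standard transformers API. A minimal sketch for a causal LM; the path is the placeholder from the example above, and if the tokenizer was not saved alongside the weights, load it from the original base checkpoint instead:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Reload the fine-tuned model written by trainer.save_pretrained(...)
model = AutoModelForCausalLM.from_pretrained('/path/to/output/folder/')
tokenizer = AutoTokenizer.from_pretrained('/path/to/output/folder/')
```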
Use 🤗 Accelerate to launch distributed training
```bash
accelerate config  # choose DeepSpeed option
accelerate launch examples/simulacra.py
```
Use NeMo-Megatron to launch distributed training
Follow the setup instructions in the NeMo README.
```bash
python examples/nemo_ilql_sentiments.py
```
For more usage, see the NeMo README.
Use Ray Tune to launch hyperparameter sweep
```bash
python -m trlx.sweep --config configs/sweeps/ppo_sweep.yml examples/ppo_sentiments.py
```
Logging
trlX uses the standard Python `logging` library to log training information to the console. The default logger is set to the `INFO` level, which means that `INFO`, `WARNING`, `ERROR`, and `CRITICAL` level messages will be printed to standard output.
To change the log level directly, you can use the verbosity setter. For example, to set the log level to `WARNING`, use:
```python
import trlx

trlx.logging.set_verbosity(trlx.logging.WARNING)
```
This will suppress `INFO` level messages, but still print `WARNING`, `ERROR`, and `CRITICAL` level messages.
You can also control logging verbosity by setting the `TRLX_VERBOSITY` environment variable to one of the standard logging level names:

- `CRITICAL` (`trlx.logging.CRITICAL`)
- `ERROR` (`trlx.logging.ERROR`)
- `WARNING` (`trlx.logging.WARNING`)
- `INFO` (`trlx.logging.INFO`)
- `DEBUG` (`trlx.logging.DEBUG`)
```bash
export TRLX_VERBOSITY=WARNING
```
By default, `tqdm` progress bars are used to display training progress. You can disable them with `trlx.logging.disable_progress_bar()` and re-enable them with `trlx.logging.enable_progress_bar()`.
Messages can be formatted with greater detail by calling `trlx.logging.enable_explicit_format()`. This injects call-site information into each log message, which may be helpful for debugging:
```
[2023-01-01 05:00:00,000] [INFO] [ppo_orchestrator.py:63:make_experience] [RANK 0] Message...
```
💡 Tip: To reduce the amount of logging output, you might find it helpful to change the log levels of third-party libraries used by trlX. For example, try adding `transformers.logging.set_verbosity_error()` to the top of your trlX scripts to silence verbose messages from the `transformers` library (see their logging docs for more details).
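Putting the two together, a typical script preamble might look like this (a minimal sketch; only calls already mentioned in this section are used):

```python
import transformers
import trlx

# Silence routine messages from transformers, keep warnings and above from trlX.
transformers.logging.set_verbosity_error()
trlx.logging.set_verbosity(trlx.logging.WARNING)
```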
Contributing
For development, check out these guidelines and also read our docs.
Acknowledgements
Many thanks to Leandro von Werra for his work on trl, a library that initially inspired this repo.