A Colossal-AI implementation of the Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance. We reproduced the model architecture and applied multiple optimization strategies, e.g. data parallelism, tensor parallelism and ZeRO, to scale the training to multiple GPUs with the help of Colossal-AI.
You are very welcome to contribute in any way to help us enhance the usability of this project.
- Install Colossal-AI, which is a PyTorch-based large-scale model training system with various efficient parallelization techniques.
```bash
pip install colossalai
```
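You can quickly verify the installation by importing the package (a simple sanity check; `__version__` is assumed to be exposed, as is typical for PyPI packages):

```bash
python -c "import colossalai; print(colossalai.__version__)"
```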
- Use HuggingFace `datasets` to download the Wikitext-2 dataset. The placeholder `/PATH/TO/DATA` is optional and defaults to `./wiki_dataset`.
```bash
python ./tools/download_wiki.py -o </PATH/TO/DATA>
```
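For reference, the download step is roughly equivalent to the following HuggingFace `datasets` snippet (a minimal sketch; the `wikitext-2-raw-v1` configuration is an assumption, and the script's actual options may differ):

```python
from datasets import load_dataset

# Download Wikitext-2 from the HuggingFace Hub and keep a local copy.
# "wikitext-2-raw-v1" is an assumption; the script may use another config.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
dataset.save_to_disk("./wiki_dataset")  # default output directory per the step above
```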
- Download the tokenizer files with the following command. The placeholder `/PATH/TO/TOKENIZER/` is optional and defaults to `./token`.
```bash
python ./tools/download_token.py </PATH/TO/TOKENIZER/>
```
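The tokenizer download is likely equivalent to fetching a pretrained tokenizer with HuggingFace `transformers` and saving it locally. A minimal sketch, assuming a GPT-2 tokenizer (the project's actual tokenizer choice may differ):

```python
from transformers import AutoTokenizer

# Fetch a pretrained tokenizer and store its files in the local directory
# used by the training script. "gpt2" is only an assumption for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.save_pretrained("./token")  # default tokenizer directory per the step above
```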
- Configure your settings in `CONFIG_FILE.py`, for example:
```python
SEQ_LENGTH = 2048
BATCH_SIZE = 8
NUM_EPOCHS = 10
parallel = dict(
    tensor=dict(mode='1d', size=2),
)
model = "palm_small"
```
We have provided some example configurations in `./configs`.
- Run the training script with `torchrun`, pointing `DATA` and `TOKENIZER` to your dataset and tokenizer directories:
```bash
DATA=/PATH/TO/DATA/ TOKENIZER=/PATH/TO/TOKENIZER/ torchrun --nproc_per_node=NUM_GPUS train.py --from_torch --config CONFIG_FILE.py
```
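For example, using the default paths from the steps above and the provided ZeRO configuration on 8 GPUs (an illustrative invocation):

```bash
DATA=./wiki_dataset TOKENIZER=./token torchrun --nproc_per_node=8 train.py --from_torch --config configs/palm_zero.py
```

The `--from_torch` flag indicates that the distributed environment should be initialised from the environment variables set by `torchrun` (presumably via Colossal-AI's `launch_from_torch`).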
A Dockerfile is provided in this repository, and you can run PaLM in Docker with the following commands.
```bash
# build the docker image
docker build -t palm .

# run training inside the container
docker run -ti --gpus all --rm palm torchrun --nproc_per_node 8 train.py --from_torch --config configs/palm_zero.py
```
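If your dataset and tokenizer live on the host, you can mount them into the container and point the `DATA`/`TOKENIZER` variables at the mounted paths. An illustrative invocation (the `/workspace` working directory is an assumption and may differ in the provided Dockerfile):

```bash
docker run -ti --gpus all --rm \
    -v $PWD/wiki_dataset:/workspace/wiki_dataset \
    -v $PWD/token:/workspace/token \
    -e DATA=/workspace/wiki_dataset -e TOKENIZER=/workspace/token \
    palm torchrun --nproc_per_node 8 train.py --from_torch --config configs/palm_zero.py
```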