- Table of Contents
- LLM Instruction tuning for school math questions
- Roadmap
- License
- Links
- References & Citations
End-to-end MLOps LLM instruction fine-tuning based on PEFT & QLoRA to solve grade school math problems.
Base LLM: OpenLLaMA
Dataset: Grade School Math Instructions Dataset
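The recipe in one picture: load OpenLLaMA with the base weights quantized to 4-bit NF4 (bitsandbytes), attach LoRA adapters via PEFT so only a small set of low-rank weights is trained, and fine-tune on instruction/response pairs from the grade school math data. Below is a minimal sketch, assuming the 3B checkpoint, a public Hugging Face mirror of the dataset, and illustrative hyperparameters — the real values live in the `config/` files described later.

```python
# Minimal QLoRA sketch. Assumptions: the 3B OpenLLaMA checkpoint, a public HF
# mirror of the GSM instructions data, and illustrative hyperparameters.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "openlm-research/open_llama_3b"  # assumed checkpoint size

# 4-bit NF4 quantization of the frozen base weights -- the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections; only these weights are trained.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

# Dataset id and column names are assumptions -- check the dataset card.
dataset = load_dataset("qwedsacf/grade-school-math-instructions", split="train")

def tokenize(example):
    text = f"{example['INSTRUCTION']}\n{example['RESPONSE']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="models/qlora", per_device_train_batch_size=4,
                           gradient_accumulation_steps=4, num_train_epochs=1,
                           learning_rate=2e-4, bf16=True, logging_steps=50),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Only the adapter weights are saved at the end, which is what makes fine-tuning a multi-billion-parameter model feasible on a single GPU.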
- NLP: PyTorch, Hugging Face Transformers, Accelerate, PEFT
- Research: Jupyter Lab, MLflow
- Framework: FastAPI
- Deployment: Docker, Amazon Web Services (AWS), GitHub Actions
- Version Control: Git, DVC, GitHub
The project structure template can be found here.
├── LICENSE
├── Makefile             <- Makefile with commands like `make data` or `make train`
├── README.md            <- The top-level README for developers using this project.
├── requirements.txt     <- The requirements file for reproducing the analysis environment, e.g.
│                           generated with `pip freeze > requirements.txt`
│
├── config               <- Stores pipelines' configuration files
│   ├── data-config.yaml
│   ├── model-config.yaml
│   └── model-parameters.yaml
│
├── data
│   ├── external         <- Data from third-party sources.
│   ├── interim          <- Intermediate data that has been transformed.
│   ├── processed        <- The final, canonical data sets for modeling.
│   └── raw              <- The original, immutable data dump.
│
├── assets               <- Stores public assets for the README file
├── docs                 <- A default Sphinx project; see sphinx-doc.org for details
│
├── models               <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks            <- Jupyter notebooks for research.
│
├── setup.py             <- Makes this project pip installable with `pip install -e .`
├── src                  <- Source code for use in this project.
│   ├── __init__.py      <- Makes src a Python module
│   │
│   ├── logging          <- Defines loggers for the app
│   ├── utils
│   │   ├── __init__.py
│   │   └── common.py    <- Functions for common utilities
│   │
│   ├── data             <- Scripts to download or generate data
│   │   ├── components   <- Classes for pipelines
│   │   ├── pipeline     <- Scripts for data aggregation
│   │   ├── configuration.py <- Class to manage config files
│   │   ├── entity.py    <- Stores configuration dataclasses
│   │   └── make_dataset.py  <- Script to run data pipelines
│   │
│   └── models           <- Scripts to train models and use trained models to make predictions
│       ├── components   <- Classes for pipelines
│       ├── pipeline     <- Scripts for training and prediction
│       ├── configuration.py <- Class to manage config files
│       ├── entity.py    <- Stores configuration dataclasses
│       ├── predict_model.py <- Script to run the prediction pipeline
│       └── train_model.py   <- Script to run the training pipeline
│
├── main.py              <- Script to run the model training pipeline
├── app.py               <- Script to start the FastAPI app
│
├── .env.example         <- Example .env structure
├── Dockerfile           <- Configures the Docker container image
├── .github
│   └── workflows
│       └── main.yaml    <- CI/CD config
│
├── .gitignore           <- Specifies files to be ignored by git
├── .dvcignore           <- Specifies files to be ignored by DVC
│
├── .dvc                 <- DVC config
├── dvc.lock             <- Stores DVC-tracked information
└── dvc.yaml             <- Specifies pipeline version control
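The `configuration.py` / `entity.py` pairing in the tree above follows a common pattern: `entity.py` declares a frozen dataclass per pipeline stage, and a `ConfigurationManager` in `configuration.py` reads the YAML files under `config/` and returns those typed objects. A minimal sketch, with hypothetical key and field names (the real ones are defined by `data-config.yaml` and friends):

```python
# Sketch of the configuration.py / entity.py pattern; key and field names
# are hypothetical -- the real ones come from the YAML files in config/.
from dataclasses import dataclass
from pathlib import Path

import yaml


@dataclass(frozen=True)
class DataIngestionConfig:  # would live in entity.py
    source_url: str
    raw_dir: Path
    processed_dir: Path


class ConfigurationManager:  # would live in configuration.py
    def __init__(self, config_path: Path = Path("config/data-config.yaml")):
        with open(config_path) as f:
            self._cfg = yaml.safe_load(f)

    def get_data_ingestion_config(self) -> DataIngestionConfig:
        section = self._cfg["data_ingestion"]  # hypothetical top-level key
        return DataIngestionConfig(
            source_url=section["source_url"],
            raw_dir=Path(section["raw_dir"]),
            processed_dir=Path(section["processed_dir"]),
        )
```

Pipeline components then receive one typed config object instead of parsing YAML themselves, which keeps each stage easy to construct and test.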
- Clone the project
git clone https://github.com/Logisx/LLMath-QLoRA
- Go to the project directory
cd LLMath-QLoRA
- Install dependencies
pip install -r requirements.txt
- Start the app
python app.py
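What `app.py` does, in sketch form: load the base model, attach the trained LoRA adapter, and expose generation behind a FastAPI route. The route name, payload shape, and model paths below are illustrative assumptions, not the repo's actual API:

```python
# Illustrative FastAPI service; route, payload, and paths are assumptions.
from fastapi import FastAPI
from peft import PeftModel
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

app = FastAPI(title="LLMath-QLoRA")

base = "openlm-research/open_llama_3b"  # assumed base checkpoint
adapter = "models/qlora"                # assumed adapter location

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # attach the LoRA weights
model.eval()


class Question(BaseModel):
    text: str
    max_new_tokens: int = 256


@app.post("/solve")
def solve(q: Question):
    inputs = tokenizer(q.text, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=q.max_new_tokens)
    answer = tokenizer.decode(output[0], skip_special_tokens=True)
    return {"answer": answer}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```

With the server up, POST a word problem as JSON to `/solve` (e.g. via curl or FastAPI's auto-generated docs at `/docs`) to get the model's worked answer back.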
- Testing: Develop unit and integration tests
- Hyperparameter tuning: Train a better model through hyperparameter tuning
- User interface: Create a friendly app interface
- Efficient Fine-Tuning with LoRA: A Guide to Optimal Parameter Selection for Large Language Models
- Grade School Math Instructions Fine-Tune OPT
@article{cobbe2021gsm8k,
title={Training Verifiers to Solve Math Word Problems},
author={Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and Hilton, Jacob and Nakano, Reiichiro and Hesse, Christopher and Schulman, John},
journal={arXiv preprint arXiv:2110.14168},
year={2021}
}
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = may,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = apr,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}