This project offers a suite of tools for evaluating and fine-tuning Llama models in the context of prompt scoring. It includes scripts for adding base prompts with constraints to a database, fine-tuning models, generating performance graphs, and performing detailed comparisons between models. The code requires API keys, which should be added to the `keys.py` file.
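The exact contents of `keys.py` depend on which providers the scripts call; as a rough sketch (the variable names below are placeholders, not necessarily the ones the scripts actually import):

```python
# keys.py -- illustrative layout only; the variable names here are
# placeholders, not necessarily the ones the scripts import.
OPENAI_API_KEY = "sk-..."   # key for any hosted-model scoring calls
HF_TOKEN = "hf_..."         # Hugging Face token for gated Llama weights
```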
Ensure you have Python 3 installed on your system. You can download and install it from the official Python website: https://www.python.org/downloads/
To set up your local development environment, follow these steps:
```sh
# Clone the repository
git clone [repository-url]

# Navigate to the project directory
cd [project-directory]
```
To add a new base prompt and append constraints, which are then stored in the database for evaluation, run the script below. Note that for your evaluation, prompts and their scores are already stored in the Prompts DB.
```sh
python3 PromptScore.py
```
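Under the hood, storing a base prompt with its constraints amounts to a database insert. Purely as an illustration (the actual schema and storage backend of the Prompts DB are not documented here), a SQLite version might look like:

```python
import sqlite3

# Illustrative only: the table and column names are assumptions,
# not the project's actual schema.
conn = sqlite3.connect("prompts.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS prompts (id INTEGER PRIMARY KEY, "
    "base_prompt TEXT, constraints TEXT, score REAL)"
)
conn.execute(
    "INSERT INTO prompts (base_prompt, constraints, score) VALUES (?, ?, ?)",
    ("Summarize the article.", "max 100 words; neutral tone", None),
)
conn.commit()
conn.close()
```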
To initiate the training process and fine-tune the Llama model for prompt scoring:
```sh
python3 PromptScore_Llama.py
```
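The script encapsulates the training details; for orientation only, a LoRA-style fine-tune of a Llama checkpoint with the Hugging Face stack follows a pattern roughly like this (the checkpoint name, hyperparameters, and training text below are placeholders, not the project's actual configuration):

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

# Placeholder checkpoint and training text -- the real script defines its own.
model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters so only a small fraction
# of the parameters is actually trained.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"]),
)

def tokenize(example):
    tokens = tokenizer(example["text"], truncation=True, max_length=512)
    tokens["labels"] = tokens["input_ids"].copy()  # causal-LM objective
    return tokens

train_data = Dataset.from_dict(
    {"text": ["Prompt: Summarize the article. Score: 7"]}
).map(tokenize, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=train_data,
).train()
```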
Graphs for each model can be generated by running the individual scripts in the llms/ directory:

```sh
python3 llms/file_name.py
```

Replace `file_name.py` with the name of the script you intend to run.
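Each script produces its own figure; the general shape is a short matplotlib plot over the stored scores, along these lines (the data here is illustrative; the real scripts read from the Prompts DB):

```python
import matplotlib.pyplot as plt

# Illustrative data -- the real scripts pull scores from the Prompts DB.
prompts = ["prompt 1", "prompt 2", "prompt 3", "prompt 4"]
scores = [6.5, 7.2, 5.8, 8.1]

plt.bar(prompts, scores)
plt.xlabel("Prompt")
plt.ylabel("Score")
plt.title("Prompt scores for one model")
plt.savefig("scores.png")
```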
To compare multiple models based on specified criteria:
```sh
python3 compare.py
```
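compare.py implements the comparison itself; conceptually, it lines up each model's scores on the same prompts and summarizes them, roughly like this (model names and numbers are placeholders):

```python
# Illustrative comparison -- model names and scores are placeholders.
results = {
    "llama-base": [6.5, 7.2, 5.8],
    "llama-finetuned": [7.1, 7.9, 6.4],
}

for model, scores in results.items():
    mean = sum(scores) / len(scores)
    print(f"{model:>16}: mean score {mean:.2f} over {len(scores)} prompts")
```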
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
To contribute to the project, please follow these steps:
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
For support, open an issue through GitHub or contact the project maintainers directly.
This project is licensed under the MIT License - see the LICENSE file for details.