A structured template for building robust generative AI applications, with modular organization and best practices built-in.
- Modular project structure for scalability
- Pre-configured support for multiple LLM providers (Claude, GPT)
- Built-in prompt engineering utilities
- Rate limiting and token management
- Robust error handling
- Caching mechanism for API responses
- Example implementations and notebooks
The project is organized as follows:

```
generative_ai_project/
├── config/ # Configuration directory
│ ├── __init__.py
│ ├── model_config.yaml # Model-specific configurations
│ ├── prompt_templates.yaml # Prompt templates
│ └── logging_config.yaml # Logging settings
│
├── src/ # Source code
│ ├── llm/ # LLM clients
│ │ ├── base.py # Base LLM client
│ │ ├── claude_client.py # Anthropic Claude client
│ │ ├── gpt_client.py # OpenAI GPT client
│ │ └── utils.py # Shared utilities
│ │
│ ├── prompt_engineering/ # Prompt engineering tools
│ │ ├── templates.py # Template management
│ │ ├── few_shot.py # Few-shot prompt utilities
│ │ └── chain.py # Prompt chaining logic
│ │
│ ├── utils/ # Utility functions
│ │ ├── rate_limiter.py # API rate limiting
│ │ ├── token_counter.py # Token counting
│ │ ├── cache.py # Response caching
│ │ └── logger.py # Logging utilities
│ │
│ └── handlers/ # Error handling
│ └── error_handler.py
│
├── data/ # Data directory
│ ├── cache/ # Cache storage
│ ├── prompts/ # Prompt storage
│ ├── outputs/ # Output storage
│ └── embeddings/ # Embedding storage
│
├── examples/ # Example implementations
│ ├── basic_completion.py
│ ├── chat_session.py
│ └── chain_prompts.py
│
└── notebooks/ # Jupyter notebooks
├── prompt_testing.ipynb
├── response_analysis.ipynb
    └── model_experimentation.ipynb
```
To get started:

- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/generative_ai_project.git
  cd generative_ai_project
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure your environment:
  - Copy `config/model_config.yaml.example` to `config/model_config.yaml`
  - Add your API keys and configurations
- Review the examples in the `examples/` directory (a minimal sketch follows below)
- Start with the notebooks in `notebooks/` for experimentation
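As a quick orientation, a script in the spirit of `examples/basic_completion.py` might look like the following. This is a hedged sketch: the `ClaudeClient` class, its `config_path` argument, and the `complete()` method are assumptions about the template's API, not its confirmed interface.

```python
# Hypothetical usage -- class and method names are assumed, not confirmed.
from src.llm.claude_client import ClaudeClient

client = ClaudeClient(config_path="config/model_config.yaml")  # assumed constructor
response = client.complete(
    prompt="Explain retrieval-augmented generation in two sentences.",
    max_tokens=256,  # assumed parameter name
)
print(response)
```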
The configuration files in `config/` control the template's behavior:

- `model_config.yaml`: Configure API keys and model parameters
- `prompt_templates.yaml`: Define reusable prompt templates
- `logging_config.yaml`: Configure logging behavior
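For illustration only, `config/model_config.yaml` might contain entries along these lines; the exact keys are an assumption, so follow whatever the shipped `.example` file defines.

```yaml
# Illustrative keys -- defer to the .example file shipped with the template.
claude:
  api_key: ${ANTHROPIC_API_KEY}   # read from the environment, never committed
  model: claude-3-5-sonnet-20241022
  max_tokens: 1024
gpt:
  api_key: ${OPENAI_API_KEY}
  model: gpt-4o
  temperature: 0.7
```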
The template's key components:

- LLM Clients (`src/llm/`)
  - Base client with common functionality
  - Specific implementations for different providers
  - Utility functions for token counting and rate limiting
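To show how the base/provider split might work, `src/llm/base.py` could define an abstract interface like the sketch below, which `claude_client.py` and `gpt_client.py` would subclass. This is an assumed design, not the file's confirmed contents.

```python
# Illustrative base class -- the real interface may differ.
from abc import ABC, abstractmethod


class BaseLLMClient(ABC):
    """Common functionality shared by provider-specific clients."""

    def __init__(self, model: str, max_retries: int = 3):
        self.model = model
        self.max_retries = max_retries

    @abstractmethod
    def complete(self, prompt: str, **kwargs) -> str:
        """Send a prompt to the provider and return the generated text."""
        raise NotImplementedError
```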
- Prompt Engineering (`src/prompt_engineering/`)
  - Template management system
  - Few-shot prompt utilities
  - Prompt chaining capabilities
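To make the few-shot utilities concrete, a helper in the style of `src/prompt_engineering/few_shot.py` might assemble prompts as below. The function name, signature, and Q/A format are illustrative assumptions, not the module's actual API.

```python
# Illustrative few-shot builder -- names and format are assumptions.
def build_few_shot_prompt(
    instructions: str,
    examples: list[tuple[str, str]],
    query: str,
) -> str:
    """Combine task instructions, worked examples, and the new query."""
    parts = [instructions]
    for question, answer in examples:
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {query}\nA:")
    return "\n\n".join(parts)


prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great service!", "positive"), ("Way too slow.", "negative")],
    "The food was wonderful.",
)
```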
- Utilities (`src/utils/`)
  - Rate limiting for API calls
  - Token counting
  - Response caching
  - Logging
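As an example of the rate-limiting utility, `src/utils/rate_limiter.py` could implement a sliding-window limiter along these lines. The class name and strategy are assumptions for illustration, not the module's confirmed contents.

```python
# Illustrative sliding-window limiter -- the real module may differ.
import time
from collections import deque


class RateLimiter:
    """Allow at most `max_calls` calls within any `period`-second window."""

    def __init__(self, max_calls: int, period: float = 60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls: deque[float] = deque()

    def wait(self) -> None:
        """Block until a call is permitted, then record its timestamp."""
        now = time.monotonic()
        # Discard timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest recorded call leaves the window.
            time.sleep(max(0.0, self.period - (now - self.calls[0])))
            self.calls.popleft()
        self.calls.append(time.monotonic())
```

A client would call `limiter.wait()` immediately before each API request.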
Recommended practices:

- Keep configuration in YAML files
- Implement proper error handling
- Use rate limiting for APIs
- Maintain separation between model clients
- Cache results when appropriate (a minimal caching sketch follows after this list)
- Document your code
- Use notebooks for experimentation
- Follow modular design principles
- Write tests for new components
- Use proper version control
- Keep documentation updated
- Monitor API usage and limits
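To illustrate the caching guideline above, a minimal response cache in the spirit of `src/utils/cache.py` might key on a hash of the prompt and generation parameters. This is a sketch under assumptions: the module's real API, persistence to `data/cache/`, and the `complete()` method name are not confirmed by the template.

```python
# Illustrative sketch -- the template's actual cache.py may differ.
import hashlib
import json

_cache: dict[str, str] = {}


def make_cache_key(prompt: str, **params) -> str:
    """Derive a stable key from the prompt and generation parameters."""
    payload = json.dumps({"prompt": prompt, **params}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def cached_complete(client, prompt: str, **params) -> str:
    """Return a cached response when available; otherwise call the client."""
    key = make_cache_key(prompt, **params)
    if key not in _cache:
        _cache[key] = client.complete(prompt, **params)  # assumed method name
    return _cache[key]
```

A production version would likely persist entries under `data/cache/` and add expiry, but the keying idea is the same.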
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- Brij Kishore Pandey
For any queries, reach out to:
- GitHub: @honestsoul
- Email: brij.pydata@gmail.com
⭐ If you find this template useful, please consider giving it a star!
