This project provides a template for building OpenAI Custom GPT Actions using a microservice pattern and Event-Driven Architecture. The template leverages various technologies and tools to facilitate efficient development, testing, deployment, and CI/CD. The core technologies and their roles are as follows:
- Python: The primary programming language used for developing GPT Actions.
- Poetry: Dependency management and packaging tool for Python projects.
- FastAPI: Modern, fast (high-performance) web framework for building APIs with Python 3.7+ based on standard Python type hints.
- SQLModel: SQL databases in Python, designed for simplicity, compatibility, and robustness. It’s built on top of Pydantic and SQLAlchemy.
- Postgres: Powerful, open-source object-relational database system.
- Kafka: Distributed event streaming platform capable of handling trillions of events a day.
- Kong: Cloud-native, fast, scalable, and distributed API gateway.
- Docker: Platform for developing, shipping, and running applications in containers.
- DevContainer: Development environments hosted in containers to ensure consistency across different environments.
- Kubernetes: Container orchestration system for automating deployment, scaling, and management of containerized applications.
- Terraform: Infrastructure as Code (IaC) tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share.
- testcontainers: Provides lightweight, disposable instances of common databases, Selenium web browsers, or anything else that can run in a Docker container, for testing.
- GitHub Actions: CI/CD tool that automates workflows, including testing and deployment.
- VSCode: Free source-code editor made by Microsoft for Windows, Linux, and macOS.
- PgAdmin: Open-source administration and development platform for PostgreSQL.
The template is designed with a microservice pattern and Event-Driven Architecture to ensure each GPT Action is isolated, scalable, and easy to manage. Here’s an overview of how the components interact:
- Microservices: Each component is a separate microservice built using FastAPI and SQLModel, containerized with Docker.
- Event-Driven Architecture: Kafka is used for event streaming, enabling real-time data processing and communication between microservices.
- API Gateway: Kong serves as the API gateway, routing requests to the appropriate microservice.
- Database: Postgres is used for persistent data storage.
- Development Environment: DevContainer ensures a consistent development environment, and VSCode provides a powerful IDE.
- Deployment: Kubernetes manages the containerized applications, and Terraform handles infrastructure provisioning.
- CI/CD: GitHub Actions automate the testing and deployment processes.
- Testing: testcontainers facilitate isolated and reliable testing environments.
Microservice Architecture:
- Microservices break down the application into smaller, independent services that can be developed, deployed, and scaled independently.
- Benefits: Scalability, flexibility, and resilience.
Event-Driven Architecture:
- Utilizes events to trigger and communicate between decoupled services.
- Benefits: Real-time data processing, improved scalability, and fault tolerance.
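As a purely illustrative sketch (not code from the template), the decoupling idea can be shown with a minimal in-process event bus; in the template itself, Kafka plays this role across separate services:

```python
from collections import defaultdict
from typing import Any, Callable, DefaultDict, List


class EventBus:
    """Tiny in-process stand-in for a broker such as Kafka."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        # A service registers interest in a topic without knowing the publisher.
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: Any) -> None:
        # The publisher does not know (or care) which services consume the event.
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()
received = []
bus.subscribe("action.completed", received.append)
bus.publish("action.completed", {"action": "echo", "status": "ok"})
```

Because publisher and subscriber only share a topic name, either side can be replaced or scaled independently, which is the property Kafka provides between real microservices.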
FastAPI:
- High-performance, easy-to-use web framework for building APIs.
- Benefits: Automatic interactive API documentation, high performance, and easy integration with asynchronous libraries.
SQLModel:
- Simplifies working with SQL databases and provides a convenient way to define models and perform queries.
- Benefits: Combines the power of SQLAlchemy and Pydantic, type annotations, and easy model definition.
Postgres:
- Robust and reliable relational database system.
- Benefits: ACID compliance, extensibility, and strong community support.
Kafka:
- Distributed event streaming platform for high-throughput, low-latency data processing.
- Benefits: Scalability, durability, and fault-tolerance.
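A sketch of how a service might serialize and publish events; `ActionEvent` and the topic name are hypothetical, and `publish_event` assumes a kafka-python-style producer exposing `send(topic, value)`.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class ActionEvent:
    action: str
    payload: dict


def serialize_event(event: ActionEvent) -> bytes:
    # Kafka messages are plain bytes; JSON keeps them language-neutral.
    return json.dumps(asdict(event)).encode("utf-8")


def publish_event(producer, topic: str, event: ActionEvent) -> None:
    # `producer` is assumed to follow the kafka-python API, e.g.
    # producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send(topic, serialize_event(event))
```

Keeping serialization separate from the producer makes the event format easy to unit-test without a running broker.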
Kong:
- API gateway for managing, monitoring, and securing API requests.
- Benefits: Load balancing, rate limiting, and request transformation.
Docker:
- Simplifies the development and deployment process by packaging applications in containers.
- Benefits: Consistency across environments, isolation, and portability.
Kubernetes:
- Manages containerized applications across multiple hosts, providing deployment, scaling, and management capabilities.
- Benefits: Automated scaling, self-healing, and efficient resource utilization.
Terraform:
- Infrastructure as code tool for provisioning and managing cloud infrastructure.
- Benefits: Version control, automation, and reproducibility.
Testcontainers:
- Enables running Docker containers for integration tests, ensuring a consistent testing environment.
- Benefits: Reliable testing, isolation, and easy setup.
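A sketch of an integration-test fixture using testcontainers-python; the fixture name is hypothetical, and Docker must be running for it to work (the import sits inside the function only so this sketch loads without testcontainers installed):

```python
import pytest


@pytest.fixture(scope="session")
def pg_url():
    # In a real test suite this import would sit at module top.
    from testcontainers.postgres import PostgresContainer

    # Starts a throwaway Postgres in Docker and tears it down after the tests.
    with PostgresContainer("postgres:16") as pg:
        yield pg.get_connection_url()
```

Tests that accept `pg_url` then run against a real, isolated Postgres instance instead of mocks or a shared dev database.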
GitHub Actions:
- Automates workflows for building, testing, and deploying applications.
- Benefits: Continuous integration and delivery, automation, and integration with GitHub repositories.
Clone the Repository:
git clone https://github.com/your-repo/openai-custom-gpt-action-template.git
cd openai-custom-gpt-action-template
Install Dependencies:
poetry install
Run the Application:
poetry run uvicorn src.main:app --reload
Run Tests:
poetry run pytest
Build Docker Image:
docker build -t custom-gpt-action-template .
Run with Docker Compose:
docker compose up
Deploy to Kubernetes:
kubectl apply -f k8s-deployment.yml
Manage Infrastructure with Terraform:
terraform init
terraform apply
Configure GitHub Actions Workflow:
- The CI/CD pipeline is defined in .github/workflows/ci-cd.yml.
- Ensure your repository has the necessary secrets configured for Docker, Kubernetes, and other integrations.
- Modularity: Each component can be developed, tested, and deployed independently, allowing for greater flexibility and maintainability.
- Scalability: Easily scale individual services based on demand, ensuring optimal resource utilization.
- Reliability: Enhanced fault tolerance and resilience through service isolation and event-driven communication.
- Efficiency: Streamlined development and deployment processes with Docker, Kubernetes, and CI/CD integration.
- Consistency: Consistent development, testing, and production environments using containers and infrastructure as code.
By leveraging these technologies, this template provides a robust foundation for developing, testing, and deploying OpenAI Custom GPT Actions in a microservice pattern using event-driven architecture.