The FountainAI project is a collection of independently developed microservices, each encapsulated in its own GitHub repository and designed as a FastAPI application. This guide provides a centralized approach to deploying, managing, and testing the FountainAI ecosystem using Docker Compose.
The FountainAI-Compose repository is dedicated to managing the full-stack deployment of all FountainAI services, allowing each service to function as an isolated, containerized application that integrates seamlessly in the FountainAI ecosystem.
- Enable modular, specification-driven local development for each FountainAI microservice.
- Provide a centralized Docker Compose setup for integration testing, local full-stack development, and production deployment.
- Outline best practices for transitioning from local development in individual repositories to full-stack integration within FountainAI-Compose.
- Centralize key information in the OpenAPI specifications, making them the single source of truth for each microservice.
The FountainAI system consists of the following microservices, each hosted in its own GitHub repository:
- Ensemble Service - The main, public-facing service that routes requests to other internal FountainAI services.
- Story Factory Service - Manages the generation and orchestration of story components.
- Spoken Word Service - Handles processing and generation of spoken lines.
- Core Script Management Service - Manages scripts, scenes, and core narrative structures.
- Session and Context Service - Manages user sessions and contextual information.
- Character Service - Manages character data and attributes within the FountainAI environment.
- Performer Service - Manages performer profiles and assignments to characters.
- Central Sequence Service - Handles sequencing and chronological data across the FountainAI services.
- Paraphrase Service - Manages paraphrasing and text alternatives.
- Action Service - Manages actions associated with characters and narrative events.
Each microservice is developed, tested, and deployed independently. The Ensemble Service is the only public-facing component, responsible for routing requests to internal services. All other services operate within an internal Docker network managed by Docker Compose.
Each service is developed in its own GitHub repository, focusing exclusively on Dockerized local development without introducing inter-service dependencies or Docker Compose configurations. This allows each service to:
- Be developed and tested independently.
- Use a self-contained `Dockerfile` for containerization (a minimal sketch follows this list).
- Avoid any dependency on other FountainAI services for local development.
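For reference, a minimal self-contained Dockerfile for one of these FastAPI services might look like the following. The module path (`app.main:app`) and `requirements.txt` are assumptions, not taken from any actual FountainAI repository; adjust them to the service's real layout:

```dockerfile
# Minimal sketch of a self-contained Dockerfile for a FountainAI FastAPI service.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code.
COPY . .

# Serve the FastAPI app with uvicorn on the port used in local development.
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```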
- Clone the Microservice Repository: Clone the repository for the microservice you wish to work on, for example:

  ```bash
  git clone https://github.com/Contexter/FountainAI-Action-Service.git
  cd FountainAI-Action-Service
  ```

- Build and Run the Dockerized Service: Each service includes a `Dockerfile` that handles containerization for local development. Use the following commands to build and run the service:

  ```bash
  docker build -t fountainai-action-service .
  docker run -p 8000:8000 fountainai-action-service
  ```

  This setup runs the service on `localhost:8000`, enabling you to test endpoints independently.

- Testing and Validation:
  - Use tools like Postman or curl to interact with the API endpoints (see the curl example after this list).
  - Run any included tests (e.g., using pytest) to validate functionality in isolation.
  - The service can be developed, refined, and tested independently of other services until ready for integration.
- Script-Based Automation (Optional): Some services may include modular scripts (e.g., `generate_dockerfile.py`, `generate_models.py`, `generate_routes.py`) that automate setup based on OpenAPI specifications. These scripts streamline the initial setup and ensure consistency with the API specification.
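As a quick smoke test of a locally running container, you can hit the FastAPI defaults with curl (`/openapi.json` and `/docs` are standard FastAPI paths; any service-specific endpoints depend on the individual API):

```bash
# FastAPI serves its OpenAPI spec and interactive docs by default;
# both make convenient smoke tests for a freshly started container.
curl -s http://localhost:8000/openapi.json | head -c 200; echo
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/docs
```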
No docker-compose.yml file should be included in individual service repositories. All inter-service dependencies and network configurations are managed in this centralized FountainAI-Compose repository.
The FountainAI-Compose repository contains the centralized Docker Compose configuration (docker-compose.yml) for deploying and testing all services together. This setup facilitates:
- Full-Stack Development: Run all services in a unified environment to test inter-service interactions.
- Production-Ready Deployment: Centralize environment variables, manage network configurations, and control service dependencies.
- Consistent Service Discovery: Use Docker's internal networking, allowing services to communicate with each other via service names (e.g., `http://action-service:8001`), as sketched below.
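Inside the Ensemble Service, this service discovery reduces to reading the internal URL from an environment variable (set in the compose file below) and forwarding the request. A minimal sketch, assuming httpx as the HTTP client and a hypothetical `/actions` endpoint:

```python
import os

import httpx
from fastapi import FastAPI

app = FastAPI()

# Resolved by Docker's internal DNS to the container, e.g. http://action-service:8001.
ACTION_SERVICE_URL = os.environ["INTERNAL_ACTION_SERVICE_URL"]

@app.get("/actions")
async def list_actions():
    # Forward the request to the internal Action Service by its compose service name.
    async with httpx.AsyncClient() as client:
        response = await client.get(f"{ACTION_SERVICE_URL}/actions")  # hypothetical path
    return response.json()
```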
The FountainAI-Compose repository is organized as follows:
```
FountainAI-Compose/
├── README.md
└── docker-compose.yml
```
- docker-compose.yml: Defines each FountainAI service, specifying its Docker image or build context and relevant configurations for centralized deployment.
- README.md: The current guide, providing setup instructions for deploying FountainAI in a unified environment.
The docker-compose.yml file in the FountainAI-Compose repository references each service either by its GitHub repository (so the latest code is pulled at build time) or, for production environments, by a pre-built image.
Below is an example docker-compose.yml file, where each service is referenced by its repository URL:
```yaml
version: '3.8'

services:
  # Public-facing Ensemble Service
  ensemble-service:
    build:
      context: https://github.com/Contexter/Ensemble-Service.git
    ports:
      - "8000:8000" # Expose Ensemble Service on port 8000
    environment:
      - INTERNAL_ACTION_SERVICE_URL=http://action-service:8001
      - INTERNAL_CHARACTER_SERVICE_URL=http://character-service:8002
      - INTERNAL_CENTRAL_SEQUENCE_SERVICE_URL=http://central-sequence-service:8003
      - INTERNAL_CORE_SCRIPT_SERVICE_URL=http://core-script-service:8004
      - INTERNAL_PARAPHRASE_SERVICE_URL=http://paraphrase-service:8005
      - INTERNAL_PERFORMER_SERVICE_URL=http://performer-service:8006
      - INTERNAL_SESSION_CONTEXT_SERVICE_URL=http://session-context-service:8007
      - INTERNAL_SPOKEN_WORD_SERVICE_URL=http://spoken-word-service:8008
      - INTERNAL_STORY_FACTORY_SERVICE_URL=http://story-factory-service:8009
    depends_on:
      - action-service
      - character-service
      - central-sequence-service
      - core-script-service
      - paraphrase-service
      - performer-service
      - session-context-service
      - spoken-word-service
      - story-factory-service

  # Internal Services
  action-service:
    build:
      context: https://github.com/Contexter/Action-Service.git
    expose:
      - "8001"

  character-service:
    build:
      context: https://github.com/Contexter/Character-Service.git
    expose:
      - "8002"

  central-sequence-service:
    build:
      context: https://github.com/Contexter/Central-Sequence-Service.git
    expose:
      - "8003"

  core-script-service:
    build:
      context: https://github.com/Contexter/Core-Script-Managment-Service.git
    expose:
      - "8004"

  paraphrase-service:
    build:
      context: https://github.com/Contexter/Paraphrase-Service.git
    expose:
      - "8005"

  performer-service:
    build:
      context: https://github.com/Contexter/Performer-Service.git
    expose:
      - "8006"

  session-context-service:
    build:
      context: https://github.com/Contexter/Session-And-Context-Service.git
    expose:
      - "8007"

  spoken-word-service:
    build:
      context: https://github.com/Contexter/Spoken-Word-Service.git
    expose:
      - "8008"

  story-factory-service:
    build:
      context: https://github.com/Contexter/Story-Factory-Service.git
    expose:
      - "8009"
```

To ensure consistency and transparency across all FountainAI services, each microservice's OpenAPI specification includes metadata about the service's repository. This makes the OpenAPI specification the single source of truth for the service.
The OpenAPI specification of each service includes a direct link to the service’s GitHub repository using the `x-repo-url` custom field.
```yaml
openapi: 3.1.0
info:
  title: FountainAI Action Service
  description: >
    The FountainAI Action Service manages actions associated with characters and spoken words.
  version: 1.0.0
  x-repo-url: https://github.com/Contexter/Action-Service
```

By centralizing repository information in the OpenAPI spec:
- Developers and automation tools can easily access the latest codebase.
- The OpenAPI spec acts as a single source of truth, ensuring consistency across all resources.
All FountainAI microservices follow this format for their OpenAPI specifications, ensuring uniformity. This makes it easy to navigate between code and documentation directly from the OpenAPI spec.
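As one illustration of that automation, a small script can pull the repository link straight out of a spec file. This is a hypothetical helper, assuming PyYAML is installed, the spec is saved as `openapi.yaml`, and `x-repo-url` sits under `info` as in the example above:

```python
# Hypothetical helper: read the x-repo-url field from a service's OpenAPI spec,
# e.g. so tooling can clone or link to the repository.
import yaml

with open("openapi.yaml") as f:  # path to the service's spec file (assumed)
    spec = yaml.safe_load(f)

print(spec["info"]["x-repo-url"])  # -> https://github.com/Contexter/Action-Service
```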
The FountainAI deployment process includes the following steps:
Clone this repository, which contains the centralized Docker Compose setup:
```bash
git clone https://github.com/Contexter/FountainAI-Compose.git
cd FountainAI-Compose
```

Run the following command to build and start all services as specified in `docker-compose.yml`:

```bash
docker-compose up --build -d
```

This command will:
- Build and run all services defined in `docker-compose.yml`.
- Set up the Docker network, enabling internal communication between services.
- Start the public-facing Ensemble Service on `localhost:8000`.
- Access the Ensemble Service at http://localhost:8000 to verify that it routes requests to internal services.
- Run tests on the centralized setup as needed to validate full-stack functionality.
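From the command line, a quick verification pass might look like this (`/docs` is the FastAPI default; `docker-compose ps` and `docker-compose logs` are standard Compose commands):

```bash
# Confirm the public-facing Ensemble Service responds.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/docs

# Inspect container status and recent logs for an internal service.
docker-compose ps
docker-compose logs --tail=20 action-service
```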
When testing or deployment is complete, you can stop the Docker containers with:
```bash
docker-compose down
```

For production, replace `build` directives with `image` directives, referencing Docker images from a container registry (e.g., Docker Hub) rather than building directly from GitHub repositories. This improves stability and speed during deployment.
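As a sketch, a production entry might then look like this; the image name and tag are assumptions, standing in for whatever your registry actually publishes:

```yaml
# Production-style service entry: pull a pre-built, versioned image from a
# registry instead of building from the GitHub repository on every deploy.
action-service:
  image: contexter/fountainai-action-service:1.0.0  # hypothetical image name/tag
  expose:
    - "8001"
```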
The FountainAI-Compose repository serves as the centralized integration point for all FountainAI services, supporting a seamless transition from local, independent development of each microservice to full-stack deployment. By centralizing the Docker Compose setup here, we enable a modular, scalable approach where each service repository remains focused on self-contained Dockerized development, while this repository manages inter-service dependencies, configuration, and deployment.
This guide ensures that FountainAI is:
- Modular: Each service can be developed and tested independently.
- Unified: All services are integrated for full-stack development and testing in FountainAI-Compose.
- Production-Ready: Supports transition to production with pre-built Docker images and a single, streamlined deployment configuration.
- OpenAPI-Driven: Each service’s OpenAPI specification acts as the single source of truth, centralizing repository links for consistency across the ecosystem.