
HyperLoom is a fictional-world browsing application




Logo

HyperLoom

Deployed Front end · Deployed Back end · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Database Schema
  4. Endpoints
  5. Testing
  6. Roadmap
  7. License
  8. Contact

About The Project

Hyperloom Home Page

HyperLoom is a web-based application that leverages ChatGPT and Midjourney to provide users with new and expansive fictional worlds. Users can browse previously generated worlds or create new ones with the click of a button. HyperLoom aims to foster its users' imagination and excitement while providing high-resolution images for an immersive experience.

HyperLoom was built with a separate frontend and backend. The backend API service exposes RESTful endpoints that return JSON for the frontend to consume. The backend seeds its database with a script that calls the ChatGPT API to generate textual metadata describing an imaginary world; that metadata is then used to build the prompt sent to the Midjourney API, which generates images based on those descriptions.
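The core of the seeding flow, turning generated world metadata into an image prompt, can be sketched roughly as below. The function and field names here are illustrative assumptions, not the actual service code or schema:

```python
# Hypothetical sketch of the seeding step that folds ChatGPT-generated
# world metadata into a Midjourney prompt. Field names are illustrative.

def build_image_prompt(world: dict) -> str:
    """Combine AI-generated world metadata into one prompt string."""
    parts = [world["name"], world["description"]]
    parts.extend(world.get("themes", []))
    # Midjourney prompts are typically comma-separated descriptive fragments.
    return ", ".join(parts)


world = {
    "name": "Aurelia",
    "description": "a floating archipelago of brass cities",
    "themes": ["steampunk", "golden light"],
}
print(build_image_prompt(world))
# → Aurelia, a floating archipelago of brass cities, steampunk, golden light
```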

(back to top)

Built With

  • Python
  • Django
  • Django Rest Framework
  • Pytest
  • SQLite
  • GitHub Actions
  • Heroku

Integrations

  • OpenAI API
  • Midjourney API

(back to top)

Getting Started

To get a local copy of HyperLoom up and running, follow the steps below for the backend.

These instructions cover only the backend. To set up the frontend locally, follow the instructions in the frontend repository's README.md file.

Prerequisites

Python 3 and pip installed on your machine.

Installation

  1. Create a virtual environment
    python -m venv hyperloom
  2. Activate the virtual environment
    source hyperloom/bin/activate
  3. Clone the repo inside the virtual environment directory
    git clone https://github.com/The-Never-Ending_Story/back-end.git
  4. Install python packages from requirements.txt
    python -m pip install -r requirements.txt
  5. Make migrations
    python manage.py makemigrations
  6. Run migrations
    python manage.py migrate
  7. Create an admin super user with your own username and password
    python manage.py createsuperuser
  8. Run the server
    python manage.py runserver
  9. Visit http://localhost:8000

Additional Instructions

  1. Get an OpenAI API key
  2. Get a Midjourney API key
  3. Create a .env file and set environment variables for both keys
# .env file
OPENAI_API_KEY=<YOUR OPENAI API KEY>
MIDJ_API_KEY=<YOUR MIDJOURNEY API KEY>
  4. Run the script for the world generator service
python services/world_generator.py
  5. Deploy to Heroku
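The world generator script needs both keys at runtime. One way it might read them (a sketch, assuming the keys are exported in the environment or loaded from .env before the script runs; `get_api_keys` is a hypothetical helper, not the actual service code):

```python
import os

# Sketch: fail fast with a clear error if a key is missing, rather than
# crashing mid-run on the first API call.
def get_api_keys() -> tuple[str, str]:
    """Return the OpenAI and Midjourney API keys from the environment."""
    try:
        return os.environ["OPENAI_API_KEY"], os.environ["MIDJ_API_KEY"]
    except KeyError as missing:
        raise RuntimeError(f"Missing environment variable: {missing}") from None
```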

(back to top)

Database Schema

Database Schema

Endpoints

Prefix all endpoints with the deployed backend API domain: https://hyperloom-d209dae18b26.herokuapp.com

API documentation is written with Swagger (linked below) and follows the OpenAPI Specification from the OpenAPI Initiative. All API responses return JSON.

Hyperloom API Documentation in Swagger
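Prefixing a path with the deployed domain can be done like this; the `/worlds` path is a hypothetical example, so check the Swagger docs for the real endpoint paths:

```python
# Sketch of building a full endpoint URL for the deployed API.
# The /worlds path below is hypothetical; see the Swagger docs.
from urllib.parse import urljoin

BASE_URL = "https://hyperloom-d209dae18b26.herokuapp.com"

def endpoint(path: str) -> str:
    """Prefix an endpoint path with the deployed backend API domain."""
    return urljoin(BASE_URL, path)

print(endpoint("/worlds"))
# → https://hyperloom-d209dae18b26.herokuapp.com/worlds
```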

Testing

To run the Pytest test suite, follow these steps:

  1. Activate your virtual environment
  2. Navigate to the application root directory where the pytest.ini file is
  3. Run:
    pytest
  4. Open the Pytest coverage report with:
    open htmlcov/index.html

Use the built-in Python debugger (pdb) by adding the following line of code where you want a breakpoint:

import pdb; pdb.set_trace()

Coverage report:

Pytest Coverage Report 1 Pytest Coverage Report 2

Roadmap

Potential features, functionality, or refactors for the future:

  • Add a background worker to Heroku to continuously run the world generator script to generate and seed more data in the database
  • Additional tests for endpoints
  • User features to save and share favorite worlds
  • Search features to find worlds

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

  • Andrew Bowman: LinkedIn GitHub
  • Sean Cowans: LinkedIn GitHub
  • Branden Ge: LinkedIn GitHub

Special thanks to Brian Zanti, our instructor and project manager.

(back to top)