> **Important:** This project is a work in progress and is not yet operational.
The GPT Action R Interpreter project is designed as a specialized component to enhance ChatGPT's capabilities in providing high-quality R code solutions. This project establishes a unique interface, solely intended to be invoked by a GPT Action, for executing and evaluating R code. It aims to mimic the functionality of ChatGPT's Python Code Interpreter, but with a focus on the R programming language, widely recognized for its application in statistics and data analysis.
- GPT Action Integration: Seamlessly integrated with a custom ChatGPT GPT Action, allowing the GPT model to send R code snippets to the interpreter for execution and evaluation.
- R Code Evaluation: Features an R interpreter capable of executing R code and returning results, which is key to evaluating the R solutions generated by ChatGPT.
- FastAPI Backend: Built on FastAPI for high performance, scalability, and efficiency in handling requests from the GPT model (see the example request after the lists below).
- Enhance ChatGPT's R Proficiency: Aims to help ChatGPT improve its ability to generate high-quality R solutions.
- Real-Time Code Execution: Offers a real-time execution environment for R code, allowing ChatGPT to validate its responses dynamically.
- Feedback Loop for AI Improvement: Creates a feedback mechanism where the results of R code execution inform and enhance ChatGPT's future responses in R.
- Quality Control for R Solutions: Primary use case is to evaluate ChatGPT's R code solutions, ensuring they are correct, efficient, and applicable.
- AI Training and Improvement: Feedback from the R interpreter can be used to fine-tune ChatGPT's approach to solving R-related queries.
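To make the integration concrete, the GPT Action calls the FastAPI backend over HTTP with an R snippet to execute. The request below is a hypothetical sketch: the endpoint path and JSON fields are illustrative placeholders, not this project's actual API, which is defined by its FastAPI routes.

```bash
# Hypothetical request from a GPT Action to the interpreter service.
# The /execute path and the "code" field are illustrative assumptions,
# not this project's documented API.
curl -X POST http://localhost:8000/execute \
  -H "Content-Type: application/json" \
  -d '{"code": "summary(mtcars$mpg)"}'
```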
Looking ahead, the project aspires to expand its capabilities, including handling more complex R scenarios and integrating more deeply with ChatGPT's learning algorithms. The goal is to continually elevate the quality of R code solutions provided by ChatGPT, making it a more robust and reliable tool for users seeking assistance with R programming challenges.
The renv package is used to manage the R dependencies of this project.
To add a new R package to this project, launch a local R session and run the following:
```r
renv::install("<pkg-name>")
renv::snapshot()
```
This will download the package to the local cache, but more importantly, it will also update the `renv.lock` file after `renv::snapshot()` is called. Once the Docker image is rebuilt, the new container will install any new dependencies listed in `renv.lock`.
Note: the version of the local R installation must match the version pinned in the Dockerfile (R-4.2.1) for the updated `renv.lock` file to be installed correctly when the new image is built.
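To avoid a version mismatch, you can check which R version your local session uses before running `renv::snapshot()`:

```bash
# Confirm the local R version matches the one pinned in the Dockerfile (R-4.2.1)
R --version
```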
Docker provides a convenient and consistent environment for your application, ensuring it runs the same way on every machine. In the context of this project, Docker is particularly useful for local development. Here’s why:
Using Docker locally allows developers to replicate the production environment on their own machines. This ensures that everyone on the team works in a consistent environment, minimizing the "it works on my machine" problem. However, for production deployment, this project leverages automated workflows (as detailed in the deployment section), which handle the complexities and nuances of production environments more effectively.
To build a Docker image of the application, use the following command:
```bash
docker build -t app .
```
Once the image is built, you can run it as a container like so:
```bash
docker run -p 8000:8000 app
```
To access the shell of a running container, use the following command:
```bash
docker exec -it [container_name_or_id] bash
```
This can be useful to test that the environment is configured correctly.
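For a quick smoke test of a running container, you can also hit the published port from the host. The request below assumes FastAPI's default interactive documentation route (`/docs`) has not been disabled:

```bash
# Check that the service responds on the published port.
# /docs is FastAPI's default Swagger UI route; adjust if it is disabled in this app.
curl -i http://localhost:8000/docs
```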
Deployment is managed via GitHub Actions. Each time a new branch is merged into `main`, this application will automatically be deployed to a DigitalOcean droplet. Please refer to `.github/workflows/deploy.yml` for more details on this process.
To facilitate the deployment of this application, a DockerHub account is required. DockerHub serves as a cloud-based registry service to host Docker images, essential for the easy and efficient deployment of your application to DigitalOcean.
- Accessibility: Centralized hosting for Docker images, simplifying the deployment process.
- Version Control: Maintain and manage different versions of your Docker images.
- CI/CD Integration: Allows for automated building, testing, and deployment of images in your CI/CD pipeline.
- Account Creation: Sign up at DockerHub.
- Repository Setup: Create a new repository on DockerHub for storing your Docker images.
- CI/CD Integration: Connect your DockerHub account with your CI/CD tools for automated image handling.
- Building Images: Build your Docker image and tag it appropriately (`username/repository:tag`).
- Pushing Images: Use `docker push` to upload your Docker image to DockerHub (see the example after this list).
- Deployment: The CI/CD pipeline automates the deployment process by pulling the latest image from DockerHub to your DigitalOcean droplet.
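A minimal sketch of the tag-and-push sequence; the repository name and tag below are placeholders rather than this project's actual DockerHub repository:

```bash
# Log in, tag the locally built image, and push it to DockerHub.
# Replace the username, repository, and tag with your own.
docker login
docker tag app your-dockerhub-username/your-repository:latest
docker push your-dockerhub-username/your-repository:latest
```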
Remember to keep your DockerHub credentials secure and handy for use in CI/CD integration and Docker operations.
For seamless deployment and management of Docker containers on your DigitalOcean droplet, ensure that your user is correctly configured to interact with Docker:
- Add User to Docker Group: Run the following command on your droplet to add your user to the Docker group. This grants the user permission to execute Docker commands without needing `sudo`.

  ```bash
  sudo usermod -aG docker your-username
  ```
- Generate SSH Key Pair (if not already done):
  - Use `ssh-keygen` to generate a new key pair. This is done on your local machine or any secure environment.
  - Remember, your private key (`id_rsa`) is confidential and must not be exposed.

  ```bash
  ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
  ```
- Add the Public Key to your DigitalOcean Droplet:
  - Log in to your DigitalOcean account.
  - Access the droplet you want to deploy to.
  - Add the public key (`id_rsa.pub`) to your droplet's authorized keys. This can be done through the DigitalOcean control panel or by manually adding it to the `~/.ssh/authorized_keys` file on your droplet.
- Test SSH Connection Manually:
  - Log in to your DigitalOcean droplet via SSH to confirm the configuration was successful (see the example below).
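  - For example (placeholder user and address):

  ```bash
  # Replace the placeholders with your droplet user and IP address.
  ssh your-username@your-droplet-ip
  ```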
- Configure GitHub Actions Secret:
  - Go to your GitHub repository's settings.
  - Add a new secret named `DROPLET_SSH_KEY`.
  - The value of this secret should be the entire content of your private key (`id_rsa`). Ensure you include the entire key, including any headers and footers (like `-----BEGIN RSA PRIVATE KEY-----` and `-----END RSA PRIVATE KEY-----`).
- GitHub Actions Workflow:
  - In your GitHub Actions workflow file, use the `DROPLET_SSH_KEY` secret to establish an SSH connection to your droplet.
  - This workflow uses `appleboy/ssh-action@master` to execute commands on your droplet via SSH.
  - The private key (referenced from the secret `DROPLET_SSH_KEY`) is used to authenticate the GitHub Actions runner with your droplet.
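For orientation, the commands executed on the droplet over SSH typically follow a pull-and-restart pattern. The sequence below is an illustrative sketch with placeholder image and container names; the authoritative steps are in `.github/workflows/deploy.yml`:

```bash
# Illustrative deploy sequence (names are placeholders, not taken from deploy.yml).
docker pull your-dockerhub-username/your-repository:latest
docker stop app || true   # stop the old container if it is running
docker rm app || true     # remove it so the container name can be reused
docker run -d --name app -p 8000:8000 your-dockerhub-username/your-repository:latest
```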
The following GitHub secrets must be defined in order for this CI/CD process to work:
- `DOCKERHUB_USERNAME`
- `DOCKERHUB_TOKEN`
- `DROPLET_IP`
- `DROPLET_SSH_KEY`
- `DROPLET_USER`
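These can be added through the repository's settings page as described above; alternatively, if the GitHub CLI is installed and authenticated, `gh secret set` can define them from the command line (all values below are placeholders):

```bash
# Define the required repository secrets with the GitHub CLI (placeholder values).
gh secret set DOCKERHUB_USERNAME --body "your-dockerhub-username"
gh secret set DOCKERHUB_TOKEN --body "your-dockerhub-access-token"
gh secret set DROPLET_IP --body "203.0.113.10"
gh secret set DROPLET_USER --body "your-username"
gh secret set DROPLET_SSH_KEY < ~/.ssh/id_rsa
```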
TODO
- DNS Config
- Reverse Proxy via apache2 or nginx?
- Testing