This repository contains a collection of shell scripts designed to automate the installation and configuration of popular generative AI tools and workflow automation platforms on Ubuntu. These scripts streamline the setup process, making it easy to deploy and run cutting-edge AI tools and interfaces.
Visit my blog AI Box, where I write about AI, generative AI tools, and their applications in the real world. It’s a hub for AI enthusiasts looking to explore and understand the latest advancements in artificial intelligence.
While Docker is widely recognized as a powerful and flexible tool for containerized deployments, only the install_ollama_web_ui.sh script uses Docker. For the other tools, Docker's internal network configuration made it difficult for the author to expose them over the local network to other users and PCs.
For tools like Flowise, n8n, and Ollama, the scripts use local installations without Docker to ensure they are exposed directly to your intranet, providing seamless access for all devices within the same local network.
Docker is still leveraged for the Ollama Web UI installation, where its features align perfectly with the requirements of that specific tool.
If you'd like to adapt these scripts to run all tools in Docker containers, I would be thrilled to see your contributions and ideas!
- Purpose: Prepares your Ubuntu system with essential tools and dependencies for generative AI tools and development.
- Hint: The installation methods used in these scripts assume that your system is equipped with an NVIDIA RTX graphics card such as an RTX 4090, an A6000, or better.
- Details:
- Updates and upgrades system packages.
- Installs common utilities and developer tools like `mc` and `curl`.
- Ensures the system is ready for advanced AI installations and configurations.
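The preparation steps above can be sketched as follows (the exact package list in the script may differ; `git` and `build-essential` are illustrative additions):

```shell
# Refresh package lists and upgrade installed packages
sudo apt update && sudo apt upgrade -y

# Install common utilities and developer tools
sudo apt install -y mc curl git build-essential
```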
- Purpose: Installs Ollama, a versatile platform for running and managing generative AI language models locally.
- Details:
- Installs the latest version of Ollama, preparing your system to leverage advanced AI models like LLaMA.
- Enables local deployment of AI models without relying on cloud infrastructure.
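For reference, Ollama publishes a one-line installer that a setup like this typically wraps; the model name below is just an example:

```shell
# Install Ollama via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with a model locally (downloads the model on first run)
ollama run llama3
```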
- Purpose: Deploys the Open Web UI for Ollama, providing a browser-based interface for interacting with generative AI models.
- Details:
- Installs and configures the Open Web UI with support for Ollama models.
- Makes the web interface accessible over the local network for seamless AI model management and interaction.
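A typical Docker invocation for Open WebUI talking to a locally running Ollama looks like the following sketch (port mapping and volume name follow the Open WebUI documentation and may differ from the script):

```shell
# Run Open WebUI in Docker, connected to Ollama on the host,
# with persistent data in a named volume
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```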
- Purpose: Installs and configures n8n, a powerful workflow automation tool with integrations for AI-based solutions.
- Details:
- Installs n8n globally using `npm`.
- Sets up n8n as a systemd service for persistent execution.
- Secures access with basic authentication and enables network-wide availability for AI-enhanced automation.
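A minimal setup along these lines might look like the sketch below; the unit file, user, and credentials are illustrative assumptions, not the script's exact values:

```shell
# Install n8n globally
sudo npm install -g n8n

# Create a systemd unit (illustrative values; change the credentials)
cat <<'EOF' | sudo tee /etc/systemd/system/n8n.service
[Unit]
Description=n8n workflow automation
After=network.target

[Service]
Type=simple
Environment=N8N_HOST=0.0.0.0
Environment=N8N_BASIC_AUTH_ACTIVE=true
Environment=N8N_BASIC_AUTH_USER=admin
Environment=N8N_BASIC_AUTH_PASSWORD=changeme
ExecStart=/usr/bin/env n8n start
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now n8n
```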
- Purpose: Installs and configures Flowise, a visual interface for building and managing AI workflows.
- Details:
- Installs Flowise globally using `npm`.
- Configures it as a systemd service for automatic startup.
- Runs Flowise on a specified port (`3001`) and makes it accessible over the local network.
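The Flowise setup can be sketched similarly; the unit file below is an illustrative assumption, with `PORT=3001` matching the port mentioned above:

```shell
# Install Flowise globally
sudo npm install -g flowise

# Create a systemd unit so Flowise starts automatically (illustrative)
cat <<'EOF' | sudo tee /etc/systemd/system/flowise.service
[Unit]
Description=Flowise AI workflow builder
After=network.target

[Service]
Type=simple
Environment=PORT=3001
ExecStart=/usr/bin/env npx flowise start
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now flowise
```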
Purpose: Installs OpenAI's Whisper project for offline speech-to-text transcription.
Details:
- Sets up Whisper in an isolated Python virtual environment for offline usage.
- Downloads and installs dependencies locally.
- Fetches and configures the Whisper base model for transcription tasks.
- Ensures the system is ready for running Whisper without requiring an internet connection.
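A Whisper setup along these lines could look like the sketch below (paths, the audio file, and the model choice are examples; `ffmpeg` is required by Whisper for audio decoding):

```shell
# Create and activate an isolated virtual environment
python3 -m venv ~/whisper-venv
source ~/whisper-venv/bin/activate

# Install ffmpeg (system dependency) and Whisper
sudo apt install -y ffmpeg
pip install -U openai-whisper

# Transcribe a file with the base model; the model is downloaded
# on first use and cached, so later runs work offline
whisper recording.mp3 --model base
```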
Purpose: Installs Automatic1111 (Stable Diffusion web UI) for image generation.
Details:
- Sets up Automatic1111 in an isolated Python virtual environment with Python 3.11.
- Downloads and installs dependencies locally.
- Fetches and configures a Stable Diffusion base model for image generation.
- Ensures the system can run Automatic1111 without an internet connection and without having to start it manually.
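Installation typically follows the upstream AUTOMATIC1111 repository; the sketch below uses the upstream clone URL and the `--listen` flag to expose the UI on the local network (the exact steps in the script may differ):

```shell
# Clone the Stable Diffusion WebUI
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui

# First launch creates a Python virtual environment, installs
# dependencies, and downloads a base model if none is present
./webui.sh --listen
```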
This script automates the deployment of Crawl4AI on Ubuntu servers using Docker and Docker Compose. It performs the following tasks:
System Setup: Updates your system and installs necessary tools such as curl, git, Docker, and Docker Compose.
Repository Management: Clones (or updates) the official Crawl4AI repository into /opt/crawl4ai, ensuring all required files (e.g., Dockerfile, requirements.txt) are present.
Configuration Patching: Overrides the default docker-compose.yml with a patched version that removes the conflicting port mapping for 8080 (used by other tools like Ollama Web UI) so that Crawl4AI runs exclusively on port 11235 (and other necessary ports).
Container Deployment: Builds the Docker image for AMD64 using the local-amd64 profile and starts the container, making Crawl4AI accessible at `http://<server-ip>:11235`.
How to Update Crawl4AI: To update Crawl4AI to the latest version, simply re-run the script—it will pull the latest changes from the repository and rebuild the container automatically.
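The deployment and update steps described above correspond roughly to these commands (a sketch; `/opt/crawl4ai` and the `local-amd64` profile are taken from the description above):

```shell
cd /opt/crawl4ai
git pull

# Build the AMD64 image and start the container in the background
docker compose --profile local-amd64 up -d --build

# Check that the Crawl4AI container is up on port 11235
docker compose ps
```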
Usage: Run the following command in your terminal on your Ubuntu server:

```shell
sudo ./install_crawl4AI.sh
```
This script is included alongside other installation scripts in this repository, providing an easy, one-command setup for generative AI tools.
- Clone the repository to your Ubuntu system:

  ```shell
  git clone https://github.com/<your-username>/<repository-name>.git
  cd <repository-name>
  ```
- Make the scripts executable:

  ```shell
  chmod +x *.sh
  ```

- Run the desired script:

  ```shell
  ./install_Flowise.sh
  ```
After modifying or updating any of the service-related scripts (e.g., changing ports or configurations), it’s important to ensure the changes are applied to the respective systemd service. Use the following commands:
- Reload the systemd manager configuration:

  ```shell
  sudo systemctl daemon-reload
  ```
This ensures that systemd recognizes any changes made to the service files.
- Restart the specific service:

  ```shell
  sudo systemctl restart flowise
  ```
Replace `flowise` with the name of the service you are restarting (e.g., `n8n` or `ollama`).
- Check the status of the service:

  ```shell
  sudo systemctl status flowise
  ```
This displays the current status of the service, including whether it is running, any errors encountered, and recent log output.
By following these steps, you can ensure that the latest changes to your service configurations take effect.
Contributions are welcome! If you discover any issues or have ideas for improvement, feel free to open an issue or submit a pull request.
This project is licensed under the MIT License. See the LICENSE file for details.