Private GPT is an AI-powered conversational agent that can answer questions based on text data extracted from PDF files. Follow the instructions below to install and run the Python script.
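At a high level, the script extracts text from your PDF files, combines it with your question, and sends the resulting prompt to the language model configured in the `.env` file. The sketch below only illustrates that idea and is not the code in `private-gpt.py`; it assumes the `pypdf` package, and the function and directory names are hypothetical.

```python
# Illustrative sketch only -- not the actual private-gpt.py implementation.
# Assumes the pypdf package; function and directory names are hypothetical.
from pathlib import Path

from pypdf import PdfReader


def extract_pdf_text(pdf_dir: str) -> str:
    """Concatenate the text of every PDF found in pdf_dir."""
    chunks = []
    for pdf_path in sorted(Path(pdf_dir).glob("*.pdf")):
        reader = PdfReader(str(pdf_path))
        chunks.extend(page.extract_text() or "" for page in reader.pages)
    return "\n".join(chunks)


def build_prompt(question: str, context: str) -> str:
    """Combine the user's question with the extracted PDF text."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```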
- Clone the repository:

  ```bash
  git clone https://github.com/wilramdhani/private-gpt.git
  ```
- Navigate to the project directory:

  ```bash
  cd private-gpt
  ```
- Create and activate a virtual environment (optional but recommended):

  ```bash
  python3 -m venv venv
  source venv/bin/activate      # For Linux/Mac
  # Or: venv\Scripts\activate   # For Windows
  ```
- Install the required dependencies using pip:

  ```bash
  pip install -r requirements.txt
  ```
- Create a `.env` file in the project directory and add the following content:

  ```
  LLM_URL=xxxxxxx
  ```

  Replace `xxxxxxx` with the actual URL of your language model endpoint (the sketch at the end of this README shows how the script might read this value).
- Ensure that you have placed the PDF files you want to process in a directory.
- Open a terminal or command prompt.
- Navigate to the project directory:

  ```bash
  cd path/to/private-gpt
  ```
- Run the Python script:

  ```bash
  python private-gpt.py
  ```
- Enter your questions when prompted. The script will provide responses based on the content of the PDF files.
- Type `exit` or `quit` to stop the script.
- You can customize the behavior of the chatbot by modifying the Python script (`private-gpt.py`) according to your requirements.
- Remember to update the LLM URL in the `.env` file if necessary.
- If you encounter any issues or have questions, feel free to open an issue on the GitHub repository.
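For reference, here is a minimal sketch of how a script like this might load `LLM_URL` from the `.env` file and implement the `exit`/`quit` loop described above. It assumes the `python-dotenv` and `requests` packages and a hypothetical JSON endpoint that accepts a `prompt` field and returns a `response` field; the real `private-gpt.py` may differ.

```python
# Sketch of the configuration and question loop -- not the actual private-gpt.py.
# Assumes the python-dotenv and requests packages and a hypothetical JSON API.
import os

import requests
from dotenv import load_dotenv


def main() -> None:
    load_dotenv()  # reads LLM_URL from the .env file into the environment
    llm_url = os.getenv("LLM_URL")
    if not llm_url:
        raise SystemExit("LLM_URL is not set; add it to your .env file.")

    while True:
        question = input("Ask a question (or type 'exit'/'quit'): ").strip()
        if question.lower() in {"exit", "quit"}:
            break
        # Hypothetical request/response shape; adjust to your model's API.
        resp = requests.post(llm_url, json={"prompt": question}, timeout=60)
        resp.raise_for_status()
        print(resp.json()["response"])


if __name__ == "__main__":
    main()
```

To ground the answers in your documents, the prompt sent to the model would also include the text extracted from your PDFs, as in the sketch near the top of this README.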