Using llama.cpp via llama-cpp-python
- Install VSCode: Download VSCode.
- Install Git: Download Git.
- Clone the Repository:
  - Open VSCode.
  - Open the Command Palette (`Ctrl + Shift + P` or `Cmd + Shift + P`).
  - Type `Git: Clone` and enter: `https://github.com/DrDavidL/local-llm.git`
  - Select a folder to clone to.
  - Open the cloned repository in VSCode.
- Set Up the Python Virtual Environment:
  - Open the integrated terminal in VSCode (`Ctrl + Shift + ~` or `Cmd + Shift + ~`).
  - Navigate to the project directory: `cd local-llm`
  - Create a virtual environment: `python -m venv venv`
  - Activate the virtual environment:
    - Windows: `.\venv\Scripts\activate`
    - MacOS/Linux: `source venv/bin/activate`
  - Install required packages: `pip install -r requirements.txt`
- Open the IPython (Jupyter) notebook: Open the notebook file (`local.ipynb`).
- Select the Python Interpreter: Ensure VSCode is using the Python interpreter from your virtual environment. When you try to run a cell, you'll be prompted to choose.
  - Select the interpreter from the `venv` directory (e.g., `./venv/bin/python` for MacOS/Linux or `.\venv\Scripts\python.exe` for Windows).
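To confirm the notebook picked up the venv's interpreter, you can run a quick check in a cell (a minimal sketch; the exact path printed will vary by OS and install location):

```python
import sys

# Print the interpreter the notebook kernel is running under.
# It should point inside your project's venv directory,
# e.g. ./venv/bin/python (MacOS/Linux) or .\venv\Scripts\python.exe (Windows).
print(sys.executable)
```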
If you have questions: see guides on cloning a GitHub repository 🛠 and setting up a Python virtual environment.
More info: the VSCode Python tutorial 💻 and virtual-environment best practices.
- Download one or two LLMs from HuggingFace. These options will run even on machines that are a few years old. After you download, replace the paths used in the notebook cells with the paths to your own downloaded GGUF-formatted files:
  - https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-GGUF
  - https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF
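If you'd rather script the download than use the browser, the `huggingface_hub` package can fetch a single GGUF file (install it first with `pip install huggingface_hub` if it isn't already present). The quantization filename below is an assumption; browse the repo's file list on HuggingFace and copy the exact name you want:

```python
from huggingface_hub import hf_hub_download

# Download one quantized GGUF file from the model repo into ./models.
# The filename is an assumption -- check the repo's "Files" tab for
# the exact name of the quantization you want (Q4_K_M is a common choice).
model_path = hf_hub_download(
    repo_id="bartowski/Phi-3-mini-4k-instruct-GGUF",
    filename="Phi-3-mini-4k-instruct-Q4_K_M.gguf",
    local_dir="models",
)
print(model_path)  # use this path in the notebook cells
```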
- Run the Notebook cells
  - Open your output file, `output.md`, by right-clicking it and choosing Preview. Move the preview to a vertical pane to the right of your notebook. This way you can see the output as it emerges!
  - Run the cells of your notebook.

Congrats! You're interacting with an AI model running locally on your device!