Practicing writing Python code for generative AI
yt_streamlit: https://www.youtube.com/watch?v=RV_MihEQ4BA
pandasai: https://www.youtube.com/watch?v=BtmMNZLxbuI
gpt-engineer: https://github.com/AntonOsika/gpt-engineer https://www.youtube.com/watch?v=FPZONhA0C60
generate images with openai: https://www.geeksforgeeks.org/generate-images-with-openai-in-python/
extracting data from pdf: https://www.freecodecamp.org/news/extract-data-from-pdf-files-with-python/
How to setup and use Ollama:
- Go to https://ollama.com/download and download the installer applicable
- Run the installer with default settings
- Once installed, it should be available as a command in the terminal (or Command Prompt on Windows)
- Type: >ollama (enter). This displays the available sub-commands and options
- Type: >ollama run phi3 (enter). This will download the phi3 model and start an interactive session with it.
- Type: >ollama pull mxbai-embed-large (enter). This will download the mxbai-embed-large embedding model (embedding models are pulled rather than run interactively).
- Install python version 3.11.9 (https://www.python.org/downloads/release/python-3119/) [Version 3.12 has some changes that may cause issues with the current code]
- Add python to the 'PATH' environment variable (on Windows; may not be needed for mac/linux)
- Make sure that python is installed:
  >python (enter). This opens the Python interactive shell and displays the environment details (exit with exit())
- Create a project directory, copy the file 'ollama_embeddings.py' there, and open a terminal at that path.
- Create a virtual environment at the project path (named, for example, llm):
  python -m venv llm (enter)
- Activate the virtual environment (Windows: llm\Scripts\activate; mac/linux: source llm/bin/activate). The prompt changes to (llm)>.
- Install the requirements file at the project path within the virtual environment: (llm)> pip install -r /path/to/requirements.txt
- Inside the project path, create a source (src) folder to store the context/reference data file, and copy the file there: (llm)> mkdir src (enter)
- Execute the python code:
python ollama_embeddings.py
- If everything was set up correctly, embeddings will be generated at the project_path\embeddings\ location, and the LLM chat will activate, ready to take a question ("What do you want to know? -> ")
- Ask a relevant question and the LLM will respond.
- Depending on the system, it may take some time both to generate the embeddings and to respond to questions.
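The Python version requirement in the steps above can also be checked from code rather than by eye. A minimal sketch (the `check_python_version` helper and the 3.11 target are illustrative, taken from the version recommended in these notes):

```python
import sys

def check_python_version(required=(3, 11)):
    """Return True if the running interpreter matches the required
    major.minor version (3.11 per the setup notes above)."""
    return sys.version_info[:2] == required

# Warn, rather than fail, if the interpreter is not the recommended one
if not check_python_version():
    print(f"Warning: running Python {sys.version_info.major}."
          f"{sys.version_info.minor}; these notes recommend 3.11.9")
```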
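Once phi3 is downloaded, a script like 'ollama_embeddings.py' can talk to the Ollama server over its local REST API. A hedged sketch of such a call (the default port 11434 and the /api/generate payload shape follow Ollama's documented API; the `ask` and `build_payload` helpers are illustrative, not the actual script):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(prompt, model="phi3"):
    """Build the JSON body for Ollama's /api/generate endpoint.
    stream=False requests one complete response instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="phi3"):
    """Send a prompt to the locally running Ollama server and return
    the generated text. Requires `ollama run phi3` to have been done once."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (only works while the Ollama server is running locally):
#   print(ask("Explain embeddings in one sentence."))
```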
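The embeddings step above boils down to two ideas: asking the mxbai-embed-large model for a vector per text, and ranking stored vectors against the question's vector, typically by cosine similarity. A sketch under those assumptions (the /api/embeddings endpoint shape follows Ollama's documented API; the `embed` helper is illustrative and not the code in 'ollama_embeddings.py'):

```python
import json
import math
import urllib.request

EMBED_URL = "http://localhost:11434/api/embeddings"  # Ollama's default endpoint

def embed(text, model="mxbai-embed-large"):
    """Request an embedding vector for `text` from the local Ollama server."""
    data = json.dumps({"model": model, "prompt": text}).encode("utf-8")
    req = urllib.request.Request(
        EMBED_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: the usual score for ranking
    stored embeddings against the embedding of a question."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Example (requires the Ollama server and the mxbai-embed-large model):
#   q = embed("What do you want to know?")
#   d = embed("Some reference text from the src folder")
#   print(cosine_similarity(q, d))
```

Identical directions score 1.0 and orthogonal directions score 0.0, so the document whose embedding scores highest against the question is the best context to hand to the chat model.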