langchain_playground

Playing around with langchain.

Dev Setup & Running

I use a couple of terminal windows, each with a standard command (this all assumes you are in the project folder):

# Follow the application log 
less +F data/logs/app.log

# Follow the Ollama server log
less +F ~/.ollama/logs/server.log

To run a script I use the module syntax: python -m rag.simple_chat.

History

2024-04-20 Configurable vectorstores

  • Vectorstores are now defined in the config file
  • Each configuration also contains the model and the directory with the data to ingest (a sketch of such a config follows this list)
  • Use a
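
As an illustration only, a configuration entry might look roughly like the sketch below; the name "a12_docs", the keys, and the paths are hypothetical, not the actual schema used in this repo:

# Hypothetical sketch of a named vectorstore configuration.
# All names, keys, and paths are made up for illustration.
VECTORSTORE_CONFIGS = {
    "a12_docs": {
        "embedding_model": "nomic-embed-text",        # Ollama embedding model
        "persist_directory": "data/vectorstores/a12_docs",
        "ingest_directory": "data/raw/a12_docs",       # markdown files to ingest
    },
}

def get_vectorstore_config(name: str) -> dict:
    """Look up one named vectorstore configuration."""
    return VECTORSTORE_CONFIGS[name]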

2024-04-20 Debugging LangChain chains

  • Direct all logging into a file so it doesn't interfere with my dialog in the terminal (a sketch of the setup follows).
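
A minimal sketch of file-only logging, assuming the data/logs/app.log path from the Dev Setup section; the level and format are illustrative and may differ from what the repo actually configures:

import logging

# Send all log output to the file followed with `less +F` above and keep
# the terminal free for the chat dialog (no StreamHandler is added).
logging.basicConfig(
    filename="data/logs/app.log",
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

# LangChain's own loggers then also write to the file instead of the terminal.
logging.getLogger("langchain").setLevel(logging.DEBUG)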

2024-04-06 Doing RAG tutorial

2024-04-08 Ingesting

  • Trying to ingest the A12 documentation, ~ 3'800 markdown docs (a sketch of the ingestion code is below the timings).
  • Trying to ingest with llama2:latest. Takes long:
    • 3456/19666 [30:02<2:23:58, 1.88it/s]
    • Expected time > 3 hours
  • With default embedding model ()
    • 1224/19666 [10:28<2:46:50, 1.84it/s]
    • Expected time > 3 hours
  • With embedding model nomic-embed-text
    • 15 minutes!
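
A hedged sketch of what this ingestion could look like with LangChain and the nomic-embed-text embeddings served by Ollama; the paths, chunk sizes, and the choice of Chroma as the vectorstore are assumptions, not necessarily what this repo does:

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma

# Load all markdown files from the ingest directory (path is hypothetical).
loader = DirectoryLoader(
    "data/raw/a12_docs",
    glob="**/*.md",
    loader_cls=TextLoader,
    show_progress=True,
)
docs = loader.load()

# Split into overlapping chunks; the sizes are illustrative.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# Embed with the fast nomic-embed-text model and persist to a local Chroma store.
embeddings = OllamaEmbeddings(model="nomic-embed-text")
Chroma.from_documents(
    chunks,
    embedding=embeddings,
    persist_directory="data/vectorstores/a12_docs",
)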

Tech reading