
LLM Quickstart

A few minimal examples for tinkering around with LLMs.

  • Docker: Deploys a server to run Llama Index locally.

    • Be sure Docker is running (on Windows, launch Docker Desktop).
    • To deploy: docker compose up -d.
    • To tear down: docker compose down.
    • To recreate (sometimes necessary if you modify the Dockerfile): docker compose up --build --force-recreate.
  • API: Examples for calling the OpenAI API and Hugging Face's Inference API.
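
The Docker steps above assume a compose file at the repository root. A minimal sketch of what such a file might look like (the service name, build context, and port are illustrative assumptions, not taken from this repository):

```yaml
# Hypothetical docker-compose.yml sketch; service name and port
# are assumptions for illustration only.
services:
  llm:
    build: .            # build the image from the local Dockerfile
    ports:
      - "8000:8000"     # expose the server on localhost:8000
```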
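
As a rough sketch of the API side, a minimal chat call with the official openai Python package might look like the following (the model name and prompt are assumptions; an OPENAI_API_KEY environment variable is required to actually run it):

```python
import os

def ask(prompt: str) -> str:
    """Send a single chat prompt to the OpenAI API and return the reply text."""
    # Import deferred so this file loads even without the openai package installed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption; swap in any chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask("Say hello in one word."))
```

The network call only fires when an API key is present, so the file can be imported and inspected safely.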

Examples and Use Cases

Alternatives to running models locally