`README.md` contains:
- Setup instructions
- Project details (tools, project structure)

`PROGRESS.md` includes:
- Task at hand
- Some context
- Extra challenges
- Next steps

Note: Read the `PROGRESS.md` file every time you switch branches.
- Git (to clone the repository)
- Docker Desktop
- Python (if not using Docker)
- GroqCloud account and API key (free access to Llama 3)
This workshop will be easier for you if you're familiar with:
- Django
- HTMX
- OpenAI's API
- TailwindCSS
I tried to keep things simple, but there is a lot to cover in only 50 minutes.
- Clone the repository: `git clone https://github.com/scriptogre/functional-chatbots.git`
- Rename `.env.example` to `.env`
- Update `GROQ_API_KEY` with your GroqCloud API key
- Run `docker compose up` to start the project
- Open your browser at `http://localhost:8000`
Warning: This project is unconventional. Enjoy the ride!
Wait, what?! You want to render templates with Django-Ninja?
Why not?
- It's less verbose, with intuitive syntax inspired by FastAPI.
- It's more performant, thanks to Pydantic and async support.
- It's still Django, so we can benefit from the included batteries when needed.
Besides, it uses Pydantic.
Instructor also uses Pydantic. This will come in handy later.
We'll use htmx to easily add interactivity to our project, like updating chat messages, or creating/updating/deleting pizza orders - without writing any JavaScript.
Grug, from The Grug Brained Developer by Carson Gross (creator of htmx). Love the article.

> complexity bad
We'll use JinjaX in our templates, an experimental project that's essentially Jinja2 with JSX-like syntax for components.
Because paired with htmx, we can do stuff like:
```html
<ChatContainer
    hx-get="/chat-messages"
    hx-trigger="chatMessagesUpdated from:body"
>
    <ChatMessage role="user">
        I personally love the simplicity of templates with JinjaX.
    </ChatMessage>
</ChatContainer>
```
Which is a joy to read and write.
Most importantly, it keeps behaviour (`hx-*` attributes) explicit, while abstracting away structure.
I've written a blog post about JinjaX, if you're curious.
Similar projects include:
- django-components
- slippers
- django-template-partials

(Hi Carlton! Love your projects 💚)
We'll use TailwindCSS for styling.
Because paired with JinjaX, we can do stuff like:
```html
<ChatContainer class="group">
    ...
    <ChatPlaceholder class="group-has-[.chat-message]:hidden" />
</ChatContainer>
```
Which is very expressive.
We can hide classes that are part of the component, while keeping context-specific classes visible.
By creating custom variants (like `hover:` or `dark:`), we can also do stuff like this:
```html
<!-- This shows only when the assistant generates responses -->
<ChatMessage class="hidden htmx-request-on-[#trigger-assistant]:block">
    Typing...
</ChatMessage>
```
CSS is very powerful nowadays.
Smooth transitions, animations, and even conditional display can be achieved with it (e.g. `group-has-[.chat-message]:hidden`).
TailwindCSS makes it easier to harness that power.
| Tools | Less JavaScript |
|---|---|
| htmx | 80% |
| htmx + TailwindCSS | 99% |
We'll use GroqCloud's free API to interact with Llama 3 70B, an open model.
It's faster than any other LLM API I've used. Other services like OpenAI, Anthropic, or Google Gemini are paid, and I didn't want you to pay for a workshop.
Their free tier offers 30 requests per minute. That's 1 request every 2 seconds.
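If you want to stay inside that quota, a tiny stdlib-only helper can pace your calls. This is my own sketch for illustration, not part of the workshop code:

```python
import time

def min_interval_seconds(requests_per_minute: int) -> float:
    """Smallest safe gap between calls for a per-minute quota."""
    return 60 / requests_per_minute

def paced(calls, requests_per_minute: int = 30):
    """Run each zero-argument callable, sleeping enough to respect the quota."""
    interval = min_interval_seconds(requests_per_minute)  # 2.0 s at 30 rpm
    results = []
    for call in calls:
        start = time.monotonic()
        results.append(call())
        # Sleep off whatever time the call itself didn't use up
        time.sleep(max(0.0, interval - (time.monotonic() - start)))
    return results
```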
Instructor is a Python library that does the heavy lifting for getting structured responses from LLMs.
It has support for Groq's API, and it will save us from a lot of effort (and boilerplate).
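As a rough sketch of the pattern (the `PizzaOrder` schema and prompt are made up for illustration), Instructor validates the LLM's response straight into a Pydantic model. The live call is commented out here since it needs a real API key:

```python
from pydantic import BaseModel

class PizzaOrder(BaseModel):
    size: str
    toppings: list[str]

# With a GroqCloud key, Instructor patches the client so responses are
# parsed and validated into the Pydantic model:
#
#   import instructor
#   from groq import Groq
#
#   client = instructor.from_groq(Groq())
#   order = client.chat.completions.create(
#       model="llama3-70b-8192",
#       response_model=PizzaOrder,
#       messages=[{"role": "user", "content": "One large mushroom pizza"}],
#   )

# The same schema validates plain data locally:
order = PizzaOrder(size="large", toppings=["mushrooms"])
```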
This was my first workshop.
I'd love to hear your thoughts. I'd appreciate knowing whether I should pursue this further or stop wasting people's time.
...please 👉👈