A simple repo covering everything I learn about LLMs, hands-on projects only.
- LLM JSON Fixer: Repairs broken JSON responses from any LLM.
- Evaluation Metrics: Compares LLM-generated text against human-written references using metrics like ROUGE, BLEU, and BERTScore.
- Chunk Summarizer: Summarizes text that exceeds the context window of an LLM by splitting it into chunks.
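As a taste of what the JSON Fixer does, here is a minimal sketch of common repair heuristics (stripped code fences, trailing commas, single quotes). The heuristics shown are illustrative, not the repo's actual implementation:

```python
import json
import re

def repair_json(raw: str) -> dict:
    """Best-effort repair of a broken JSON response from an LLM.

    Handles a few common failure modes: markdown code fences,
    trailing commas, and single-quoted keys/values.
    """
    text = raw.strip()
    # Strip markdown code fences like ```json ... ```
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text)
    # Remove trailing commas before a closing brace/bracket
    text = re.sub(r",\s*([}\]])", r"\1", text)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Naive fallback: swap single quotes for double quotes and retry
        # (breaks on apostrophes inside strings -- a real fixer needs more care)
        return json.loads(text.replace("'", '"'))

broken = "```json\n{'name': 'gpt', 'tags': ['llm', 'json',],}\n```"
fixed = repair_json(broken)  # {'name': 'gpt', 'tags': ['llm', 'json']}
```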
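To make the evaluation metrics concrete, here is a from-scratch ROUGE-1 F1 (unigram overlap between generated and reference text). Libraries like `rouge-score` add stemming and the other variants (ROUGE-2, ROUGE-L); this is just the core idea:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between candidate and reference text."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clipped overlap: each word counts at most as often as in the reference
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat is on the mat")
# 5 of 6 unigrams overlap in each direction, so F1 = 5/6
```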
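The Chunk Summarizer follows the usual map-reduce pattern: split the text into windows that fit the context limit, summarize each, then summarize the summaries. A sketch, using word count as a rough proxy for tokens (real pipelines count tokens with the model's tokenizer); `summarize` stands in for a hypothetical LLM call:

```python
def chunk_text(text: str, max_words: int, overlap: int = 0) -> list[str]:
    """Split text into word windows of at most max_words, with optional overlap."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), step)]

def summarize_long(text: str, summarize, max_words: int = 1000) -> str:
    """Map-reduce summarization: summarize chunks, then the joined summaries.

    `summarize` is a placeholder for an actual LLM call.
    """
    chunks = chunk_text(text, max_words, overlap=50)
    if len(chunks) == 1:
        return summarize(chunks[0])
    partials = [summarize(c) for c in chunks]
    return summarize(" ".join(partials))
```

The overlap between adjacent chunks keeps sentences that straddle a boundary from being lost to both chunks.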