llm-prompt-architecture-evaluation

This thesis evaluates Large Language Models (LLMs) using the Chain-of-Thought (CoT) prompting architecture. It applies Iterative Chain-of-Thought (Iter-CoT) prompting to several LLMs, including Alpaca-LoRA 13B and 30B. Results show that Iter-CoT improves accuracy, with Alpaca-LoRA 13B notably outperforming ChatGPT models.
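As a rough illustration of the kind of Iter-CoT loop evaluated here, the Python sketch below shows one plausible structure: an initial CoT prompt ("Let's think step by step"), followed by bounded revision rounds when the answer is judged incorrect. The `generate` stub, the prompt templates, and the `is_correct` check are hypothetical placeholders, not the thesis's actual code.

```python
from typing import Callable

def generate(prompt: str) -> str:
    """Placeholder LLM call; swap in a real client (e.g. an Alpaca-LoRA endpoint)."""
    raise NotImplementedError("plug in an LLM backend here")

COT_TEMPLATE = (
    "Question: {question}\n"
    "Let's think step by step."
)

REVISE_TEMPLATE = (
    "{previous}\n\n"
    "The answer above appears to be incorrect. "
    "Re-examine the reasoning step by step and give a corrected answer."
)

def iter_cot(question: str,
             is_correct: Callable[[str], bool],
             max_rounds: int = 3) -> str:
    """Ask with a CoT prompt, then iteratively ask the model to revise its
    reasoning until `is_correct` accepts the answer or the rounds run out."""
    prompt = COT_TEMPLATE.format(question=question)
    answer = generate(prompt)
    for _ in range(max_rounds - 1):
        if is_correct(answer):
            break
        # Feed the prior reasoning back with a revision instruction.
        prompt = REVISE_TEMPLATE.format(previous=prompt + "\n" + answer)
        answer = generate(prompt)
    return answer
```

In this sketch, `is_correct` would compare the extracted answer against a gold label during evaluation; the revision loop is what distinguishes Iter-CoT from a single-shot CoT prompt.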

Primary Language: Jupyter Notebook
