
The Elements of Prompt Engineering Live Online Training Course


Repository short link: timw.info/elements

📬 Connect with Tim

🌐 Primary Contact: Visit techtrainertim.com for all my social links and latest content!

Additional Contact Methods

Latest links from Tim

🎯 Essential Resources

🔧 Vendor-Specific Guides

🛠️ Recommended Tools

📚 Learning Resources

Tim's LLM Prompting Guidance

  • Maintain at least 2 "daily driver" LLMs at a paid tier for A/B testing (fault tolerance and groundedness)
  • Never provide personal or confidential information to public/free AIs—ensure privacy by understanding chat storage, usage stats, and licensing policies
  • Speak to the LLM in ways most comfortable to you (voice, text, image) and take advantage of its multi-modal capabilities
  • Apply a stream-of-consciousness technique to generate prompts, even with rough spelling/grammar, including key information like who, what, when, where, why, and how
  • Think procedurally and in a step-by-step manner to help the AI break down complex topics
  • Optimize custom instructions and prompts ("meta prompting"), including asking the AI to summarize or focus its responses
  • Use system prompts and meta prompts to direct and focus the LLM's capabilities
  • Be aware of potential signs of amnesia or hallucination in AI responses; have a backup plan (such as testing with multiple LLMs)
  • Accept that you'll never be fully caught up—embrace exploration, questioning, and constant testing
  • Build cognitive "muscle memory" with AI by practicing prompt refinement and cross-model comparisons
  • Remember to attribute AI-enriched content where relevant
  • Understand the unique strengths and behaviors of each LLM and leverage them strategically in multi-chat sessions
  • "LLM Pillar Jumping": Use insights from one LLM session to support or refine another
  • Consider "A/B testing" LLMs against each other for more grounded and reliable answers (see the sketch after this list)
  • Get vulnerable with your AI (in trusted, secure sessions) to receive maximally personalized results—the more context you provide about your unique situation, the more tailored and valuable the response
  • Leverage "meta-prompting" by asking the AI to craft system messages, design prompts, and optimize instructions—let the AI help you become better at using AI
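To make the A/B testing, system-prompt, and meta-prompting points above concrete, here is a minimal sketch of one such loop: one model refines a rough prompt, then the refined prompt is sent under the same system message to two different LLMs for side-by-side comparison. It assumes the openai (>=1.0) and anthropic Python SDKs, API keys in the usual environment variables, and placeholder model names; it is an illustration of the technique, not part of the course materials.

```python
"""Minimal sketch: meta-prompt once, then A/B the refined prompt across two LLMs.

Assumptions (not from this repo): the `openai` and `anthropic` SDKs are installed,
OPENAI_API_KEY and ANTHROPIC_API_KEY are set, and the model names are placeholders.
"""

from openai import OpenAI
import anthropic

SYSTEM_PROMPT = (
    "You are a concise technical tutor. State your assumptions and say "
    "'I don't know' rather than guessing."
)
ROUGH_PROMPT = "Explain retrieval-augmented generation to a PowerShell admin."

openai_client = OpenAI()               # reads OPENAI_API_KEY
claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

# Meta-prompting step: ask one model to tighten the rough prompt before the A/B run.
refined = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Rewrite the user's prompt to be clearer and more specific. Return only the rewritten prompt."},
        {"role": "user", "content": ROUGH_PROMPT},
    ],
).choices[0].message.content

# A/B test: send the refined prompt, under the same system prompt, to both LLMs.
answer_a = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": refined},
    ],
).choices[0].message.content

answer_b = claude_client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": refined}],
).content[0].text

# Compare side by side; disagreement between models is a cue to re-prompt or verify.
print("=== Refined prompt ===\n", refined)
print("\n=== Model A ===\n", answer_a)
print("\n=== Model B ===\n", answer_b)
```

The same pattern extends to "LLM pillar jumping": feed the stronger answer (or a summary of it) back into the other session as added context and ask for critique or refinement.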

Tim's Essential Tech Writing Bookshelf

Prompting guidance

OpenAI

Microsoft

Google

Amazon

Third Party

LLM galleries

Community and third-party resources

LLM vendors' responsible AI principles

Additional LLM Resources

Search Tools

Development Tools

Learning Resources

AI Safety & Ethics