Repository short link: timw.info/elements
🌐 Primary Contact: Visit techtrainertim.com for all my social links and latest content!
- Awesome Prompt Engineering - Comprehensive collection of PE resources
- Prompting Guide Datasets - Curated datasets for prompt engineering
- Microsoft PromptBase - Microsoft's official prompt engineering patterns
- OpenAI Prompt Engineering Guide - Official OpenAI best practices
- Microsoft AI Builder Prompts - Microsoft's prompt design patterns
- Google AI Prompt Best Practices - Google's guidelines for effective prompting
- AWS Prompt Engineering Guide - Amazon's approach to prompt engineering
- Perplexity - AI-powered search and discovery
- Cursor - AI-enhanced development environment
- Kagi Universal Summarizer - Advanced text summarization tool
- OpenAI Cookbook - Practical prompt engineering recipes
- Anthropic Claude Documentation - Advanced prompting techniques
- Prompt Engineering Guide - Comprehensive learning resource
- Maintain at least 2 "daily driver" LLMs at a paid tier for A/B testing (fault tolerance and groundedness)
- Never provide personal or confidential information to public or free AIs; protect your privacy by understanding each provider's chat storage, usage statistics, and licensing policies
- Speak to the LLM in ways most comfortable to you (voice, text, image) and take advantage of its multi-modal capabilities
- Use a stream-of-consciousness technique to generate prompts; rough spelling and grammar are fine as long as you include key information such as who, what, when, where, why, and how
- Think procedurally and in a step-by-step manner to help the AI break down complex topics
- Optimize custom instructions and prompts ("meta-prompting"), including asking the AI to summarize or focus its responses
- Use system prompts and meta prompts to direct and focus the LLM's capabilities
- Be aware of potential signs of amnesia or hallucination in AI responses; have a backup plan (such as testing with multiple LLMs)
- Accept that you'll never be fully caught up—embrace exploration, questioning, and constant testing
- Build cognitive "muscle memory" with AI by practicing prompt refinement and cross-model comparisons
- Remember to attribute AI-enriched content where relevant
- Understand the unique strengths and behaviors of each LLM and leverage them strategically in multi-chat sessions
- "LLM Pillar Jumping": Use insights from one LLM session to support or refine another
- Consider "A/B testing" LLMs against each other for more grounded and reliable answers
- Get vulnerable with your AI (in trusted, secure sessions) to receive maximally personalized results—the more context you provide about your unique situation, the more tailored and valuable the response
- Leverage "meta-prompting" by asking the AI to craft system messages, design prompts, and optimize instructions—let the AI help you become better at using AI
- Internet Archive (search for the books here)
- The Elements of Style
- Editor-Proof Your Writing
- Yahoo Style Guide
- Hugging Face
- Ollama
- Kagi
- You.com
- GitHub Models
- GitHub Copilot Extensions
- Microsoft Responsible AI Standards
- OpenAI Safety & Responsibility
- Google AI Principles
- AWS Responsible AI
- Anthropic AI Safety