Prompt Engineering (2 days)

Timings:

  • 9:30-11:00 Session 1
  • 11:00-11:15 Coffee
  • 11:15-12:45 Session 2
  • 12:45-1:45 Lunch
  • 1:45-3:15 Session 3
  • 3:15-3:30 Tea
  • 3:30-4:30 Session 4

Aim

The aim of the workshop is to gain a deeper understanding of how to design and craft prompts for Large Language Models (LLMs) in order to improve your productivity.

Effective prompt engineering involves designing prompts that are specific, clear, and unambiguous, yet flexible enough to handle a range of possible inputs. This requires a solid understanding of the model's capabilities and limitations, as well as the linguistic nuances of the target language and domain.

  1. Introduction

    • LLMs description and origin
    • Interacting with LLMs
    • LLMs landscape: models, UIs, and bespoke apps
  2. Setting Up Local Development Environment

    • Required tools and libraries
    • Installing and configuring LLM frameworks locally
    • Running and testing your prompts locally
    • Troubleshooting common issues
  3. Understanding LLM Parameters

    • Tokens: what they are and how they work
    • Temperature: controlling randomness
    • Other parameters: max tokens, top-p, etc.
    • Practical exercises with parameter tuning
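To make the temperature parameter concrete ahead of the exercises, here is a minimal sketch of how temperature reshapes a next-token probability distribution. The logits are made-up illustrative values, not taken from any real model:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to probabilities, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic
    output); higher temperature flattens it (more random output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)
# At low temperature the top token dominates; at high temperature
# probability spreads across all candidates.
```

Real APIs apply this scaling (often combined with top-p truncation) inside the model's sampling loop; the workshop exercises explore the same effect through the exposed parameters.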
  4. Prompt Engineering

    • Choosing the right AI tool for your use case
    • Effective communication with LLMs
    • Making your prompts user-proof
  5. Personal Productivity

    • Reducing the mental load of email handling
    • TL;DR automation
    • Preparing presentations
    • Other use cases in tech
  6. Code Productivity

    • Adding docstrings and comments
    • Creating documentation for your projects
    • Unit testing
    • Pre-debugging
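A typical building block for these code-productivity tasks is a reusable prompt template that wraps your source code with a precise instruction. A sketch for the docstring case (the template wording is a suggestion; adapt the docstring style to your project's conventions):

```python
def docstring_prompt(source_code):
    """Build a prompt asking an LLM to document a Python function.

    Asking for "only the updated function" keeps the response easy to
    paste back into your codebase without extra commentary.
    """
    return (
        "Add a concise Google-style docstring to the following Python "
        "function. Return only the updated function, with no extra "
        "commentary.\n\n"
        f"{source_code}"
    )

snippet = "def add(a, b):\n    return a + b\n"
prompt = docstring_prompt(snippet)
```

The same pattern (instruction + code payload) extends naturally to generating unit tests, project documentation, or pre-debugging reviews.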
  7. Developing with LLM APIs

    • API setup and budgeting
    • Examples of current integrations
    • Integrating LLMs with the rest of your stack
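As a preview of the API work, here is a sketch of assembling an OpenAI-style chat request and roughly estimating its cost. The model name, schema, and prices are placeholders; check your provider's documentation for current values:

```python
def build_chat_request(prompt, model="gpt-4o-mini",
                       temperature=0.7, max_tokens=256):
    """Assemble a chat-completion payload in the common OpenAI-style
    schema. The payload is a plain dict; sending it requires your
    provider's client library and an API key."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def estimate_cost(prompt_tokens, completion_tokens,
                  usd_per_1k_in, usd_per_1k_out):
    """Rough budget estimate from token counts and per-1k-token
    prices; plug in your provider's current rates."""
    return (prompt_tokens / 1000) * usd_per_1k_in \
        + (completion_tokens / 1000) * usd_per_1k_out

req = build_chat_request("Summarise this ticket")
cost = estimate_cost(1000, 500, usd_per_1k_in=0.5, usd_per_1k_out=1.5)
```

Budgeting up front this way makes it easier to compare models and set sensible `max_tokens` limits before integrating an LLM into the rest of your stack.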
  8. Security in Prompt Engineering

    • OWASP Top 10 for LLM Applications
  9. The Future of AI

    • Recent developments
    • Coming soon
    • Threats and opportunities
    • Your next steps