Happy to see that our Prompt Engineering for LLMs course was selected as the course of the week by Maven. The course has now been completed by 300+ students, and I keep improving and updating it with every cohort.

This is an advanced, technical, hands-on course for builders. For cohort 8, which starts next week, we will cover the following:

• Taxonomy of Prompting Techniques
• Tactics to Improve Reliability
• Structuring LLM Outputs
• Zero-shot Prompting
• Few-shot In-Context Learning (see the sketch below)
• Chain-of-Thought Prompting
• Self-Reflection & Self-Consistency
• ReAct Prompting Framework
• Retrieval Augmented Generation (RAG)
• Fine-Tuning & RLHF
• Function Calling & Tool Usage
• LLM-Powered Agents
• LLM Evaluation & Judge LLMs
• AI Safety & Moderation Tools
• Adversarial Prompting (Jailbreaking and Prompt Injection)
• Common Real-World Use Cases of LLMs
• Prompt Engineering for models like GPT-3.5/4, Mixtral, Gemini, and others

... and much more. This is not a course for everyone. It's meant for folks interested in building reliable LLM applications and looking to optimize their LLM use cases.
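To make one of these topics concrete, here is a minimal sketch of few-shot in-context learning, assuming the OpenAI Python SDK; the sentiment task, example pairs, and model choice are illustrative and not taken from the course material:

```python
# Minimal few-shot prompting sketch (illustrative task, not course material).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot in-context learning: show the model a handful of labeled
# examples in the prompt so it infers the task and output format
# without any fine-tuning.
prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day.
Sentiment: Positive

Review: The screen cracked within a week.
Sentiment: Negative

Review: Setup was quick and painless.
Sentiment:"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output helps reliability
)
print(response.choices[0].message.content)  # expected: "Positive"
```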
Enroll here if you would like to join us: maven.com/dair-ai/prompt… Feel free to DM me with questions.
@omarsar0 Which tools do you use to test and manage prompts? You can give Prompt Tester a shot. It helps you:
- Bulk-test prompts in one click
- Rate & compare prompt outputs/models side by side
- Log actual user queries, and more
Try it for free 👇 prompt-tester.com