
Advanced Course

Reduce hallucinations, design complex prompts, and chain tools/search for reliable workflows

Chapter 8: Avoiding Hallucinations

Learning Content

Claude is trained to be helpful, and when pressed for an answer it cannot support it may fabricate plausible-sounding details. This chapter shows how to give the model explicit permission to abstain, how to make it collect evidence before answering, and how to tune sampling parameters so accuracy beats creativity. Combine these steps with clear structure so every factual reply is grounded in verifiable text.

Key Techniques

Give Claude an explicit out so refusing is preferable to guessing.

Force an evidence-first workflow with XML tags or numbered steps.

Require citations or quote IDs in every factual answer.

Control randomness (temperature, top_p) when accuracy matters, as in the sketch after this list.
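
A minimal sketch of these techniques together, using the Anthropic Python SDK: the system prompt grants the explicit out and demands quotes, while temperature is pinned to 0.0 for a factual Q&A call. The model name, document, and question are placeholder assumptions, not recommendations.

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

SYSTEM = (
    "Answer only from the document the user provides. "
    "Quote the sentences you rely on before answering. "
    "If the document does not contain the answer, reply: I don't know."
)

document = "..."  # placeholder: the reference text that grounds the answer
question = "..."  # placeholder: the user's factual question

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: any current Claude model works
    max_tokens=512,
    temperature=0.0,  # low randomness so accuracy beats creativity
    system=SYSTEM,
    messages=[{
        "role": "user",
        "content": f"<document>\n{document}\n</document>\n\nQuestion: {question}",
    }],
)
print(response.content[0].text)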

Common Pitfalls

Burying the user's question above a long distractor document without restating it at the end of the prompt.

Letting Claude answer without quoting the provided document.

Failing to tell Claude what to do when no evidence is found; the template below covers this case.
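
One way to sidestep all three pitfalls is a fixed prompt template that puts the document first, restates the question at the very end, and spells out the no-evidence case. This is a hypothetical helper sketched for illustration, not a library function:

def build_grounded_prompt(document: str, question: str) -> str:
    # Document first, question restated at the end so it is never buried.
    return (
        f"<document>\n{document}\n</document>\n\n"
        f"Using only the document above, answer this question: {question}\n"
        "Quote the supporting sentences before your answer. "
        "If the document contains no relevant evidence, reply \"I don't know\" "
        "and explain what information would be needed."
    )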

Prompt Examples

Add a rule such as “If you are not certain, reply with ‘I don't know’ and explain what information would be needed to answer.”
Split the task: first extract supporting sentences inside <evidence> tags, then answer using only those sentences (see the two-step sketch below).
Lower temperature or ask Claude to double-check numbers before finalizing the response.
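
The second example can be run as two chained calls. The sketch below, again using the Anthropic Python SDK, first extracts verbatim evidence into <evidence> tags and then answers from that evidence alone; the model name and the NONE sentinel are assumptions for illustration, not a prescribed API.

import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # assumption: substitute any current Claude model

def answer_with_evidence(document: str, question: str) -> str:
    # Step 1: copy verbatim supporting sentences into <evidence> tags.
    extraction = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        temperature=0.0,
        messages=[{
            "role": "user",
            "content": (
                f"<document>\n{document}\n</document>\n\n"
                f"Copy the sentences relevant to this question, verbatim, "
                f"inside <evidence> tags. Question: {question}\n"
                "If nothing is relevant, output <evidence>NONE</evidence>."
            ),
        }],
    )
    evidence = extraction.content[0].text

    # Step 2: answer using only the extracted evidence.
    answer = client.messages.create(
        model=MODEL,
        max_tokens=512,
        temperature=0.0,
        messages=[{
            "role": "user",
            "content": (
                f"{evidence}\n\n"
                f"Using only the evidence above, answer: {question}\n"
                "If the evidence is NONE or insufficient, reply \"I don't know\"."
            ),
        }],
    )
    return answer.content[0].text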