What is an Instruction Inconsistency AI Hallucination?

An Instruction Inconsistency AI Hallucination occurs when an AI model’s output either:

  1. Fails to adhere to explicit instructions, constraints, or formatting rules provided in the prompt.
  2. Directly contradicts foundational information given to it within the same prompt or conversation.

This type of hallucination isn’t about the AI’s general knowledge being wrong; it’s about its failure to follow the specific “rules of the game” or stay consistent with the factual groundwork it was given. Instruction Inconsistencies are one of several types of AI Hallucination.
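As a minimal illustration, the sketch below (hypothetical names and keys, not part of any Drainpipe API) shows how the first failure mode can be detected programmatically: the prompt demands JSON output with specific keys, so any reply that cannot be parsed that way has ignored the formatting instruction.

```python
import json

# Hypothetical check for one class of instruction inconsistency:
# the prompt asked for JSON with these keys, so anything else
# fails the explicit formatting instruction.
REQUIRED_KEYS = {"translation", "source_language"}

def follows_format_instruction(model_output: str) -> bool:
    """Return True only if the output is valid JSON containing the required keys."""
    try:
        parsed = json.loads(model_output)
    except json.JSONDecodeError:
        return False  # e.g., the model answered in prose instead of JSON
    return isinstance(parsed, dict) and REQUIRED_KEYS <= parsed.keys()

# A compliant reply passes; a reply that ignores the formatting rule does not.
print(follows_format_instruction('{"translation": "Bonjour", "source_language": "en"}'))  # True
print(follows_format_instruction("Hello! How can I help you today?"))                     # False
```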

  • Chance of Occurrence: Common (especially with complex or multi-step prompts).
  • Consequences: Frustration for users, incorrect task execution, inefficient workflows, and AI acting contrary to its intended purpose (e.g., answering a question instead of translating it).
  • Mitigation Steps: Use unambiguous prompt phrasing; separate instructions from content with delimiters; provide few-shot examples of the expected behavior; run adversarial tests to find and fix instruction-following failures (see the sketch after this list).
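
The sketch below combines two of these mitigations, assuming a generic chat-style model (the send_to_model() call is a hypothetical placeholder, not a Drainpipe function): triple-backtick delimiters separate the instruction from the user's content, and two few-shot examples demonstrate the expected translate-only behavior.

```python
# Few-shot examples showing the expected behavior: translate only,
# never answer or comment on the text.
FEW_SHOT_EXAMPLES = """\
Text: ```Good morning```
Translation: Bonjour

Text: ```Thank you very much```
Translation: Merci beaucoup
"""

def build_prompt(user_text: str) -> str:
    # Delimiters (triple backticks) keep the instruction separate from
    # the content, so a question in the content is treated as text to
    # translate rather than a question to answer.
    return (
        "Translate the text between triple backticks into French. "
        "Do not answer questions or add commentary; output only the translation.\n\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f"Text: ```{user_text}```\n"
        "Translation:"
    )

prompt = build_prompt("Where is the train station?")
print(prompt)
# response = send_to_model(prompt)  # hypothetical API call
```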
