# AIP001 — conflicting-instructions
**Category:** clarity **Severity:** warning
## What
Flags pairs of instructions that cannot both be satisfied in a single response.
## Why it matters
Models resolve contradictions silently by satisfying one instruction and dropping the other, often whichever was stated last. The prompt then does something other than what you intended, with no error to tell you so.
## Example
```
You will only output JSON. Explain your reasoning before answering.
```
The model cannot satisfy both: a JSON-only response excludes free-text reasoning, so any prose before the JSON violates the first instruction.
## Fix
Pick one. If you need structured output with reasoning, put reasoning inside a JSON field: `{ "reasoning": "...", "answer": "..." }`.
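The fix above can be sketched as follows. The model call itself is elided and replaced with a hypothetical reply string (the rule concerns prompt wording, not any particular API); the prompt text and key names are illustrative, not prescribed.

```python
import json

# One unambiguous instruction: reasoning lives inside the JSON,
# so there is no conflict with "only output JSON".
PROMPT = (
    "Respond with a single JSON object of the form "
    '{"reasoning": "...", "answer": "..."} and nothing else.'
)

def parse_response(raw: str) -> dict:
    """Parse a JSON-only model reply; raises ValueError if it is
    malformed or missing the expected keys."""
    obj = json.loads(raw)  # raises on any stray prose around the JSON
    if not {"reasoning", "answer"} <= obj.keys():
        raise ValueError("missing required keys")
    return obj

# Hypothetical reply consistent with the corrected prompt.
raw = '{"reasoning": "2+2 is basic arithmetic.", "answer": "4"}'
parsed = parse_response(raw)
```

Because the reply is pure JSON, `json.loads` doubles as a cheap conformance check: a model that slipped reasoning prose outside the object fails the parse immediately.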