My team is building an agent for complex, multi-language artifacts with interdependent parts. We keep coming back to this hierarchy:
Feedforward: Provide the right context to prevent errors
Tolerance: Accept variation in the output and parse the intent behind it
Feedback: Give specific, actionable feedback for errors that remain
These are the same design principles Don Norman articulated decades ago for physical products and interfaces. They’re inherent to good system design, whether you’re building a door handle, a nuclear control panel, or an LLM agent.
The order is everything. Each principle builds on the previous one, creating a cascade of prevention that dramatically reduces the work needed at each subsequent level.
Why Feedforward Comes First
Create an environment where correct behavior is inevitable.
As Norman put it: “Physical constraints are made more effective and useful if they are easy to see and interpret, for then the set of actions is restricted before anything has been done.”
Feedforward implements affordances and signifiers, showing what’s possible before action occurs. Toyota calls this poka-yoke: error-proofing through constraint.
Validation context: Telling the LLM about the validation our system will perform on its output
Example: We embed the validation rules directly in context. Don’t wait for the LLM to hit errors: state the constraints upfront. This eliminated entire error categories for us.
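A minimal sketch of what this can look like; the rule set, field names, and `build_system_prompt` helper are hypothetical illustrations, not our production code:

```python
# Hypothetical sketch: serialize the same rules our validator will enforce
# into the prompt, so the model sees the constraints before it acts.
VALIDATION_RULES = {
    "order.id": "integer, required",
    "order.status": 'one of "pending", "shipped", "delivered"',
    "order.created_at": "ISO 8601 timestamp (UTC)",
}

def build_system_prompt(task: str) -> str:
    """Prepend the validation contract to the task description."""
    rules = "\n".join(f"- {field}: {rule}" for field, rule in VALIDATION_RULES.items())
    return (
        f"{task}\n\n"
        "Your output will be validated against these rules; "
        "violations are rejected before execution:\n"
        f"{rules}\n"
        "Respond with JSON only."
    )
```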
Tolerance: Widening the Target
Parse, don’t validate. Transform imperfect input into structured output while preserving intent.
This principle shows up everywhere in good design:
USB-C cables that work in either orientation
Search boxes that accept various date formats
Forms that accept phone numbers with or without dashes
In our LLM context, tolerance looks like:
If the output is `{{ order.id }}` but the system wanted `order.id`, we strip the Liquid syntax
If we get `"true"` instead of `true`, we parse the string to a boolean
If timestamps come in various formats, we normalize them
Tolerance recognizes clear intent despite imperfect format. Good design accounts for variation.
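A sketch of that tolerance layer, with hypothetical helper names; each function recovers clear intent and raises only when the intent is genuinely ambiguous:

```python
import re
from datetime import datetime, timezone

def strip_liquid(value: str) -> str:
    """Unwrap Liquid-style templating, e.g. '{{ order.id }}' -> 'order.id'."""
    match = re.fullmatch(r"\{\{\s*(.*?)\s*\}\}", value.strip())
    return match.group(1) if match else value.strip()

def parse_bool(value):
    """Accept real booleans and their common string spellings."""
    if isinstance(value, bool):
        return value
    lowered = str(value).strip().lower()
    if lowered in ("true", "yes", "1"):
        return True
    if lowered in ("false", "no", "0"):
        return False
    raise ValueError(f"cannot interpret {value!r} as a boolean")

def parse_timestamp(value: str) -> datetime:
    """Normalize a handful of common timestamp formats to UTC."""
    formats = ("%Y-%m-%dT%H:%M:%S%z", "%Y-%m-%d %H:%M:%S", "%Y-%m-%d", "%d/%m/%Y")
    for fmt in formats:
        try:
            parsed = datetime.strptime(value.strip(), fmt)
            return parsed if parsed.tzinfo else parsed.replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    raise ValueError(f"cannot interpret {value!r} as a timestamp")
```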
Feedback: Closing the Loop
As Norman put it: “On the execution side, provide feedforward information: make the options readily available. On the evaluation side, provide feedback: make the results of each action apparent.”
When our agent does hit an error, we ensure:
Exact error messages: Not just “syntax error on input foo.bar[1]” but “syntax error on input foo.bar[1]: value ‘my number’ must be an integer”
Natural response pairing: Error formats that mirror the input structure
Self-healing paths: Give the LLM enough information to correct itself, though we let it discover its own solutions rather than prescribing fixes
Good feedback enables self-correction.
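A sketch of what producing that kind of feedback can look like; `validate_order` and the field paths are hypothetical:

```python
def validate_order(payload: dict) -> list[str]:
    """Return errors that name the exact path and the rule violated."""
    errors = []
    order_id = payload.get("order", {}).get("id")
    if not isinstance(order_id, int):
        # Mirror the input structure ('order.id') and state the constraint,
        # but prescribe no fix: the model discovers its own correction.
        errors.append(
            f"syntax error on input order.id: value {order_id!r} must be an integer"
        )
    return errors

# The errors feed straight back into the next model turn:
#   validate_order({"order": {"id": "my number"}})
#   -> ["syntax error on input order.id: value 'my number' must be an integer"]
```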
The Compound Effect
What makes this system powerful is how each tier reduces load on the next:
Better feedforward → Fewer errors reach the tolerance layer
Smart tolerance → Fewer errors need feedback
Clear feedback → Faster convergence when iteration is needed
Apply these in order and most remaining errors become genuine edge cases.
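A sketch of the cascade wired together, reusing the hypothetical helpers from the sketches above (`tolerant_parse` is assumed to bundle the tolerance functions into one parser):

```python
# Hypothetical sketch of the cascade: each tier only sees what the
# previous one could not prevent or absorb.
def run_agent_step(task: str, llm, max_turns: int = 3) -> dict:
    prompt = build_system_prompt(task)      # feedforward: constraints upfront
    for _ in range(max_turns):
        raw = llm(prompt)
        payload = tolerant_parse(raw)       # tolerance: recover clear intent
        errors = validate_order(payload)    # feedback: name what remains
        if not errors:
            return payload
        prompt = (
            f"{prompt}\n\nYour previous output failed validation:\n"
            + "\n".join(errors)
        )
    raise RuntimeError("agent did not converge within the error budget")
```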
The Universal Truth
These aren’t just agent interface principles. They’re universal design principles for any system with unpredictable agents.