From Hope to Evidence in Design
Enterprise design organizations often fall prey to "hope-driven design" – creating products based on what they hope users want rather than building systems that accurately detect and respond to actual user needs.
Design teams in complex technical environments operate with two fundamental challenges:
Confusion between mental models (the map) and user reality (the territory)
The lack of systems to detect when this confusion occurs
This mirrors what Eliezer Yudkowsky describes as the difference between mice and humans: "Mice see, but they don't know they have visual cortexes, so they can't correct for optical illusions." Similarly, design teams without reflective practices cannot correct for their own biases.
Consider what happens when metrics guide decisions:
Hope-driven metrics create products nobody wants. Teams might celebrate "increasing active users by 20%" without recognizing that the metric rose only because users were forced to engage with a problematic feature more frequently.
Evidence-driven systems distinguish between user engagement and user value. These systems detect when metrics improve while actual value deteriorates – a classic example of Goodhart's Law, where "when a measure becomes a target, it ceases to be a good measure."
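One way to make that distinction operational is to track the targeted metric alongside independent value signals and flag periods where they move in opposite directions. The sketch below is a minimal illustration in Python, assuming hypothetical signals (active_users, tasks_completed, support_tickets); it is not a prescribed measurement scheme, just the shape of such a check.

```python
from dataclasses import dataclass

@dataclass
class WeeklySnapshot:
    week: str
    active_users: int       # the metric the team is targeting
    tasks_completed: int    # independent proxy for delivered value
    support_tickets: int    # friction signal the target metric hides

def flag_goodhart_risk(history: list[WeeklySnapshot]) -> list[str]:
    """Flag weeks where the target metric rose while value signals fell."""
    warnings = []
    for prev, curr in zip(history, history[1:]):
        engagement_up = curr.active_users > prev.active_users
        value_down = (curr.tasks_completed < prev.tasks_completed
                      or curr.support_tickets > prev.support_tickets)
        if engagement_up and value_down:
            warnings.append(
                f"{curr.week}: active users rose ({prev.active_users} -> "
                f"{curr.active_users}) but value signals deteriorated"
            )
    return warnings

history = [
    WeeklySnapshot("W1", active_users=1000, tasks_completed=400, support_tickets=25),
    WeeklySnapshot("W2", active_users=1200, tasks_completed=380, support_tickets=40),
]
print(flag_goodhart_risk(history))
```

The point isn't the tooling; it's that the value proxy is chosen independently of the metric being celebrated, so the two can disagree.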
But how can enterprise design organizations identify and correct their own limitations? Three practices help:
1. Document assumptions alongside decisions
For every significant design decision, document not just what was decided, but the assumptions that led there. When outcomes don't match expectations, examine which assumptions failed rather than simply changing the interface.
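In practice this can live in a wiki or decision log; the Python sketch below only illustrates the shape such a record might take. The names (DesignDecision, Assumption) and the example content are hypothetical, not a required schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Assumption:
    statement: str                    # what we believe about users
    evidence: str                     # why we believe it
    validated: Optional[bool] = None  # set during review, once outcomes are known

@dataclass
class DesignDecision:
    decision: str
    decided_on: date
    expected_outcome: str
    assumptions: list[Assumption] = field(default_factory=list)

    def failed_assumptions(self) -> list[Assumption]:
        """After a review, return the assumptions that turned out to be false."""
        return [a for a in self.assumptions if a.validated is False]

record = DesignDecision(
    decision="Collapse advanced settings behind a single 'More' menu",
    decided_on=date(2024, 3, 1),
    expected_outcome="Fewer support tickets about finding settings",
    assumptions=[
        Assumption("Most users never open advanced settings", "last quarter's click data"),
        Assumption("Power users will tolerate one extra click", "untested hunch"),
    ],
)
```

What matters is that each assumption carries its evidence, so a failed outcome points back to a specific belief rather than to the interface in general.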
2. Implement systematic error correction
Create regular review cycles explicitly designed to find places where the team's thinking diverged from reality. During these sessions, the focus isn't on defending decisions but on identifying patterns of flawed reasoning.
3. Separate hope from probability
Ask team members to distinguish between outcomes they hope for and outcomes they expect. This simple distinction helps technical organizations recognize when emotional investment is clouding judgment.
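If the team records those honest expectations alongside what actually happened, the gap between hope and reality becomes measurable over time. The sketch below assumes a simple, hypothetical forecast log; the field names and the overconfidence measure are illustrative rather than a standard instrument.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Forecast:
    question: str                  # "Will task completion improve after the redesign?"
    hoped_outcome: str             # what the team wants to happen
    expected_probability: float    # honest estimate that it actually will (0.0 to 1.0)
    actual_outcome: Optional[bool] = None  # resolved after launch

def overconfidence(forecasts: list[Forecast]) -> float:
    """Average expected probability minus observed hit rate for resolved forecasts.
    A persistently positive number suggests hope is leaking into the estimates."""
    resolved = [f for f in forecasts if f.actual_outcome is not None]
    if not resolved:
        return 0.0
    avg_expected = sum(f.expected_probability for f in resolved) / len(resolved)
    hit_rate = sum(1 for f in resolved if f.actual_outcome) / len(resolved)
    return avg_expected - hit_rate
```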
The organizations that consistently deliver value aren't just those with the most skilled designers or largest budgets – they're those with systematic methods to detect when their thinking has diverged from reality.
As design leaders scale their influence beyond individual features to organizational strategy, this ability to implement "second-order corrections" becomes even more crucial. It's what separates tactical execution from true strategic leadership.
The most powerful design lens isn't just one that sees the user – it's one that can see and correct its own flaws.