Goodhart’s Law
By Juan Carlos
Definition
Goodhart’s Law states that when a measure becomes a target, it ceases to be a good measure. The metric usually begins as a reasonable proxy for the outcome we care about, but once people are rewarded for the metric itself, they optimize for the number rather than the underlying quality or outcome it was designed to track.
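To make the mechanism concrete, here is a minimal Python sketch; the functions, names, and cost assumptions are all invented for illustration. A proxy that tracks quality under normal behavior stops tracking it the moment the proxy itself becomes the thing being optimized.

```python
# Illustrative sketch, not a real system: once the proxy becomes the target,
# effort flows to whatever moves the proxy most cheaply, and the true outcome
# stops improving.

def true_quality(substance: float) -> float:
    """The outcome we actually care about."""
    return substance

def proxy_metric(substance: float, padding: float) -> float:
    """A measurable proxy: substance moves it, but so does cheap padding."""
    return substance + padding

def allocate_effort(budget: float, target: str) -> tuple[float, float]:
    """Split a fixed effort budget between substance and padding."""
    if target == "quality":
        return budget, 0.0  # no reason to pad when the outcome itself is rewarded
    # Assumption: padding costs half the effort per proxy point, so a
    # proxy-maximizer pours the whole budget into padding.
    return 0.0, budget * 2.0

if __name__ == "__main__":
    budget = 10.0
    for target in ("quality", "proxy"):
        substance, padding = allocate_effort(budget, target)
        print(f"target={target:<8} proxy={proxy_metric(substance, padding):5.1f} "
              f"true quality={true_quality(substance):5.1f}")
    # target=quality  proxy= 10.0  true quality= 10.0
    # target=proxy    proxy= 20.0  true quality=  0.0
```

The same pattern appears whenever the proxy is cheaper to move than the outcome it stands in for, which is almost always the case.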
Why Use It
Understanding Goodhart’s Law changes how we approach goal-setting and performance evaluation. It explains why well-intentioned metrics often produce unexpected, counterproductive behavior, and it gives anyone designing incentive structures or evaluation systems a sharper eye for where a measurement scheme will bend under pressure.
When to Use It
In a metric-driven world, Goodhart’s Law becomes relevant whenever we’re:
- Designing performance metrics
- Setting organizational goals
- Creating educational standards
- Developing incentive systems
- Evaluating success measures
- Implementing quality controls
- Building reward structures
How to Use It
Andrew Niccol’s “Gattaca” illustrates the concept by depicting a society obsessed with genetic metrics. Vincent’s triumph over a system that equates genetic scores with human worth shows how focusing too narrowly on specific measures misses the broader picture of human potential. Understanding this helps us:
- Design holistic measurement systems
- Create balanced incentive structures
- Monitor for gaming behaviors
- Implement multiple complementary metrics (see the sketch after this list)
- Focus on underlying objectives
- Maintain flexibility in evaluation
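One way to act on the last few points is sketched below: combine several measures into a balanced score, and flag any single metric that surges while its counterparts stall, which is often a hint of gaming. The metric names, weights, and threshold here are illustrative, not a prescription.

```python
# Hedged sketch of "multiple complementary metrics": no single number decides
# the evaluation, and a lone metric that jumps while the others stagnate gets
# flagged for a closer look.

from statistics import mean

def balanced_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted combination so no single number dominates the evaluation."""
    return sum(weights[name] * value for name, value in metrics.items())

def gaming_suspects(history: list[dict[str, float]], surge: float = 0.3) -> list[str]:
    """Flag metrics whose latest jump far outpaces the average jump of the rest."""
    if len(history) < 2:
        return []
    prev, curr = history[-2], history[-1]
    deltas = {name: curr[name] - prev[name] for name in curr}
    suspects = []
    for name, delta in deltas.items():
        others = [d for other, d in deltas.items() if other != name]
        if delta - mean(others) > surge:
            suspects.append(name)
    return suspects

if __name__ == "__main__":
    weights = {"tickets_closed": 0.4, "customer_satisfaction": 0.4, "reopen_rate_inverse": 0.2}
    history = [
        {"tickets_closed": 0.60, "customer_satisfaction": 0.70, "reopen_rate_inverse": 0.80},
        {"tickets_closed": 0.95, "customer_satisfaction": 0.68, "reopen_rate_inverse": 0.78},
    ]
    print("balanced score:", round(balanced_score(history[-1], weights), 3))  # 0.808
    print("possible gaming:", gaming_suspects(history))                       # ['tickets_closed']
```

A flag here is not proof of bad behavior; it is simply a prompt to look at what the underlying objective is doing.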
How to Misuse It
Goodhart’s Law isn’t an excuse to abandon measurement entirely. Like any principle about human behavior, it requires thoughtful application.
Common pitfalls to avoid:
- Using it to justify a lack of measurement
- Overcorrecting by removing all metrics
- Creating unnecessarily complex systems
- Ignoring valuable quantitative data
- Assuming all gaming is harmful
- Replacing bad metrics with worse ones
Next Steps
Putting Goodhart’s Law into practice requires strategic thinking and continuous monitoring. Think of it as developing a more sophisticated measurement ecosystem:
- Audit current metrics and their effects (see the sketch after this list)
- Identify potential gaming behaviors
- Design complementary measures
- Create balanced scorecards
- Monitor unintended consequences
- Adjust systems based on outcomes
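As a rough illustration of the audit and monitoring steps, the sketch below compares how strongly a targeted metric tracked the real outcome before and after it became a target; a sharp drop in that correlation is a classic Goodhart warning sign. The data and the split point are invented.

```python
# Sketch of a metric audit: did the proxy keep tracking the outcome once it
# became a target? Requires Python 3.10+ for statistics.correlation.

from statistics import correlation

def goodhart_drift(proxy: list[float], outcome: list[float],
                   target_start: int) -> tuple[float, float]:
    """Correlation between proxy and outcome before vs. after targeting began."""
    before = correlation(proxy[:target_start], outcome[:target_start])
    after = correlation(proxy[target_start:], outcome[target_start:])
    return before, after

if __name__ == "__main__":
    # Quarterly data: the proxy keeps climbing after it becomes a target
    # (index 4), but the outcome it was meant to track flattens out.
    proxy   = [1.0, 1.2, 1.4, 1.6, 2.0, 2.6, 3.3, 4.1]
    outcome = [0.9, 1.1, 1.5, 1.7, 1.8, 1.7, 1.9, 1.8]
    before, after = goodhart_drift(proxy, outcome, target_start=4)
    print(f"correlation before targeting: {before:.2f}")
    print(f"correlation after targeting:  {after:.2f}")
```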
Where It Came From
Charles Goodhart, a British economist, first articulated this principle in 1975 while discussing monetary policy at the Reserve Bank of Australia, originally phrasing it as “any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” Initially focused on economic indicators, the law has since been widely recognized as applicable to any field where human behavior responds to measurement. Anthropologist Marilyn Strathern later popularized a generalized version: “When a measure becomes a target, it ceases to be a good measure,” extending its relevance beyond economics to social policy, education, and organizational behavior.