Garrett Hardin published "The Tragedy of the Commons" in 1968. He described a pasture shared by multiple herders. Each herder, acting rationally in their own interest, adds one more animal to the pasture. The individual benefit of each additional animal is positive. The individual cost of the shared resource degradation is distributed across all herders. So each herder keeps adding animals. Every herder makes the rational decision. The collective outcome is the destruction of the pasture that all of them depend on.
This is happening in your EHR right now.
Every physician in a multi-provider independent practice shares a common resource. The EHR database. The patient records. The medication lists. The allergy documentation. The problem lists. Every physician draws from this shared resource to deliver care and contribute back to it through documentation. And every physician, acting rationally under the time pressure of a full clinic day, is making documentation decisions that individually seem inconsequential and collectively degrade the shared resource that all clinical care and all AI tools depend on.
No physician in your practice is the problem. The system that creates the conditions for rational individual behavior to produce collective resource degradation is the problem. And when you deploy AI tools into a system where the commons is already degrading, those tools do not just inherit the problem. They amplify it. Garbage in at scale becomes garbage out at scale. The AI works faster. The errors propagate further. The degradation compounds.
What the Tragedy of the Commons Means in a Clinical Setting
The shared resource in a clinical practice is not a pasture. It is the accuracy and completeness of the clinical data environment. Patient medication lists. Active problem lists. Allergy records. Social history. Clinical notes that the next physician to see this patient will rely on to understand their history.
Every physician who signs an ambient AI-generated note in 8 seconds without genuinely reviewing it is adding one more animal to the pasture. The individual time saving is real and immediate. The individual contribution to collective data degradation is diffuse and delayed. The rational choice is to sign quickly. Every physician makes the rational choice. The pasture degrades.
In the shared record, AI-generated text and physician-authored observations blend until they are indistinguishable, and that blending is the tragedy.[1] The physician who generates a note next week reads last week's AI-generated note as if it were a clinical observation. They build on it. The AI documentation tool reads the AI-generated note as context or training signal. It builds on it. Last week's error or approximation becomes the foundation for this week's documentation. Compound interest on clinical inaccuracy.
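The compound-interest dynamic can be sketched numerically. This is a toy model, not a measurement: `NEW_ERRORS_PER_NOTE` and `REVIEW_CATCH_RATE` are illustrative assumptions, chosen only to show the shape of the curve when each generation of notes inherits the uncaught errors of the last.

```python
# Toy model of compounding documentation error. All rates are
# hypothetical; the point is the dynamic, not the numbers.

NEW_ERRORS_PER_NOTE = 0.5   # new inaccuracies introduced each week
REVIEW_CATCH_RATE = 0.10    # fraction an 8-second review catches

def propagate(weeks: int) -> list[float]:
    """Expected uncaught errors carried in the record, week by week.

    Each week inherits last week's uncaught errors, adds new ones,
    then a rushed review removes a small fraction of the total.
    """
    carried = 0.0
    history = []
    for _ in range(weeks):
        carried = (carried + NEW_ERRORS_PER_NOTE) * (1 - REVIEW_CATCH_RATE)
        history.append(round(carried, 2))
    return history

print(propagate(8))
```

Under these assumptions the carried error load climbs every week toward a plateau of 4.5 uncaught errors per note lineage (the fixed point of the recurrence). The only lever that lowers the whole curve is the catch rate, which is to say the genuineness of the review.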
The Four Common Resources That Clinical AI Degrades When Governance Is Absent
The tragedy of the commons in clinical AI is not limited to a single resource. There are four shared resources that degrade simultaneously when individual rational behavior operates without collective governance.
How AI Amplifies the Tragedy Instead of Resolving It
The conventional assumption about AI in healthcare is that it improves data quality by automating documentation and reducing the human error that comes from fatigue and time pressure. This assumption is correct under one condition: the AI tool is deployed into a system with governance structures that prevent the tragedy of the commons from operating.
Without those governance structures, AI does not improve data quality. It accelerates its degradation.
Here is the mechanism. An ambient AI documentation tool generates notes at scale, and scale exposes a critical limitation: documentation quality becomes harder to standardize across different clinical contexts. As deployments expand across settings and departments, variations in workflows, terminology, and documentation practices create inconsistencies that accumulate in the shared record.[2]
Each inconsistency is individually small. A medication recorded at a slightly different dose than the pharmacy record shows. A problem list item that was resolved three months ago but remains active in the AI-generated note because the AI did not have access to the discharge summary. A social history that reflects the patient's circumstances from two years ago because the AI drew its context from earlier records.
Each of these inconsistencies is a unit of commons degradation. The AI generates hundreds of notes per day. Hundreds of units of degradation per day. The physician review process that was supposed to catch them is operating at 8 seconds per note because the AI was supposed to reduce documentation burden and instead increased note volume. The governance structure that was supposed to monitor data quality was never built because the AI readiness assessment that approved the deployment never asked whether the commons was already degrading before the tool went live.
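The arithmetic above can be made concrete with a back-of-envelope sketch. Every number here is hypothetical (`NOTES_PER_DAY`, the per-note error rate, and the review-time-to-catch-rate mapping are illustrative assumptions); the point is how review time, not AI quality alone, sets the daily degradation rate of the commons.

```python
# Back-of-envelope: uncaught inconsistencies entering the shared record
# per day under rushed vs. genuine review. All numbers hypothetical.

NOTES_PER_DAY = 300
ERRORS_PER_NOTE = 0.15  # AI-introduced inconsistencies per note

# Assumed mapping from seconds spent reviewing to fraction of
# inconsistencies caught. Illustrative, not empirical.
CATCH_RATE = {8: 0.10, 30: 0.50, 90: 0.85}

def uncaught_per_day(review_seconds: int) -> float:
    """Inconsistencies that survive review and enter the record daily."""
    caught = CATCH_RATE[review_seconds]
    return NOTES_PER_DAY * ERRORS_PER_NOTE * (1 - caught)

for secs in sorted(CATCH_RATE):
    print(f"{secs:>2}s review -> {uncaught_per_day(secs):.1f} uncaught/day")
```

Under these assumptions, an 8-second review lets roughly 40 inconsistencies into the shared record every day; a 90-second review cuts that to under 7. The tool is identical in both cases. The governance around the review step is the variable.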
Healthcare is entering an era of intelligent data management in which real-time validation, data quality scoring, and robust governance are standard expectations. This integrated approach positions data quality as a frontline responsibility for clinical operations and finance, enabling better patient outcomes and more dependable AI-driven insights.[3] The practices that understand this in 2026 are not treating data quality as an IT problem. They are treating it as a shared clinical governance responsibility that every provider owns.
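What "data quality scoring" can mean in practice is simple rule-based checks run against the shared record. A minimal sketch, assuming a hypothetical record shape: the field names (`medications`, `pharmacy_fills`, `problems`, `allergies`), the weights, and the 12-month staleness threshold are all invented for illustration, not drawn from any standard.

```python
from datetime import date, timedelta

# Minimal rule-based data quality score for one patient record.
# Record fields, check weights, and thresholds are hypothetical.

def quality_score(record: dict, today: date) -> float:
    """Return a 0.0-1.0 score; each failed check subtracts its weight."""
    score = 1.0
    # Medication list should agree with the pharmacy fill record.
    if set(record["medications"]) != set(record["pharmacy_fills"]):
        score -= 0.4
    # Problem list entries should have been reviewed within 12 months.
    stale = [p for p in record["problems"]
             if today - p["last_reviewed"] > timedelta(days=365)]
    if stale:
        score -= 0.3
    # Allergy documentation must not be empty or marked "unknown".
    if not record["allergies"] or "unknown" in record["allergies"]:
        score -= 0.3
    return max(score, 0.0)
```

Scores like these become governance instruments when they are tracked over time: a commons whose average score is drifting downward is degrading, whatever any individual note looks like.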
The HIPAA Commons. Where Compliance Degradation Follows the Same Pattern.
The tragedy of the commons does not only operate in EHR data quality. It operates with equal force in HIPAA compliance programs.
Consider the shared compliance documentation resource in a 5-provider independent practice. The compliance program exists as a shared asset. Policies that protect every provider. Training records that demonstrate every provider's knowledge. BAAs that cover every provider's vendor relationships. A risk assessment that maps every provider's security posture.
Each provider has individual incentives that work against the shared compliance resource.
The AI Readiness Audit Question That Prevents the Tragedy
Standard AI readiness assessments ask whether the infrastructure is ready, whether the data is clean enough, whether the governance framework is in place. Most health systems have solid BAA coverage for their EHR vendors and established SaaS platforms. But the AI layer on top of those systems is often not covered with the specificity today's environment requires. Who owns the model? Where is training data stored? What happens to PHI in the training pipeline? How are model updates validated before deployment? These questions need documented answers in the vendor relationship, not operating as assumptions.[4]
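Those vendor questions stop operating as assumptions once they live in an audit artifact rather than in someone's head. A sketch of the checklist as structured data, so open gaps are queryable (item names paraphrase the questions above; the statuses shown are illustrative):

```python
# AI-vendor BAA audit checklist as structured data. Item names
# paraphrase the questions in the text; statuses are illustrative.

AI_VENDOR_CHECKLIST = {
    "model_ownership_documented": False,
    "training_data_storage_location_documented": False,
    "phi_in_training_pipeline_addressed_in_baa": False,
    "model_update_validation_process_documented": False,
}

def open_gaps(checklist: dict) -> list[str]:
    """Return every audit item that still lacks documented evidence."""
    return [item for item, done in checklist.items() if not done]

print(open_gaps(AI_VENDOR_CHECKLIST))
```

An AI deployment is ready on this dimension only when `open_gaps` returns an empty list for every vendor in the stack.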
The tragedy of the commons audit question that most assessments never ask is different.
Is the shared resource healthy enough to absorb this tool without accelerating its own degradation?
That question reframes the entire readiness evaluation. Not whether the tool is ready to enter the system, but whether the system is ready to protect its shared resources from the dynamics the tool will create.
The Systems Thinking Resolution. Governance as Commons Management.
Elinor Ostrom won the Nobel Prize in Economics in 2009 for demonstrating that the tragedy of the commons is not inevitable. Communities around the world have successfully managed shared resources for centuries without either privatizing them or subjecting them to government regulation. They did it through collective governance structures that aligned individual incentives with collective outcomes, created monitoring mechanisms, and established graduated consequences for resource degradation.
That is exactly what clinical AI governance needs to be. Not a compliance burden imposed from outside. A collective governance structure that a practice builds for itself that makes rational individual behavior and collective resource health the same thing.
The AI readiness audit is the tool that reveals whether that governance structure exists before a new tool enters the commons. Not after it has already begun degrading what everyone depends on.
The tragedy of the commons in clinical AI is not solved by better physicians or stricter policies. It is solved by governance structures that align individual incentives with collective resource health, name stewards for shared resources, build monitoring mechanisms that detect degradation before it compounds, and redesign the conditions that make rational individual behavior collectively destructive. That is not a technology problem. It is a systems design problem. And it is solvable in every independent practice that is willing to think about AI deployment as a system intervention rather than a tool purchase.
Is Your Practice Ready to Deploy AI Without Triggering the Tragedy?
Our free AI Readiness Scorecard includes a commons health assessment that evaluates the state of your shared data resources, governance structures, and change capacity before recommending any deployment. Know whether your system is ready to absorb a new tool or whether the commons needs attention first.
Want us to run a commons health audit of your practice before your next AI deployment?
Book a free 30-minute discovery call here.
Sources and References
1. npj Digital Medicine. "Tracing the Pen: Electronic Health Records Amid the Rise of Generative AI." April 2026. Source for AI and human-generated content blending risk and clinical traceability analysis.
2. ScienceSoft. "Q1 2026 Healthcare AI Trends." March 2026. Source for ambient AI documentation quality standardization challenges at scale.
3. Healthcare IT News. "In 2026, Healthcare Data Will Show a Unified View of the Patient." January 2026. Source for the intelligent data management era and real-time validation as a standard expectation.
4. Pitech Solutions. "Healthcare AI Compliance in 2026: Beyond HIPAA." April 2026. Source for AI-layer BAA gap analysis and model ownership documentation requirements.
5. Digna AI. "Healthcare Data Quality Challenges in 2026." February 2026. Source for the 20 to 40 percent critical data point gap statistic from JAMIA research.
6. HHS ONC. "ONC Public Comment: AI Governance in Healthcare Settings." February 2026. Source for ongoing monitoring and drift detection requirements and the governance instrument framework.
7. Wolters Kluwer. "2026 Healthcare AI Trends: Insights from Experts." December 2025. Source for physician trust and transparency requirements in successful AI deployment.