AI TRANSFORMATION SYSTEMS THINKING April 28, 2026 · 13 min read

The Tragedy of the Commons Is Quietly Destroying Your EHR Data. Your AI Is Making It Worse.

Every physician in your practice is behaving rationally. No single physician is doing anything wrong. And together they are quietly degrading the shared data resource that every AI tool in your clinic depends on for accurate outputs. This is the tragedy of the commons playing out in clinical AI right now. And most AI readiness assessments never ask whether it is happening in your practice before recommending deployment.

Elevare Health AI Inc.
HIT & AI Transformation Consulting, Cedar Falls, Iowa

Garrett Hardin published "The Tragedy of the Commons" in 1968. He described a pasture shared by multiple herders. Each herder, acting rationally in their own interest, adds one more animal to the pasture. The individual benefit of each additional animal is positive. The individual cost of the shared resource degradation is distributed across all herders. So each herder keeps adding animals. Every herder makes the rational decision. The collective outcome is the destruction of the pasture that all of them depend on.

This is happening in your EHR right now.

Every physician in a multi-provider independent practice shares a common resource. The EHR database. The patient records. The medication lists. The allergy documentation. The problem lists. Every physician draws from this shared resource to deliver care and contribute back to it through documentation. And every physician, acting rationally under the time pressure of a full clinic day, is making documentation decisions that individually seem inconsequential and collectively degrade the shared resource that all clinical care and all AI tools depend on.

// THE CORE ARGUMENT

No physician in your practice is the problem. The system that creates the conditions for rational individual behavior to produce collective resource degradation is the problem. And when you deploy AI tools into a system where the commons is already degrading, those tools do not just inherit the problem. They amplify it. Garbage in at scale becomes garbage out at scale. The AI works faster. The errors propagate further. The degradation compounds.

40% — of critical data points are missing from typical EHR records, according to JAMIA research on clinical decision support barriers
27% — increase in time spent with patients when ambient AI documentation works correctly; falls to near zero when data quality degrades
2026 — the year healthcare enters intelligent data management, where real-time validation and data quality scoring become essential

What the Tragedy of the Commons Means in a Clinical Setting

The shared resource in a clinical practice is not a pasture. It is the accuracy and completeness of the clinical data environment. Patient medication lists. Active problem lists. Allergy records. Social history. Clinical notes that the next physician to see this patient will rely on to understand their history.

Every physician who signs an ambient AI-generated note in 8 seconds without genuinely reviewing it is adding one more animal to the pasture. The individual time saving is real and immediate. The individual contribution to collective data degradation is diffuse and delayed. The rational choice is to sign quickly. Every physician makes the rational choice. The pasture degrades.

Research published in npj Digital Medicine in 2026 identifies a critical risk emerging from ambient AI documentation. The integration of AI-generated content in clinical workflows introduces the risk of blending AI and human-generated content within EHRs in ways that undermine the traceability of clinical information. Without clear provenance for each piece of clinical data, the shared record becomes a mixture of verified clinical observations and AI-generated approximations that no subsequent clinician can reliably distinguish.[1]

That blending is the tragedy. The physician who generates a note next week reads the AI-generated note from last week as if it were a clinical observation. They build on it. The AI documentation tool reads the AI-generated note as training signal or context. It builds on it. The error or approximation from last week becomes the foundation for this week's documentation. Compound interest on clinical inaccuracy.
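The "compound interest" framing can be made concrete with a toy model. As a purely illustrative assumption (the rate is hypothetical, not from the research cited here), suppose each documentation cycle leaves some small fraction of notes carrying an unreviewed AI approximation that later notes build on. If nothing ever catches the errors, the share of records touched by at least one approximation after n cycles is 1 − (1 − e)^n, which grows fast:

```python
# Toy model of compounding documentation error. ERROR_RATE is a hypothetical
# illustration, not a measured figure from the cited research.

ERROR_RATE = 0.02  # assume 2% of notes per cycle carry an unverified approximation


def contaminated_fraction(cycles: int, error_rate: float = ERROR_RATE) -> float:
    """Fraction of records touched by at least one unverified approximation
    after `cycles` documentation cycles, if errors are never caught."""
    return 1 - (1 - error_rate) ** cycles


for cycles in (1, 12, 52):
    print(f"{cycles:>3} cycles -> {contaminated_fraction(cycles):.1%} of records affected")
```

At a weekly documentation cadence, a 2% per-cycle rate touches roughly two-thirds of records within a year. The exact numbers do not matter; the shape of the curve is the point.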

The Four Common Resources That Clinical AI Degrades When Governance Is Absent

The tragedy of the commons in clinical AI is not limited to a single resource. There are four shared resources that degrade simultaneously when individual rational behavior operates without collective governance.

// COMMONS 1
EHR Data Quality
The accuracy and completeness of the shared patient record. Every physician who approves an AI note without genuine review contributes one unit of degradation to this commons. The degradation is invisible in the short term and catastrophic in the long term when AI tools trained on or informed by the record begin propagating its inaccuracies at scale.
// COMMONS 2
Physician Trust in AI
Trust is a shared resource in a multi-provider practice. When one physician has a bad experience with an AI output and shares it with colleagues, the trust commons depletes for everyone. When the practice has no mechanism for logging and resolving AI errors, the trust commons depletes without any corresponding investment to replenish it. Low trust produces inconsistent adoption. Inconsistent adoption produces variable data quality. Variable data quality produces more bad outputs.
// COMMONS 3
Compliance Documentation Integrity
HIPAA compliance documentation is a shared resource. Every staff member who takes a shortcut under time pressure, every training record that is marked complete without genuine engagement, every BAA that expires unnoticed, degrades the shared compliance posture of the entire practice. No individual staff member caused the compliance gap. The collective behavior of every individual acting rationally under pressure produced it.
// COMMONS 4
Organizational Change Capacity
The ability of a practice to absorb and adapt to new technology is a finite shared resource. Every AI tool deployment that fails or underperforms draws from this resource. Every failed deployment makes the next deployment harder because the trust and goodwill required for change have been depleted. Organizations that deploy too many tools too quickly exhaust their change capacity commons and find themselves unable to adopt even well-designed tools because the organizational appetite for change has been consumed by previous disappointments.

How AI Amplifies the Tragedy Instead of Resolving It

The conventional assumption about AI in healthcare is that it improves data quality by automating documentation and reducing the human error that comes from fatigue and time pressure. This assumption is correct under one condition: the AI tool is deployed into a system with governance structures that prevent the tragedy of the commons from operating.

Without those governance structures, AI does not improve data quality. It accelerates the degradation.

Here is the mechanism. An ambient AI documentation tool generates notes at scale. Scaling ambient AI exposes a critical limitation. Documentation quality becomes harder to standardize across different clinical contexts. As deployments expand across settings and departments, variations in workflows, terminology, and documentation practices create inconsistencies that accumulate in the shared record.[2]

Each inconsistency is individually small. A medication documented at a slightly different dose than the pharmacy record shows. A problem list item that was resolved three months ago but remains active in the AI-generated note because the AI did not have access to the discharge summary. A social history that reflects the patient's circumstances from two years ago because the AI drew on earlier records.

Each of these inconsistencies is a unit of commons degradation. The AI generates hundreds of notes per day. Hundreds of units of degradation per day. The physician review process that was supposed to catch them is operating at 8 seconds per note because the AI was supposed to reduce documentation burden and instead increased note volume. The governance structure that was supposed to monitor data quality was never built because the AI readiness assessment that approved the deployment never asked whether the commons was already degrading before the tool went live.

The HIPAA Commons. Where Compliance Degradation Follows the Same Pattern.

The tragedy of the commons does not only operate in EHR data quality. It operates with equal force in HIPAA compliance programs.

Consider the shared compliance documentation resource in a 5-provider independent practice. The compliance program exists as a shared asset. Policies that protect every provider. Training records that demonstrate every provider's knowledge. BAAs that cover every provider's vendor relationships. A risk assessment that maps every provider's security posture.

Each provider has individual incentives that work against the shared compliance resource.

// THE HIPAA COMMONS DEGRADATION PATTERN
📋
Policies
Each provider individually has no incentive to notice that the policies reference a system replaced 18 months ago. The policy document works for today's encounter regardless of whether it accurately reflects current systems. The individual rational behavior is to not raise the issue. The collective outcome is a policy document that no longer represents the practice's actual compliance posture.
🤝
BAA Register
Each provider who adopts a new AI tool without routing it through the compliance program is adding one more animal to the pasture. The individual benefit of not going through the procurement process is immediate. The collective cost of an uncovered vendor relationship is distributed across all providers and delayed until an audit reveals it. The rational individual choice produces a collective compliance gap.
🎓
Training Records
Each staff member who completes a training module by clicking through it without genuine engagement preserves their individual compliance record while contributing nothing to the shared knowledge commons. The training record shows completion. The knowledge commons is unchanged. When an incident occurs the shared resource fails despite the documentation showing it was maintained.
🔍
Security Risk Assessment
The SRA is the most obvious commons resource in a compliance program. It is reviewed periodically, updated sporadically, and owned by nobody specifically. Every individual in the practice benefits from having a current SRA. No individual has a strong enough individual incentive to take ownership of keeping it current. The collective outcome is the 3-year-old risk assessment that underlies every other compliance gap we have described in this series.
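One way to give these compliance commons a steward is to make staleness visible automatically. The sketch below is a minimal illustration of that idea; the artifact names, dates, and review intervals are hypothetical assumptions, not regulatory requirements:

```python
from datetime import date, timedelta

# Hypothetical register of shared compliance artifacts. Review intervals are
# illustrative assumptions, not legal thresholds.
ARTIFACTS = [
    {"name": "EHR vendor BAA",           "last_reviewed": date(2024, 1, 15), "max_age_days": 365},
    {"name": "Ambient AI vendor BAA",    "last_reviewed": date(2026, 2, 1),  "max_age_days": 365},
    {"name": "Security Risk Assessment", "last_reviewed": date(2023, 3, 1),  "max_age_days": 365},
    {"name": "HIPAA policies",           "last_reviewed": date(2024, 9, 1),  "max_age_days": 730},
]


def stale_artifacts(register, today):
    """Return the names of artifacts whose last review exceeds the allowed age."""
    return [a["name"] for a in register
            if today - a["last_reviewed"] > timedelta(days=a["max_age_days"])]


print(stale_artifacts(ARTIFACTS, date(2026, 4, 28)))
```

Run monthly, a check like this turns "owned by nobody specifically" into a named, scheduled responsibility: the 3-year-old SRA gets flagged before an audit finds it.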

The AI Readiness Audit Question That Prevents the Tragedy

Standard AI readiness assessments ask whether the infrastructure is ready, whether the data is clean enough, whether the governance framework is in place. Most health systems have solid BAA coverage for their EHR vendors and established SaaS platforms. But the AI layer on top of those systems is often not covered with the specificity today's environment requires. Who owns the model? Where is training data stored? What happens to PHI in the training pipeline? How are model updates validated before deployment? These questions need documented answers in the vendor relationship, not unstated assumptions.[4]

The tragedy of the commons audit question that most assessments never ask is different.

Is the shared resource healthy enough to absorb this tool without accelerating its own degradation?

That question reframes the entire readiness evaluation. The question is not whether the tool is ready to enter this system. It is whether the system is ready to protect its shared resources from the dynamics the tool will create.

2
Identify every shared resource the tool will affect and assign a named steward to each
The tragedy of the commons operates when shared resources have no named steward. Name one person responsible for EHR data quality. Name one person responsible for AI note accuracy monitoring. Name one person responsible for vendor BAA currency. These do not need to be full-time roles. They need to be named responsibilities with defined tasks on defined schedules. A commons rarely degrades when someone is accountable for its health.
3
Redesign individual incentives to align with collective resource health
The tragedy operates when individual incentives are misaligned with collective outcomes. Redesign the incentives. Note quality metrics that affect physician performance reviews align individual incentives with EHR data quality. Compliance completion metrics that measure knowledge retention not just click-through rates align individual incentives with training commons health. Systems thinking resolves the tragedy not by blaming individuals but by redesigning the incentive structure that makes rational individual behavior collectively destructive.
4
Build a commons health monitoring system before go-live
Without ongoing monitoring and drift detection, degradation may go unnoticed, which undermines trust among clinicians and increases institutional risk. Governance instruments including documentation, evaluation, audit, and oversight must travel with algorithms into real clinical contexts.[6] A monthly data quality review that samples 20 notes per provider and flags accuracy concerns is a commons health monitoring system. It costs 30 minutes per month. It prevents the compounding degradation that costs vastly more to repair than to prevent.
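The monthly sampling review described above can be scripted in a few lines. This is a hedged sketch under assumptions of our own: notes are available as (provider, note id) pairs, and the reviewer works from the returned plan; only the 20-notes-per-provider figure comes from the text.

```python
import random
from collections import defaultdict


def sample_notes_for_review(notes, per_provider=20, seed=None):
    """Draw up to `per_provider` notes per provider for manual accuracy review.

    `notes` is an iterable of (provider_id, note_id) pairs. A fixed `seed`
    makes the monthly sample reproducible for audit purposes.
    """
    rng = random.Random(seed)
    by_provider = defaultdict(list)
    for provider_id, note_id in notes:
        by_provider[provider_id].append(note_id)
    return {
        provider: rng.sample(ids, min(per_provider, len(ids)))
        for provider, ids in by_provider.items()
    }


# Example: 3 hypothetical providers, 50 notes each this month.
notes = [(f"dr_{p}", f"note_{p}_{n}") for p in range(3) for n in range(50)]
plan = sample_notes_for_review(notes, per_provider=20, seed=42)
```

The design choice worth noting is the fixed per-provider sample rather than a global one: it keeps the review burden even and ensures no provider's documentation drifts unmonitored because their note volume is small.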
5
Assess the change capacity commons before adding any new tool
Ask honestly. How many AI tools or significant technology changes has this practice absorbed in the last 12 months? What is the current level of staff and physician enthusiasm for new technology? Has the last deployment fully stabilized or is it still consuming change capacity? Healthcare providers have told industry experts repeatedly that they want solutions that scale across existing workflows and demonstrate value beyond speed. Trust and transparency are the currencies of successful deployment.[7] Both require change capacity to build. If the commons is depleted a new deployment will not build trust. It will consume what remains.

The Systems Thinking Resolution. Governance as Commons Management.

Elinor Ostrom won the Nobel Prize in Economics in 2009 for demonstrating that the tragedy of the commons is not inevitable. Communities around the world have successfully managed shared resources for centuries without either privatizing them or subjecting them to government regulation. They did it through collective governance structures that aligned individual incentives with collective outcomes, created monitoring mechanisms, and established graduated consequences for resource degradation.

That is exactly what clinical AI governance needs to be. Not a compliance burden imposed from outside. A collective governance structure that a practice builds for itself that makes rational individual behavior and collective resource health the same thing.

The AI readiness audit is the tool that reveals whether that governance structure exists before a new tool enters the commons. Not after it has already begun degrading what everyone depends on.

// THE SYSTEMS THINKING RESOLUTION

The tragedy of the commons in clinical AI is not solved by better physicians or stricter policies. It is solved by governance structures that align individual incentives with collective resource health, name stewards for shared resources, build monitoring mechanisms that detect degradation before it compounds, and redesign the conditions that make rational individual behavior collectively destructive. That is not a technology problem. It is a systems design problem. And it is solvable in every independent practice that is willing to think about AI deployment as a system intervention rather than a tool purchase.

Is Your Practice Ready to Deploy AI Without Triggering the Tragedy?

Our free AI Readiness Scorecard includes a commons health assessment that evaluates the state of your shared data resources, governance structures, and change capacity before recommending any deployment. Know whether your system is ready to absorb a new tool or whether the commons needs attention first.

Want us to run a commons health audit of your practice before your next AI deployment?
Book a free 30-minute discovery call here.

// Sources and References