Healthcare is deploying AI on a foundation it knows is incomplete — and the patient is sitting in the gap between the ambition and the infrastructure it requires. This analysis examines what the program data, the market dynamics, and the language of transformation are actually revealing about who owns the risk and who owns the data.
Healthcare's AI ambition and its data infrastructure are moving at different speeds. In this edition of Lens Four, Sean Martin examines what happens when those speeds collide — and who is accountable when the sequence is wrong.
🔍 In this episode:
Fourth Lens: Healthcare's AI ambition and its data infrastructure are moving at different speeds — and the patient is where those speeds collide. The program layer is making sequence choices. The market layer is accelerating pressure. The messaging layer is optimizing for ambition. None of it is an argument against innovation. All of it is an argument for discipline — A-to-Z, every dependency, ambiguity, and fragility along the way.
🎙️ Podcast conversations referenced in this article:
🔗 Full article and references: seanmartin.com/lens-four
🌐 HIMSS26 coverage: itspmagazine.com
Sean Martin is a cybersecurity market analyst, content strategist, and advisor with 30+ years across engineering, product development, marketing, and media. Co-founder of ITSPmagazine and Studio C60, host of the Redefining CyberSecurity Podcast and the Music Evolves Podcast. Connect at seanmartin.com.
Subscribe to Lens Four — Where business, innovation, and messaging come into focus.
🎯 Keywords: healthcare AI governance, order of operations AI, data foundation healthcare, vendor trust gap, patient data ownership, TEFCA, health information exchange, QHINs, Shadow AI healthcare, third-party risk management, supply chain resilience healthcare, Zero Trust healthcare, CMS interoperability framework, CIA triad healthcare, data integrity AI, identity management healthcare, HITRUST, Jason Kor, Ryan Patrick, Wolters Kluwer, Digital Medicine Society, DiMe, Google for Health, Jon McNeill, John Halamka, Mayo Clinic Platform, Sumbul Ahmad Desai, Apple Health, Daymond John, Dr. Mehmet Oz, Amy Gleason, Kim Brandt, DOGE healthcare, Stryker cyberattack, nation-state healthcare attack, HIMSS26, Redefining CyberSecurity Podcast, Lens Four, Sean Martin, ITSPmagazine
[00:00:00] Order of Operations: The Foundation Risk Healthcare AI Is Running Past
By Sean Martin | Lens Four | Read by TAPE9
Healthcare organizations are approving and deploying AI at a pace that assumes the foundational work is complete. In most cases it is not. The identity layer is imperfect. Vendor integrations are running AI capabilities that procurement never evaluated, and in some cases, vendors are using patient data flowing through their platforms to build capabilities the health system never authorized. The policy agenda is raising the stakes faster than the infrastructure can absorb them. When those conditions converge in a clinical environment, the failure mode is not a system error. It is a wrong output, delivered with confidence, to a provider making a patient decision. This analysis looks at the program conditions, the market dynamics, and the language patterns that are [00:01:00] allowing healthcare to call an incomplete foundation a transformation, and what responsible sequencing actually requires.
I look at cybersecurity and technology through three lenses: how organizations are running their programs and connecting security to real business outcomes, where market innovation is changing what's possible, and how the language around technology shapes what gets funded and what gets deferred. Right now, all three lenses are landing on the same problem in healthcare, and the fact that they're landing together is the signal worth paying attention to.
The program lens shows organizations approving AI deployments on foundations that aren't ready: identity layers with known gaps, vendor integrations running capabilities that procurement never evaluated, supply chain dependencies that haven't been stress-tested. The market lens shows vendors, investors, and a one-point-seven trillion dollar federal policy agenda all accelerating the pressure to deploy faster. And the [00:02:00] messaging lens shows a vocabulary ("transformation," "scale," "pilot to production") that is doing more work to describe ambition than it is to describe sequence. When the language of readiness and the language of ambition stop meaning the same thing, the gap between them is where the risk lives. In healthcare, the patient is sitting in that gap: sometimes willingly, having chosen the wearable, the app, the proactive care model. Sometimes with no choice at all, simply receiving care in a system where AI is already making decisions about them. Either way, they're in the rapid. The question is whether anyone upstream has checked the water.
LENS ONE: BUSINESS PROGRAMS
What is the actual state of the foundation healthcare AI is being built on?
The program data says most organizations are deploying AI ahead of the readiness their own frameworks would require.
There is a framework for this. Jon [00:03:00] McNeill, who scaled Tesla and Lyft before turning that operating experience into a repeatable methodology, calls it "the algorithm": question every requirement, remove unnecessary steps, simplify broken workflows, and then apply technology as the accelerant for the repaired process. John Halamka's work at Mayo Clinic Platform is the healthcare proof of concept: platform-based AI produces reliable clinical insights when the data underneath it is governed, consistent, and trustworthy. The framework is well understood. The sequence is not being followed at scale.
A survey of two thousand and forty-one healthcare leaders across ninety countries conducted by the Digital Medicine Society and Google for Health found that eighty-seven percent of executives cited lack of guidelines as a moderately or very important barrier to AI adoption, and eighty-eight percent cited resource allocation. A separate Digital Medicine Society analysis found that eighty-two percent of more than two hundred and thirty health systems have [00:04:00] limited or no governance processes for AI in place. These are not organizations that haven't heard the argument for AI governance. They are organizations that have heard it and haven't yet built it, while the deployments proceed.
The workforce data makes the governance gap visible at the clinical level. A Wolters Kluwer survey of five hundred and eighteen healthcare providers and administrators found that fifty-eight percent of frontline staff had used unsanctioned AI tools for work at least monthly, with the primary drivers being faster workflows and the absence of any approved alternative. That is not a workforce compliance failure. It is a program design failure. Organizations that have announced AI strategies have not built the governed AI infrastructure their clinical staff needs to do the work. The staff found their own solution. The organization retained the liability.
The vendor trust gap is the harder version of the same problem, harder because it [00:05:00] arrives through a channel organizations already trust. Trusted vendors are adding AI capabilities to products already deployed inside health systems, after contracts are signed, after integrations are built, after due diligence has closed. As Jason Kor of HITRUST described in a conversation recorded for the Redefining CyberSecurity Podcast, most procurement processes aren't built to close this gap, and most health systems have no mechanism to detect when it happens. In a general enterprise context, an unevaluated feature is a risk management problem. In a clinical context, where that AI is helping a provider determine a treatment path, it is a patient safety problem.
The supply chain failure mode arrived in concrete form when the Stryker attack became public: a nation-state operation that created a live disruption for hospitals depending on Stryker products and services to function. The hospitals were not breached. Their supplier was. As Ryan Patrick of [00:06:00] HITRUST analyzed in a post-incident conversation recorded for the Redefining CyberSecurity Podcast, third-party-related breaches have doubled in the last twelve months, and availability of services has moved into the same risk tier as confidentiality of data. That shift matters specifically in the AI context: a system operating on corrupted, incomplete, or unavailable data does not produce a visible error. It produces a confident wrong answer. The CIA triad of confidentiality, integrity, and availability exists precisely because all three pillars matter. Healthcare's AI programs are being designed as if only one of them does.
BETWEEN THE LENSES
Who owns the data the AI is running on?
Every stakeholder in healthcare has a claim on the patient's data. The patient is rarely the one who controls it.
The provider collected it. The insurer paid for the encounter that generated it. The vendor's platform stored it. The device manufacturer's hardware captured [00:07:00] it. The government program funded the care. And the patient, whose body produced all of it, typically has the least visibility into where it goes and what it's used for.
In practice, ownership is asserted by whoever controls access. That is rarely the patient.
Vendors are not passive custodians of that data. The platforms running inside health systems are learning from the data flowing through them, using provider workflows, patient interactions, and claims patterns to train models, refine algorithms, and build capabilities that become competitive advantages. That can create genuine value: better inference, smarter defaults, more accurate clinical decision support. But it is happening largely without explicit authorization from the health system, without visibility to the patient, and without an audit trail that would tell anyone what the data is actually being used to build. As the HIPAA Journal has documented, the arrival of AI in trusted vendor products often comes via [00:08:00] notification, a letter or email explaining that AI will now be part of the service, without a meaningful mechanism for the health system to evaluate what that means for its patients' data or its own liability.
TEFCA, the Trusted Exchange Framework and Common Agreement, now operational with eleven Qualified Health Information Networks serving as the national exchange backbone, defines six permitted exchange purposes: treatment, payment, healthcare operations, public health, government benefits determination, and individual access. What it does not define is who owns the data once a vendor operating as a Qualified Health Information Network participant receives it, processes it, and builds on top of it. The interoperability agenda moves data across systems. It does not move the ownership rights with it. The vendor that adds AI to an integrated product after the contract is signed is making a decision about data use that the health system never [00:09:00] authorized and the patient never knew was on the table.
The value and the risk are running in the same data flows. The accountability structure has not caught up to either one.
LENS TWO: INNOVATION AND MARKET SHIFTS
What is the policy agenda requiring, and is the infrastructure positioned to absorb it?
The CMS agenda is directionally correct and technically demanding. The data infrastructure it requires is still mid-build.
The Centers for Medicare and Medicaid Services has put a one-point-seven trillion dollar policy agenda covering one hundred and sixty million Americans on the table: the CMS Interoperability Framework to break down data silos, AI-powered fraud and waste elimination, and a patient-provider partnership model built on unprecedented data access and transparent pricing. CMS Administrator Doctor Mehmet Oz, alongside Amy Gleason of the U.S. DOGE [00:10:00] Service and Kim Brandt, CMS Deputy Administrator and COO, has made the direction clear. What the agenda requires technically is work that most of the sector is still mid-stream on.
Fraud detection at CMS scale requires claims data that is accurate. TEFCA is moving data across systems at a scale that would have been technically impossible five years ago. What it does not do is repair the identity errors embedded in that data before it starts moving. A record with a mismatched patient identifier does not become accurate because it now travels faster and farther. Patient data access at the scale CMS is describing requires accurate patient matching across every system that holds a record, which is precisely the identity problem health IT has been managing imperfectly for two decades. The policy is writing a check. The infrastructure is still mid-build on the account it is drawing from.
The implementation picture is more fractured than the federal agenda [00:11:00] implies. A cross-government analysis of digital health transformation across federal, state, and tribal systems (including CMS, the VA's Office of Information and Technology, and the Indian Health Service) makes the coordination problem visible: modernization is underway at every level, but it is happening in parallel, not in partnership. The communities where the coordination gap is widest (rural, tribal, underserved) are the same communities where infrastructure deficits are deepest and the consequences of data errors are most immediate. The policy agenda reaches them last. The risks reach them first.
Sumbul Ahmad Desai at Apple articulated the consumer health version of the same argument: wearables are enabling a genuine shift from reactive to proactive care models, with patients owning their health data and feeding it into personalized care plans and clinical research. Every part of that model (the AI [00:12:00] inference, the clinical integration, the personalized care pathway) is downstream of the identity and data integrity layer. More data moving faster into a poorly governed infrastructure does not improve patient outcomes. It amplifies the underlying problem with a more capable interface and a faster clock.
Identity is the load-bearing wall. Everything built on top of it inherits whatever errors are embedded in it. That is not an infrastructure opinion. It is a program risk calculation.
LENS THREE: LANGUAGE, MESSAGING, AND MARKET NARRATIVE
How is the market narrating this, and what is the framing leaving out?
The language of AI transformation is doing real work. Some of it is covering the sequencing problem it should be naming.
Healthcare's AI conversation has a vocabulary problem. "Transformation" implies a completed state. "Deployment" implies the hard [00:13:00] work is behind the organization rather than in front of it. "Pilot to production" frames the move to scale as an achievement rather than a risk event. The investor community is hearing a version of the market that emphasizes capital efficiency, proof of value, and speed to scale: the venture logic of moving fast. That logic runs directly into an operational reality where health systems are simultaneously trying to modernize legacy infrastructure, close identity gaps, govern AI for the first time, and resolve data ownership questions that have been deferred for years. The language is not lying. It is selecting. And what it is not selecting for is the sequence.
The Zero Trust conversation in healthcare is one place where the language is starting to catch up to the risk. Security practitioners who have been framing Zero Trust Architecture and identity-based access controls as ransomware defenses are now framing them as the infrastructure conditions that make trustworthy AI deployment [00:14:00] possible in the first place. That reframe is significant. It connects the security program to the AI program in a way that makes both more defensible and makes accountability clearer. If the identity layer is ungoverned, the AI program built on top of it is ungoverned. The CISO and the CIO share that exposure with the business leader who approved the deployment timeline.
The vendor trust gap requires the most scrutiny at the market level. Vendors have significant commercial incentive to describe their AI capabilities in terms of what is possible rather than what is governed. The quiet addition of AI features to integrated products is partly a product velocity decision and partly a market narrative decision: if the feature is already deployed, the conversation shifts from "should we evaluate this?" to "how do we govern what's already running?" That is a different conversation with a different power dynamic. Health system leadership approved the original vendor [00:15:00] relationship. They are accountable for what that relationship is now delivering into their clinical environment, whether they were told about it or not. And if a patient outcome suffers because of a capability that was never evaluated, the accountability chain does not end at the vendor's door. It starts there and runs back through every decision that allowed the deployment to happen without scrutiny.
THE FOURTH LENS
Healthcare's AI ambition and its data infrastructure are moving at different speeds, and the patient is where those speeds collide.
Here is the pattern, viewed across all three lenses at once.
The program layer shows organizations approving AI on foundations they know are incomplete: identity gaps unresolved, vendor integrations unevaluated, supply chain dependencies untested, data ownership questions deferred. The decision to proceed is not ignorance. It is a sequence choice: move the AI forward and address the foundation in parallel. [00:16:00] Any program manager who has shipped something complex knows what happens when you run critical-path items in parallel that should be sequential. The schedule looks faster. The risk does not go away. It defers, and it compounds.
The market layer shows vendors, investors, and the policy agenda all accelerating the pressure to deploy. The vendor that quietly ships AI into a trusted product is making a sequence choice too: ship first, seek approval if asked. The CMS agenda sets a policy clock that does not wait for the identity infrastructure to catch up. TEFCA is expanding the reach of that clock, moving data nationally at speed while the governance layer is still being assembled. The investor asking whether the product is differentiated is not asking whether the foundation is ready. These are not the same question. They are rarely in the same conversation.
The messaging layer shows a vocabulary that has optimized for ambition [00:17:00] and is underweighted on sequence. "Transformation" and "scale" are the words doing the work. "Prerequisites," "dependencies," and "order of operations" are not in the same sentence as the AI roadmap announcement. When the vocabulary of ambition is doing more work than the vocabulary of readiness, the accountability structure gets blurry, and in healthcare, blurry accountability has a patient sitting in the middle of it.
None of this is an argument against ambition. Innovation in healthcare is not optional: the status quo has its own costs, and the potential of AI to improve outcomes, reduce burden, and reach underserved populations is real and worth pursuing hard. The argument is about discipline. A-to-Z is not just a start and an end. It is every dependency in between. Every ambiguity that needs a decision before it becomes a failure. Every fragility that looks like a detail until it becomes the [00:18:00] reason the whole thing stops. The organizations that will get this right are the ones that can hold the ambition and the sequence at the same time, and that require their vendors to hold it too.
Healthcare AI is a complex program running on an incomplete foundation, with contested data ownership, a national exchange infrastructure still finding its governance footing, accelerating external pressure, and a patient sitting in the middle of it. Some of those patients chose to be there: the wearable, the app, the engaged care model. Others are simply in the system, receiving care, with no awareness that the rapid has already begun. Either way, the question is the same: does anyone upstream know what's in the water, and who is accountable if they didn't check?
This edition of Lens Four, and the voice reading it, represent the results of an interactive collaboration between Human Cognition and Artificial Intelligence.
If this analysis is useful to you, the full article with all [00:19:00] references, data points, and links to every podcast conversation mentioned is at seanmartin.com. Search for Lens Four, or find me directly at seanmartin.com.
Thanks for listening. Explore more at seanmartin.com.