Asking tougher questions about the future of AI in healthcare at the 2026 Health Tech Summit.
Last week, Cornell Tech and Weill Cornell Medicine hosted the fourth annual Health Tech Summit. The event brought scientists, entrepreneurs, health system leaders, policymakers, and clinicians together to address how artificial intelligence is reshaping care delivery, clinical workflows, and the business of healthcare.
Last year’s summit explored whether AI belonged in healthcare and how it might enable more humane interactions. This year’s conversations moved decisively toward harder questions: Who captures the value? What obligations come with building in this space? And what actually differentiates a company when the underlying technology is increasingly accessible to everyone? Throughout the event, there was a clear recognition that healthcare deserves the speed that AI makes possible, but that speed must be matched with depth, trust, and a moral obligation to the patients and communities this technology is meant to serve.
The level playing field
AI has dramatically lowered the cost and complexity of building technology products. In a fireside chat, Chris Klomp, Deputy Administrator of CMS and Director of the Center for Medicare, recounted how an engineer recently confessed that his six-person team was committing the same volume of code as a much larger engineering team at his prior company, powered by $400-per-month cloud subscriptions. Across the summit, founders and investors echoed the same point: It has never been easier to build.
The market is now flooded with AI tools offering similar solutions, and when the technology itself is commoditized, the differentiator shifts to depth of understanding: how well product developers know the clinical workflow, the patient’s actual experience, and the regulatory complexity of the vertical they’re operating in. The competitive advantage now belongs to builders who are deeply familiar with healthcare and who leverage insight that no off-the-shelf model can replicate.
AI may be leveling the playing field for builders, but this abundance of opportunity is not necessarily extending to patients. Dr. Mitchell Katz, President and CEO of NYC Health + Hospitals, offered a sobering reminder during the summit’s second day. Historically, technology in healthcare has increased inequity, not reduced it, because expensive innovations create disparities between those who can afford them and those who cannot. AI has the potential to break that pattern. As one moderator put it, the goal is for AI to do for healthcare what smartphones did for connectivity, leveling access regardless of wealth or geography. But accessibility is not automatic. Ultimately, healthcare AI can’t realize its full potential if it does not level the playing field for patients as well.
Trust as the new moat
Across startup and investor panels, speakers agreed that in healthcare AI, trust is not just a nice-to-have — it is the primary competitive moat. Stephanie Sharron, a partner at Morrison Foerster, argued that building trust into AI products from the very beginning of the development process will separate companies that scale from those that stall. She pointed out that foundation model providers’ own self-validation processes remain unreliable, so startups cannot outsource this responsibility. “Relying on the foundation model providers to do the validation for you,” she said, “is not likely to be a winning path.”
The founders on the panel described what this looks like in practice. At Hyro, Israel Krush’s team builds around four principles: compliance that anticipates where regulation is heading, not just where it is today; control through deterministic guardrails for high-stakes moments; clarity about a technology that remains fundamentally a black box; and care for actual patient outcomes rather than efficiency alone. Aniq Rahman, founder of Fabric, emphasized accountability: when something goes wrong with Fabric’s AI, it is Fabric’s responsibility, not the foundation model provider’s. In an environment where anyone can string together something that works, the companies that own their outcomes are the ones that will earn the trust of health systems and patients over time.
The moral contract
Some of the most striking moments of the summit came in the form of personal appeals from healthcare leaders and providers. During Klomp’s fireside chat, he articulated a “social contract” for health tech entrepreneurs. “There are a million different ways you can make money,” he told the audience. “If you choose to make it in this space, you have a higher moral obligation and responsibility. You don’t get to move fast and break things, because when you break things here, you break people’s lives.”
This was not an abstraction. Klomp told the audience about the emails and phone calls he personally fields from Medicare beneficiaries who have exhausted every other avenue, a practice he adopted on the advice of his predecessor, former CMS administrator Andy Slavitt. Those conversations, he said, are what make a $1.1 trillion program serving 68 million Americans feel real. His challenge to the room was direct: Come to CMS, come to the FDA, tell the agencies where they’re getting in the way, but don’t pretend to solve a problem for patients when you’re really just solving your own.
Dr. Dave Chokshi, former New York City Health Commissioner and Founding Director of the CUNY Health and Opportunity Leadership Institute, grounded the point in clinical terms. Reflecting on a formative moment in his training when a senior physician quieted an entire room to focus on a critically ill patient, Chokshi compared that gravity to the current AI moment in health. The stakes demand that kind of attention. He offered a mantra that cut through the summit’s technology-forward energy: Technology has to be subservient to relationships. The most effective implementations, he argued, are not those that optimize processes in isolation, but those that lift up the relationships at the heart of excellent care. The pandemic had already taught this lesson. As Chokshi put it, our greatest problems are not those of intelligence; they are problems of implementation, of trust, of reaching the people who need healthcare most.
The connective tissue
A cross-cutting theme throughout the convening was data interoperability — the ability of health records, systems, and tools to seamlessly exchange information. The ambitious visions for healthcare AI, from patient empowerment to proactive clinical decision support to invisible care navigation, all depend on data flowing freely, safely, and under patient control. And that is not yet the reality.
Amy Gleason, Acting Administrator of DOGE and advisor to CMS, leads what may be the most significant current effort to change this. Her CMS Health Tech Ecosystem initiative, launched last summer at the White House, pursues a voluntary collaboration model after two decades of regulatory approaches that work on paper but not in practice. The initiative has convened major technology companies, EMR vendors, health systems, and patient-facing app developers around a shared goal: verified identity credentials, interoperable data networks, and a unified app library where patients can access their complete records with a single login. Gleason noted that she had expected HIPAA reform to be a major priority, but was surprised to find that most of the necessary legal framework already exists in some form. The problem is not a lack of rules but a lack of follow-through. And on that front, CMS is no longer waiting. In his own remarks, Chris Klomp described funding the Office of Inspector General to pursue million-dollar-per-instance penalties for data blocking, signaling that the push for compliance is serious.
Eric Horvitz, Chief Scientific Officer at Microsoft, added a technical dimension to this challenge in his keynote, noting that AI models in healthcare are typically not portable. A model built in one hospital system often sees significant performance drops when deployed in another, in part because the underlying patient populations, documentation practices, and data structures vary widely across institutions. Without progress on data interoperability, even the most capable AI tools remain confined to the environments that trained them.

