As an AI practitioner, advisor, author, and speaker, the two words I use the most in my work are “done right.”
Done right is a prerequisite to achieving the holy grail of artificial intelligence (AI): creating value at scale in service of the things we care about, like improving the quality of health and medical services or reducing the cognitive burden faced by clinicians and caregivers today.
In reality, most people and organizations start their AI journey by “doing things wrong.” This is not a criticism; it’s an acknowledgement that all of us—companies, clinicians, and consumers alike—are at the beginning of a long and hopefully fruitful learning journey.
Think about the process of mastering anything. Do you get things right the first time?
While AI had been smoldering for decades, the conflagration burst into everyone’s consciousness in the fall of 2022, when large language models and generative AI seemingly emerged from nowhere. The velocity of change these new creations are driving has surprised everyone, including those working in the field of AI.
We often measure new technology in terms of the time it takes to reach 100 million users. Instagram did it in 2.5 years. TikTok did it in nine months. ChatGPT hit 100 million users just two months after its introduction.
Today’s AI is changing everything, from how we work to how new business models are born. AI is what economists call a general purpose technology. Such technologies come along once every 100 years or so. Think electricity. Think the internal combustion engine. When they arrive, they change everything, from how we live and work to the very fabric of how societies operate.
Even though we’re still in the early days of our collective AI journey, too much attention is focused on pointing out where projects have stumbled or failed. Every misstep gets headlines. What’s missing is the recognition that failure, when approached responsibly, is not the end of the story—it’s part of the learning process. Each setback should be seen as a source of insight, a chance to refine our understanding and improve what comes next.
Nowhere is the principle of done right more urgent than in medicine. Healthcare is not a sandbox for experimentation. When AI is applied in clinical settings, mistakes can’t be written off as growing pains—they can have real consequences for patient safety. That’s why every health organization must operate within a responsible AI framework, one that ensures safeguards, oversight, and transparency are built into every use case. In this model, AI doesn’t replace clinicians; it augments their expertise, giving them sharper tools to make better decisions for patients.
When AI is done right, even our failures become fuel for progress. Every lesson learned—every false start, every recalibration—moves us closer to what really matters: creating a system of care that is safer, smarter, and ultimately better for all.
The learning curve of transformation
Every general purpose technology starts with promise and confusion. When electricity was first introduced, factories didn’t immediately redesign their workflows. They simply replaced steam engines with electric motors and expected transformation. When a young Alexander Graham Bell invented the telephone, no one knew what to do with it. Early ideas included using the phone to alert customers that a message had been received at the telegraph office.
AI is at a similar point today. Most organizations are still experimenting with “plug and play” use cases—swapping out human effort with AI in existing processes. But the real transformation comes when we reimagine healthcare itself: not just digitizing forms, but rethinking diagnosis, triage, patient engagement, and clinical decision-making from the ground up.
This reimagination requires learning at the individual, organizational, and societal levels. And, like any journey worth taking, mistakes are an inevitable part of the process.
I am not talking about free-form experimentation and unguarded mistakes. Again, developing and using a responsible AI framework is the first brick in the foundation on which any AI platform should be built.
Individual learning: The human side of AI
For clinicians, patients, and caregivers, the first step in the learning journey is simply exposure: trying AI, experimenting with it in daily work, and discovering both its utility and its limitations.

- Clinicians are learning how AI can reduce documentation burden, surface insights from patient records, and act as a tireless assistant—but also where its hallucinations and biases require human oversight.
- Patients are discovering new AI-driven tools for self-care, symptom-checking, and personalized recommendations—while also realizing that trust, privacy, and transparency are non-negotiable.
- Caregivers are beginning to rely on AI for reminders, monitoring, and coordination, but must learn how to balance these aids with the wisdom, empathy, and intuition that machines cannot provide.
Individually, we are all students of AI now. Just as we once had to learn to use email, smartphones, and electronic health records, we must now learn how to live and work alongside AI—not as a replacement, but as a partner.
Doing good, done right
Done right means more than technical accuracy. It means aligning AI with values, ethics, and the real-world needs of patients and providers.
- Equity: Ensuring AI does not widen disparities in access to healthcare.
- Transparency: Making clear when and how AI is involved in care decisions.
- Trust: Building confidence that AI recommendations are safe, fair, and evidence-based.
- Sustainability: Designing AI systems that reduce burdens rather than create new ones.
When these principles are applied, AI can amplify what healthcare does best: caring for people. Done right, AI won’t replace clinicians—it will restore humanity to medicine by giving clinicians back the time and focus to engage with patients.
Our collective learning journey will not be linear. There will be false starts, unintended consequences, and ethical dilemmas. But there will also be breakthroughs—moments when the right combination of human expertise and machine capability leads to better care, faster diagnoses, and healthier communities.
Tom Lawry is a leading AI transformation advisor to health and medical leaders around the world, a top keynote speaker, and the best-selling author of Hacking Healthcare: How AI and the Intelligent Health Revolution Will Reboot an Ailing System. He’s the managing director of Second Century Tech and a former Microsoft exec who served as national director for AI for Health and Life Sciences, director of worldwide health, and director of organizational performance for the company’s first health incubator. Prior to Microsoft, Tom was a senior director at GE Healthcare, the founder of two venture-backed healthcare software companies, and a health system executive. Tom’s work and views have been featured in Forbes, CEO Magazine, Harvard Business Review, CNET, Inside Precision Medicine, and numerous webcasts and podcasts. In a Harris Poll of 2023 J.P. Morgan Healthcare Conference attendees, Tom was named one of the most recognized leaders driving change and engagement in healthcare today. He has also been named one of the Top 20 AI Voices to Watch.
