
To Err is Healthcare, to Continuously Improve is Divine

Safety Incident Reviews, Continuous Improvement, and Why AI Fits Naturally Into the Frameworks We Already Have

April 8, 2026 • Jeremy Hessing-Lewis

[Illustration: a computer buckled into a car passenger seat, holding papers labelled "AI Logic" and "Safety Protocols"]

There is not yet a settled standard of care for AI in healthcare. In Canada, regulation is a patchwork — medical device rules, provincial guidance, privacy law — and the uncertainty is real. This need not paralyze our efforts to improve patient care.

AI tools are not a departure from our existing safety infrastructure. They slot directly into the clinical safety incident frameworks and quality improvement (QI) programs that healthcare organizations have spent decades building.

This post examines how that integration works in practice, and how it can dramatically accelerate incident investigation, analysis, and remediation. The focus is not on whether existing models achieve due care, but on how they can iterate rapidly towards better care.

Mistakes Happen, and Can Be Caught

While the debate over an established standard of care for AI in healthcare continues, one constant remains: mistakes are inevitable, whether they originate from a human practitioner, an AI-powered tool, or the complex interplay of roles within the circle of care. Just as human error is addressed through QI frameworks, machine learning models, particularly Large Language Models (LLMs), have their own predictable failure modes that must be factored into continuous improvement programs. They can hallucinate, generating confident-sounding answers that are factually wrong. And they can misread nuance, missing context that an experienced clinician would catch.

While LLMs continue to improve and hallucination rates trend down, any platform operating at scale still requires sophisticated guardrails, both automated and human. Within nymble, this is the rationale behind our expert-in-the-loop safety system and clinical guardrail protocols.
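
To make the shape of an automated guardrail concrete, here is a minimal sketch in Python. The trigger list, the GuardrailResult type, and the hold-for-review behaviour are all invented for illustration; they are not nymble's actual protocols, which involve far richer clinical logic.

```python
# Minimal sketch of an automated guardrail screening an LLM reply before it
# reaches a patient. Triggers, thresholds, and escalation are illustrative
# assumptions only.
from dataclasses import dataclass, field

@dataclass
class GuardrailResult:
    passed: bool
    reasons: list[str] = field(default_factory=list)

# Topics that, in this toy example, route a reply to human review.
ESCALATION_TRIGGERS = ("dosage", "chest pain", "stop taking")

def screen_reply(reply: str) -> GuardrailResult:
    """Flag replies touching topics a clinician should verify first."""
    hits = [t for t in ESCALATION_TRIGGERS if t in reply.lower()]
    return GuardrailResult(passed=not hits, reasons=hits)

def deliver(reply: str) -> str:
    result = screen_reply(reply)
    if not result.passed:
        # Expert-in-the-loop: hold the message and queue it for review.
        return f"HELD for clinical review (triggers: {', '.join(result.reasons)})"
    return reply

if __name__ == "__main__":
    print(deliver("Your next appointment is Tuesday at 2pm."))
    print(deliver("You could double the dosage if symptoms persist."))
```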

AI Fits Naturally Into Safety Frameworks

Clinical safety incident review is a mature field. There are generally two types of incident reviews: a) systems reviews relating to the quality of care; and b) accountability reviews, directed at individual performance. For quality of care investigations, the goal isn’t to assign blame — it’s to understand what went wrong, why, and how to prevent it from happening again. That culture of learning maps directly onto how AI tools should be governed.

What makes AI particularly well-suited to this work is the evidence trail it leaves behind. Unlike human documentation, AI interactions generate structured, auditable records by default — consistent logs, cross-patient comparisons, and scalable data sets that quality teams can actually work with. In a traditional care setting, reconstructing what happened in a missed diagnosis might require pulling paper charts from three providers. In a structured AI-assisted interaction, the same reconstruction takes minutes, with every message, every flag, and every clinical decision point already timestamped and searchable.
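
As a rough illustration of what such a record can look like, here is a toy schema. The InteractionEvent type, its field names, and the search helper are assumptions made for this example, not nymble's actual data model.

```python
# Illustrative sketch of a structured, auditable AI interaction log.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class InteractionEvent:
    timestamp: datetime   # every message and flag is timestamped by default
    session_id: str
    event_type: str       # e.g. "message", "guardrail_flag", "clinician_note"
    content: str

def search(events: list[InteractionEvent], event_type: str) -> list[InteractionEvent]:
    """Reconstructing an incident becomes a query, not a chart pull."""
    return [e for e in events if e.event_type == event_type]

log = [
    InteractionEvent(datetime(2026, 3, 2, 14, 5, tzinfo=timezone.utc),
                     "sess-481", "message", "Patient reports persistent headache."),
    InteractionEvent(datetime(2026, 3, 2, 14, 6, tzinfo=timezone.utc),
                     "sess-481", "guardrail_flag", "Symptom escalation threshold met."),
]
print(search(log, "guardrail_flag"))
```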

Faster Incident-to-Action Cycles

Canadian Blood Services is a model of safety culture, rebuilt after thousands of Canadians were harmed by tainted blood in the 1980s and 1990s. The Krever Commission produced lasting reform, but it took years, while people were still being harmed. How do we get there faster?

We now have two major opportunities with AI in healthcare:

  1. Earlier detection. Automated guardrails can flag potential safety issues in near real-time, shifting the posture from reactive to preventive. Freed from routine administration, humans provide in-the-loop oversight and can flag anomalous activity that automation misses.
  2. Faster remediation. The same infrastructure that generates an incident generates the record for investigating it (see the sketch after this list). What once took months of review can compress into days.
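
Here is that sketch: a minimal illustration, under invented names, of detection and case-building happening in one step, so the flag that opens an incident carries the session evidence with it. Nothing here reflects nymble's actual implementation.

```python
# Sketch: the flag and the investigation record are created together,
# compressing the incident-to-action cycle. Names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Incident:
    opened_at: datetime
    source_flag: str
    evidence: list[str]   # the session log is attached at creation time

def on_guardrail_flag(flag: str, session_log: list[str]) -> Incident:
    # Detection and case-building in one step: the evidence that triggered
    # the flag is the evidence the reviewer starts from.
    return Incident(datetime.now(timezone.utc), flag, list(session_log))

incident = on_guardrail_flag(
    "possible medication interaction",
    ["14:05 patient message", "14:06 model reply", "14:06 guardrail flag"],
)
print(incident.source_flag, "-", len(incident.evidence), "evidence items attached")
```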

Achieving this requires a risk-based evaluation of where AI fits in our existing safety culture — not as a disruptive force that demands entirely new governance structures, but as a tool that works within them. The workflows, the reporting obligations, the continuous improvement culture: all of it carries over. What changes is the speed at which we can move from incident to insight to action. For healthcare organizations serious about patient safety, that is not a reason for delay. It is a reason to run.

Safety @ nymble

Nymble isn’t a medical device, but user safety drives everything we do. Safety incidents can surface through user reports, automated guardrails, or monitoring by our Clinical Safety Team — led by our Chief Medical Officer. As Data & Trust Officer, I lead our incident management program as well as development of our quality management system.

When an issue is flagged, it's triaged in minutes. Our team will often intervene directly within a user's session to address a concern at the source, then adjust our protocols and models to prevent a recurrence. Most safety programs ask "was this safe enough?" We ask whether we'd be comfortable explaining every decision to the patient sitting in front of us. The real opportunity in front of healthcare today is to set a higher bar.
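
Triage at that speed implies explicit routing by severity. As a purely hypothetical sketch (the severity tiers and routing targets below are invented for illustration, not our real workflow):

```python
# Hypothetical severity-based routing for a flagged event.
SEVERITY_ROUTES = {
    "critical": "page_clinical_safety_team",   # immediate in-session intervention
    "high": "clinician_review_queue",
    "low": "weekly_qi_review",
}

def triage(flag_severity: str) -> str:
    """Return the routing target for a flagged event; default to human review."""
    return SEVERITY_ROUTES.get(flag_severity, "clinician_review_queue")

assert triage("critical") == "page_clinical_safety_team"
```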


Jeremy Hessing-Lewis is Data & Trust Officer at nymble health. He leads nymble’s incident management program, quality management system, and data governance.