Your AI feature should propose, not decide
I helped build a clinical tool that uses AI to suggest implant sizing for joint-replacement surgeries. It's now used on every relevant case on the wards where we deployed it. Adoption hit 100% within weeks, surgeons report trusting the plans more, not less, and pre-op planning time dropped by about 40%.
This is unusual. Most AI features in healthcare die before they ship. The ones that do ship usually struggle for adoption — clinicians try them once, dismiss the recommendations, and never open the tool again.
Ours didn't, and the reason is a single product decision we made before writing a line of inference code: the AI proposes, the surgeon decides.
This sounds obvious. It is not what most teams build.
The default thing teams build
When you put a competent engineering team in a room with a trained model, the natural product is something like: "the AI predicts the answer, and the user confirms it."
There's a confirm button. You can override. Of course you can. We put the override there explicitly.
The problem is the framing. "Confirm" implies the AI is the primary actor and the human is the safety net. The flow is: model decides, human checks. Even when the override is one click away, the default position is "trust the model unless you have a reason not to."
In a domain where being wrong has consequences — surgery, lending, medical diagnosis, content moderation — that framing fails for two reasons:
- Cognitive load asymmetry. It is mentally cheaper to click confirm than to evaluate whether the model is right. After three confirmations, your brain stops evaluating. After thirty, the override might as well not exist.
- Liability asymmetry. When the surgeon clicks confirm and the implant is wrong, whose fault is it? The hospital's lawyers will ask. So will the surgeon's. Nobody wants to be the human in "human-in-the-loop" if the loop blames you for trusting the loop.
The result: senior clinicians refuse to use the tool at all. Junior ones use it, get burned once, and refuse forever after.
What we built instead
We flipped the framing. The surgeon plans the procedure. The AI shows up alongside their plan as a second opinion: "here's what the model would suggest for this image, with this confidence."
The surgeon's plan is the source of truth. The AI is a colleague offering a view. The surgeon can adopt the suggestion, modify it, or ignore it entirely — and in any of those three cases, they were the one making the decision the whole time.
Concretely:
- The first thing the surgeon sees is their tools, not the model's output. The model's suggestion appears as one panel among several.
- Adopting the suggestion takes one click. Ignoring it takes zero — the surgeon's plan moves forward without them having to actively dismiss anything.
- Every suggestion shows its reasoning: which segmentation regions the model identified, how confident it is, which similar cases in the training set it's drawing on.
- The audit log records the surgeon's plan and the AI's suggestion as two parallel artifacts. If they diverge, the divergence is visible, but neither is treated as "the right answer."
The framing change is small. The behavioural change is enormous. Surgeons stopped feeling supervised by software. They started feeling helped.
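To make the "two parallel artifacts" idea concrete, here's a minimal sketch in TypeScript of how a case record might hold the surgeon's plan and the model's suggestion side by side. The type and field names are illustrative, not our production schema.

```typescript
// Illustrative types only; the real product's schema is richer and not shown here.
interface SurgeonPlan {
  implantSize: string;   // e.g. "size 4"
  notes: string;
  authoredAt: Date;
}

interface ModelSuggestion {
  implantSize: string;
  confidence: number;    // 0..1, always shown to the surgeon
  evidence: string[];    // segmentation regions and similar training cases it cites
  generatedAt: Date;
}

// Both artifacts live side by side. Neither overwrites the other.
interface CaseRecord {
  surgeonPlan: SurgeonPlan;          // the source of truth
  modelSuggestion?: ModelSuggestion; // optional: the case is complete without it
}

// Divergence is surfaced in the UI and the audit log, never auto-resolved.
function diverges(record: CaseRecord): boolean {
  if (!record.modelSuggestion) return false;
  return record.surgeonPlan.implantSize !== record.modelSuggestion.implantSize;
}
```

The detail that matters is the optional field: the surgeon's plan stands on its own, and the suggestion is an attachment to it, never the other way around.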
Why this matters outside healthcare
Most of the AI features shipping in 2026 make the same default mistake: treating the model as the primary decision-maker and the human as a checker.
This is fine for low-stakes work. Autocomplete, search ranking, suggestions for what to watch next — the user doesn't care if the model is "in charge" because the cost of being wrong is small.
It is catastrophic for high-stakes work. If your AI feature is recommending which loan to approve, which content to take down, which line of code to deploy, which patient to admit — the framing matters more than the model.
The right question to ask, before you ship any AI feature, is: who is the principal here? Who carries the consequence of being wrong? If the answer is the human, then the product should make the human feel like the principal. The model is a tool they reach for, not a coworker they have to overrule.
Most products fail this test. They feel, to the user, like the model is in charge and the user is being asked to rubber-stamp it.
What this looks like in code
Three rules we apply:
- The user's input lands first. Whatever the user is doing — composing a plan, writing a draft, deciding a case — that input is rendered, saved, and treated as the source of truth before the model runs. The model output is a layer on top, not a replacement.
- Reject the "confirm" button. Replace it with "adopt suggestion" or "use AI's version." The verb matters. "Confirm" assumes the model is right by default. "Adopt" makes the user the active agent.
- Log both, prefer the human. Every AI-assisted action records both the model's suggestion and the human's final answer. If the two differ, the human's answer is what's used. The log is for learning what the model gets wrong, not for second-guessing the human.
These rules add maybe a day of design work and a few hundred lines of code. They are the difference between adoption and abandonment.
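Here's a minimal sketch, in TypeScript, of how those rules might land in code. The AssistedDecision shape and the function names are hypothetical; the point is the ordering and the verbs, not the specific types.

```typescript
// Hypothetical shape for an AI-assisted decision; not the product's actual schema.
interface AssistedDecision<T> {
  userAnswer: T;              // saved before the model ever runs
  modelSuggestion: T | null;  // null if the model wasn't invoked or returned nothing
  suggestionAdopted: boolean; // true only after an explicit "adopt" click
  decidedAt: Date;
}

// Rule 3: log both, prefer the human. What the system acts on is always the user's answer.
function resolve<T>(decision: AssistedDecision<T>): T {
  return decision.userAnswer;
}

// Rule 2: the verb is "adopt", not "confirm". Adopting copies the suggestion into the
// user's answer; the user remains the author of record.
function adoptSuggestion<T>(decision: AssistedDecision<T>): AssistedDecision<T> {
  if (decision.modelSuggestion === null) return decision;
  return {
    ...decision,
    userAnswer: decision.modelSuggestion,
    suggestionAdopted: true,
    decidedAt: new Date(),
  };
}
```

Note that the only way the model's answer becomes the answer is through an explicit adopt action, and even then the log records it as the user's answer plus a flag, not as the model overruling anyone.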
What I'd tell a younger me
You can't fix a badly framed AI product by improving the model. We've watched several products try. They retrain, push accuracy from 92% to 95%, and adoption stays flat — because the user learned, in the first week, that the tool wants them to defer to it. They never come back.
Get the framing right before you start training. The model can be mediocre and the product can succeed. The model can be excellent and the product can fail. The framing is the variable.
Surgeons trusted ours because it never asked them to trust it. It just stood next to them and offered a view, the way a junior colleague does on rounds. That's the bar.
Md. Tausif Hossain leads engineering at DevTechGuru, a Bangladesh-based agency shipping HealthTech, PropTech, and enterprise SaaS products to clients in nine countries. He also runs TechnicalBind, an independent software studio, and teaches advanced full-stack engineering at Ostad. Reach him at tausif.bd or @tausif1337.