
Hey everyone,
Let me walk you through something that actually happened.
A patient goes to the hospital with chest pain. The doctor runs tests, consults an AI diagnostic tool, and sends them home. Nothing serious. Probably stress.
Three days later they are back in an ambulance.
It was a heart attack. The AI missed it. The doctor trusted the AI.
Now here is the question nobody in medicine or law can cleanly answer yet. Who is responsible?
The doctor? The hospital? The tech company that built the algorithm? Nobody knows. And in 2026, that silence is genuinely dangerous.
1st Sponsor
How Jennifer Aniston’s LolaVie brand grew sales 40% with CTV ads
For its first CTV campaign, Jennifer Aniston’s DTC haircare brand LolaVie had a few non-negotiables. The campaign had to be simple. It had to demonstrate measurable impact. And it had to be full-funnel.
LolaVie used Roku Ads Manager to test and optimize creatives — reaching millions of potential customers at all stages of their purchase journeys. Roku Ads Manager helped the brand convey LolaVie’s playful voice while helping drive omnichannel sales across both ecommerce and retail touchpoints.
The campaign included an Action Ad overlay that let viewers shop directly from their TVs by clicking OK on their Roku remote. This guided them to the website to buy LolaVie products.
Discover how Roku Ads Manager helped LolaVie drive big sales and customer growth with self-serve TV ads.
AI is already inside your doctor's office
The FDA has authorized over 1,200 AI-driven medical devices. AI tools are now reading your X-rays, analyzing your heart rhythm, flagging your cancer risk, and influencing your treatment plan.
Most patients have no idea an algorithm helped make decisions about their body.
And it gets worse. The FDA's own recall data shows more than 60 AI-enabled medical devices were pulled from use in just over a year. One cardiac monitoring tool was approved without clinical trials and generated widespread false alarms that sent patients rushing to emergency rooms.
Approved. Without clinical trials. For your heart.
So who actually pays when AI gets it wrong?
There are four people you could point at.
The doctor, who trusted the recommendation without questioning it. The hospital, which chose to deploy the tool without proper vetting. The tech company, which may have rushed the product to market under investor pressure. And in some legal circles, people are seriously debating whether the algorithm itself should have a form of legal accountability.
China already tested this in court. In March 2025, the first AI medical misdiagnosis lawsuit was heard in Beijing. A patient's family claimed 3 million yuan in damages. The hospital, the AI company, and the doctor all ended up in the dock. The central question the court wrestled with was whether the doctor had been overly reliant on AI.
That question is going to define medicine and law for the next decade.
2nd Sponsor
Unlock The $4 Trillion Rent Roll: Compound Your Wealth Like the 1%
Institutional giants use the $4 trillion rental market to compound millions. Now you can too. mogul offers fractional ownership in elite rental properties with 18.8% average IRR and zero property management required. Secure your share of the wealth Wall Street once kept for itself.
Past performance isn't predictive and is illustrative only. Investing risks loss of principal; this is not an offer of securities. See important disclaimers.
What you can do right now as a patient
You are not legally entitled to know if AI diagnosed you. Nobody has to tell you unless you ask.
So ask. Before any diagnosis or procedure, ask your doctor directly whether an AI tool was involved in your care. Request written documentation of what was used and what it recommended. If something goes wrong, that paper trail matters enormously.
AI in medicine genuinely saves lives. It catches things human eyes miss. But right now it operates in a legal vacuum. When it gets it right, everyone takes credit. When it gets it wrong, nobody takes responsibility.
That needs to change. And it will. Probably after a series of very painful court cases force governments to finally write the rules.
The only question is how many patients get hurt before that happens.
Catch you next time.



