ChatGPT Health
Last year, at a Microsoft event, a hospital showcased an AI application developed specifically for doctors.
They had built an assistive AI tool that compiled summaries of incoming patients based on their historical health data. Since healthcare is such a sensitive field, we naturally asked a ton of questions—like, "Is it making decisions instead of the doctor?"—but they heavily emphasized its purely supportive role.

It made a lot of sense to me then, and I still hold the same opinion today: for LLMs to truly become widespread, they need to specialize vertically. We need sector-specific, expertise-specific, and operation-specific LLMs.
Of course, given that we are talking about healthcare, you'd think an offline SLM (Small Language Model) would be a better fit, but apparently: "Who cares about the risks, right? As long as it gives us an answer!"
Even if OpenAI claims your data is stored in a private, highly secure environment, I can't help remembering when Instagram first launched its "Stories" feature. Stories were pitched as disappearing after 24 hours, and then, months later, the "Story Archive" shocked everyone: it turned out Meta hadn't deleted a single story! :)
Anyway, this suggests that, at least for a while, we are going to see more specialized versions of LLMs alongside general-purpose models being fed ever broader data. Maybe that will give some peace of mind to those who are terrified of the hallucination problem, huh?
For those who want to dive into the details: https://openai.com/index/introducing-chatgpt-health/
