Dear Editors,
I read the article “Positive, negative and worrisome consequences of AI in the medical field” in the last MJCMP issue.
It is a very informative article about the current status of AI in relation to the medical field.
I have also come across various reports of mishaps experienced by professionals, including those in the medical domain, when using AI.
These reports noted that some AI programs still lack accuracy when users seek answers to their questions.
Some evidence showed that users encountered dangerous and professionally unacceptable responses when relying on AI as their sole digital source of answers.
We should be aware of the risks.
The root of the problem lies in the so-called LLMs, or large language models, used by most AI software. They are trained on massive amounts of input data, right or wrong, and their output of facts and figures inherits those limitations.
AI still needs time to compete with, let alone replace, the naturally gifted functions of the human brain. It lacks the empathy, sympathy and ethical standards required for holistic, individualized healthcare.
Nevertheless, AI can assist medical professionals in their daily clinical practice, but the final decision must be made by humans.
Sincerely yours,
Dr. Zay Ya Aye




