
The field of artificial intelligence (AI) is evolving quickly, and users are experimenting with it in new ways. Medical advice is now among the things people seek from generative AI tools.

The National Eating Disorders Association (NEDA) in the US recently took down its artificial intelligence chatbot, “Tessa,” after reports that it was giving users harmful advice. According to a statement from NEDA CEO Liz Thompson, the chatbot was developed as a standalone program, not as a replacement for the organization’s helpline. Thompson added that the chatbot does not run on ChatGPT and is “not a highly functional AI system.”

AI health advice gives users access to a wide range of information, from symptoms and diagnoses to treatment and prevention. These suggestions from large language models (LLMs) are presented as if they were human-written tips, packaged into videos, articles, and reels that lend them a more genuine appearance. Following these AI health recommendations, however, carries real risks.

Chatbots dispensing medical advice as “AI doctors”

One AI-generated video, for instance, describes a “remedy” for maintaining healthy gums and teeth. It is one of many AI videos offering advice on natural cures and, in some cases, symptoms and treatments for specific ailments.

“The remedies shown in these videos are all presented as tried-and-true treatments, and many people use them as a first line of care before seeing a doctor. I don’t believe there is sufficient research or data to support these health recommendations, and blindly following anything is dangerous. Rather than relying on the AI’s fed intelligence, we should embrace medical expertise that can be tailored to the individual patient,” said Dr. Vandana Kate, president of the Indian Medical Association (IMA), Nagpur.