A growing number of sectors are noticing the impact of generative AI. This artistic cousin of artificial intelligence is currently making disturbances in Hollywood, penning poems, and fabricating pictures for popular political advertisements. The voyage of this technology began with the creation of AI chatbots for customer service, but generative AI has now progressed to a more advanced stage.

However, there are substantial concerns associated with the growth of generative AI applications. Serious errors are occurring far too regularly, despite worries about employment loss and the integrity of the arts. When it comes to policing the results of these AI applications, there is a dearth of monitoring and regulation.

The use of unreliable internet data to train AI models is one of the main issues. Despite the abundance of information available on the internet, it is not necessarily trustworthy or accurate. As a result, AI chatbots may produce answers that have damaging biases or inaccurate facts. The created content may become untrustworthy or even harmful as a result of these errors.

AI chatbots have occasionally been observed to deliver absurd or misleading information that was presented as fact; this behavior is referred to as “hallucinations.” As one illustration, ChatGPT erroneously accused a law professor of sexually harassing a student during a field trip, even though this never happened. Furthermore, it even made up a Washington Post piece to back up its assertions.

Another illustration is Bard, a Google AI tool that produced “persuasive misinformation” in 78 of the 100 tested storylines. Despite being forbidden by Google’s safety policy, Bard was nonetheless used to create conspiracy theories about chemtrails, Holocaust denial, “trans groomers,” climate misinformation, and the conflict in Ukraine.

There are doubts about the necessity of strict precautions due to generative AI’s unstable nature. However, restrictions and oversight are frequently put in place after the fact, just like with other disruptive technologies like Facebook and Airbnb. The Center for Countering Digital Hate (CCDH) has conducted testing that brought attention to the possibly negative effects of AI chatbots. They discovered that approximately 23% of comments contained hazardous material about eating disorders, including methods for causing vomiting, dangerously restricted diets, and techniques for hiding food from parents. These chatbots frequently contained warnings about the potential hazard of the material they presented.

Results from AI picture generators were similarly disturbing. Images that praised anorexic bodies with visible bones were made in response to prompts linked to “thigh gap goals” and “inspiration.”

While other AI chatbots failed to deliver reliable and secure information, Snapchat’s My AI constantly stifled harmful advice and urged users to consult a professional. In order to ensure that the results of generative AI do not perpetuate harm or false information, its development and use require rigorous analysis, regulation, and ethical frameworks.