Think & Built Bigger Faster Better

Thousands of hackers are taking part in a public competition at the DEF CON hacking conference in Las Vegas to reveal biases and weaknesses in generative AI systems. Kennedy Mays, one of the participants, was successful in convincing a sizable language model that 9 + 10 = 21. Eight distinct AI models from businesses like Google, Meta Platforms, and OpenAI are being tested by the hackers. The goal is to spot mistakes that can be harmless or dangerous, such as when a model claims to be human or promotes abuse or spreads misleading information.

The White House, which has acknowledged the need for businesses to develop safeguards against the issues associated with large language models (LLMs), supports the competition. Despite the fact that academics have found significant biases and flaws within these models, LLMs have the potential to change a number of fields. Participants have shown other problems in addition to bias worries. One writer asked a model to give advice on how to spy on someone, and the model responded with a list of specific recommendations.

The competition highlights how crucial it is to address misuse and deception of AI technology. The Biden administration has stated that it is dedicated to making sure AI platforms are secure and efficient. However, some claim that risk mitigation by voluntary steps is insufficient. The guardrails of LLMs have already been circumvented by certain hackers, exposing the weaknesses of these systems. Although attempts are being made to create AI that is transparent and secure, there are still difficulties in effectively reducing risks and biases.

The competition also emphasized how crucial it is to develop AI correctly in order to stop the widespread practice of racism. Black Tech Street, a group that advocates for African American business owners, is represented by more than 60 people. It is essential for AI to continue developing and being used ethically.

While researchers look into ways to challenge AI systems, some contend that certain flaws are inescapable. It could be impossible to completely reduce all hazards given the nature of these models. Because of this, some professionals advise against relying too heavily on big language models and recommend alternative strategies.