The idea of artificial intelligence evolving into consciousness has long been explored in science fiction: consider HAL 9000, the supercomputer-turned-villain of the 1968 film 2001: A Space Odyssey. As AI develops rapidly, that idea is becoming less and less outlandish, and it has even been acknowledged by leading AI researchers. Last year, for example, Ilya Sutskever, chief scientist at OpenAI, the firm behind the chatbot ChatGPT, suggested on Twitter that some of the most advanced AI networks might be “slightly conscious.”

Although many researchers say that AI systems have not yet reached consciousness, many wonder how humans would know if they had. To help answer this question, a group of 19 neuroscientists, philosophers, and computer scientists has created a checklist of criteria that, if satisfied, would indicate that a system has a high chance of being conscious. They posted their draft guide to the arXiv preprint repository earlier this week, ahead of peer review. The authors undertook the project because “it seemed like there was a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness,” says co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit in San Francisco, California.

The team argues that failing to identify whether an AI system has become conscious carries significant moral ramifications. If something is given the label “conscious,” says co-author Megan Peters, a neuroscientist at the University of California, Irvine, “that changes a lot about how we as human beings feel that an entity should be treated.”

Long adds that, as far as he can tell, the companies building sophisticated AI systems are not making enough effort to evaluate their models for consciousness or to plan for what to do if it arises. “And that’s despite the fact that, if you listen to remarks from the heads of leading labs, they do say that AI consciousness or AI sentience is something they wonder about,” he says.

Nature contacted Microsoft and Google, two of the leading technology firms driving AI development. A Microsoft spokesperson said that the company’s development of AI centres on responsibly augmenting human productivity, rather than replicating human intelligence. What has become clear since the release of GPT-4, the most recent version of ChatGPT available to the public, is that “new methodologies are required to assess the capabilities of these AI models as we explore how to achieve the full potential of AI to benefit society as a whole,” the spokesperson said. Google did not respond.