Any government legislation on AI should centre on the possible risk the technology poses to human life itself, MPs have said. The twelve concerns that members of the Science, Innovation and Technology Committee said policymakers must address before the UK hosts a historic summit at Bletchley Park include issues of public safety and wellbeing.
At the summit in November, which will be hosted at Britain’s Second World War code-breaking headquarters, Rishi Sunak and other leaders will address the opportunities and dangers presented by AI.
The venue is significant in the history of the technology: it was at Bletchley Park that Alan Turing and other codebreakers decrypted Nazi communications, aided by machines such as the Colossus computers.
The committee’s chair, Conservative MP Greg Clark, said he “strongly welcomes” the summit but cautioned that the government may need to act with “greater urgency” to prevent prospective legislation from being quickly superseded as the US, China, and EU develop their own AI regulations.
The following 12 issues, according to the committee, “must be addressed”:
- Existential peril – if AI poses a serious threat to human life, as some academics have warned, then regulation must offer national security protections.
- Bias – AI has the potential to both create and reinforce social biases.
- Privacy – AI models may be trained using sensitive data about people or companies.
- Misrepresentation – Language models like ChatGPT can generate content that misrepresents someone’s behaviour, opinions, and character.
- Data – The most powerful AI models require vast volumes of data to train.
- Computing power – Similarly, developing the most powerful AI models demands enormous computing resources.
- Transparency – It is often difficult to explain why an AI model reaches a particular conclusion, or where its training data came from.
- Copyright – Generative models that draw on existing content, whether text, images, audio, or video, must not be allowed to undermine copyright and harm the creative industries.
- Liability – Policy must establish whether the developers or the providers of AI tools are responsible when those tools cause harm.
- Employment – Politicians need to anticipate how AI adoption is likely to affect existing jobs.
- Openness – To enable more reliable regulation, foster transparency, and encourage innovation, the computer code behind AI models may be made publicly available.
- International coordination – Any regulation-making process must be carried out on a global scale, and the November summit must include “as many countries as possible.”