EU: AI Regulation Should Ban Social Scoring

October 10, 2023

Human Rights Watch, La Quadrature du Net, and EDRi have submitted a joint proposal to the European Council and the European Parliament to strengthen the regulation’s ban on social scoring. Investigations conducted in France, the Netherlands, Austria, Poland, and Ireland have shown that AI-based social scoring systems are impeding people’s access to social security benefits, jeopardizing their privacy, and profiling them in discriminatory and stereotyped ways. As a result, people find it harder to earn a living, secure housing, and buy food. The joint proposal calls on the EU to adopt substantive regulatory changes that address these harms and prevent the development of AI-based social scoring systems.

Proposed changes to the EU AI Act proposal – Social scoring prohibition:

Although the AI Act proposals put forward by EU institutions aim to restrict AI-based social scoring, their current wording could still permit practices and systems that enable its spread across the EU.

Under the current proposals, most AI systems used to determine who receives which public assistance benefits and services would be classified as “high-risk.” Many of these systems, however, use social scoring to determine whether recipients pose a fraud “risk” that warrants investigation and, where applicable, penalties. The AI Act should prohibit these systems because they subject people to unfair and harmful treatment based on their socioeconomic status and unnecessarily restrict their rights to social security, privacy, and non-discrimination.

The proposed amendment to Article 5 addresses the unacceptable risks created by the large-scale, automated social scoring systems used by European governments and businesses.

The scoring formula used by CAF, France’s family benefits fund, has been described by La Quadrature du Net as “a policy of institutional harassment” of people based on their socioeconomic status. It is troubling that such a system would fall outside the social scoring bans proposed by the European Commission and the European Parliament under the AI Act, even though it is a blatant example of social control rooted in a police logic of generalized suspicion, sorting, and continual evaluation of people’s movements and activities.

The scoring formula used by CAF is part of a larger pattern. Other public agencies, funds, and tax authorities in France are developing their own scoring systems. In the Netherlands, SyRI, a now-defunct risk assessment tool created by the Dutch government, accessed work and housing histories, benefits data, personal debt reports, and other sensitive data held by government agencies to flag individuals for fraud investigations. The Austrian government uses an employment profiling algorithm that restricts people’s access to job support programs and reproduces the unfair realities of the labor market. All of these systems put people’s rights to social security, privacy, and non-discrimination at unacceptable risk.

These cases illustrate the growing use of AI scoring systems that involve the automated, large-scale linking of files on large populations of people and the processing of their personal data. Such systems should be banned: because their harms stem from mass data collection and discriminatory profiling, procedural safeguards cannot adequately mitigate or prevent them.

These AI systems unnecessarily restrict people’s access to social benefits, violating their right to social security. Using AI to assess or categorize people as safe or dangerous has no place in a democratic society. As several MEPs have already noted during the negotiations on the European Parliament’s report on the draft AI Act, it is crucial to recognize that when the outcome of an automated AI assessment benefits one person, it disadvantages others. AI should therefore never be allowed to perform social scoring, because by its very nature it produces harm and unfair treatment.

The proposed amendment to Annex III would ensure that AI systems that do not perform social scoring but are used to assess eligibility for public benefits, associated public services, and health and life insurance benefits remain classified as “high risk.” The changes would also classify all credit scoring models as “high risk.” These systems, such as the SCHUFA score used in Germany, draw on data that is logically related to a person’s finances, such as their history of unpaid loans, fines, and invoices, to produce a score predicting how likely the person is to meet their financial obligations.
