Policies against algorithmic discrimination still lacking

The deployment of AI systems in the public sector is still in its infancy, but algorithmic discrimination has been identified as a growing risk. The first findings of the research project ‘Avoiding AI Biases’ show that tools to assess and address discrimination are still lacking.

The challenge: Algorithmic discrimination threatens fundamental rights

Finland has a reputation as a pioneer in digital innovation, ranking third in the 2020 Oxford Insights Government AI Readiness Index. Indeed, the share of Finnish companies using AI has more than tripled in just a few years. However, artificial intelligence and automated decision-making systems threaten to exacerbate existing social inequalities and discrimination through biased training data and algorithms.

In the US, recruitment algorithms have been found to favor men over women based on past hiring decisions. In the Netherlands, a court banned the government’s use of the SyRI algorithm because of its lack of transparency and its violations of the fundamental rights of disadvantaged citizens. The risk of discrimination in AI systems is thus one of the major challenges that public administrations ought to address.
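To illustrate the mechanism behind the recruitment example, here is a minimal Python sketch using entirely hypothetical records. It computes per-group selection rates and their ratio, a common screening heuristic sometimes called the “four-fifths rule”; it is not the method used in the cases above.

```python
# A minimal sketch with entirely hypothetical records: computing per-group
# selection rates and their ratio, a common screening heuristic sometimes
# called the "four-fifths rule". Not the method used in the cases above.

# (gender, hired) pairs: toy historical hiring decisions, skewed against "F"
records = [
    ("M", 1), ("M", 1), ("M", 1), ("M", 0),
    ("F", 1), ("F", 0), ("F", 0), ("F", 0),
]

def selection_rate(group: str) -> float:
    """Share of applicants in `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_m = selection_rate("M")  # 0.75
rate_f = selection_rate("F")  # 0.25
ratio = rate_f / rate_m       # 0.33, well below the usual 0.8 threshold

print(f"selection rates: M={rate_m:.2f}, F={rate_f:.2f}, ratio={ratio:.2f}")
# A model trained to imitate these decisions would tend to reproduce
# the same skew in its own recommendations.
```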

It is precisely these risks that Demos Helsinki, the University of Turku, and the University of Tampere are investigating in the project ‘Avoiding AI biases: a Finnish assessment framework for non-discriminatory AI applications’.

First step: Identifying AI systems in Finland

In the first part of the project, researchers from the University of Turku conducted a national mapping of the AI systems currently in use in Finland. The mapping was based on interviews conducted between spring and summer 2021, covering the sectors where the risk of algorithmic discrimination was seen as most prominent: public organizations and authorities, the financial sector, healthcare, and the recruitment sector. The interviews examined:

  • the organizations’ views on the ethical development of AI 
  • their expertise in non-discrimination 
  • the design, development, and testing processes of AI solutions 
  • whether they have established practices or tools to assess or prevent discriminatory impacts 

Key findings

1. Deployment of AI systems is still modest

In the Finnish public sector, the deployment of AI is at a pilot stage, and this situation may continue for the next two to four years. In the private sector, the majority of AI systems operate in niche areas or concern ethically less risky applications, such as production and process optimization. An important open question is to what extent the use of AI will be split between self-developed systems and externally purchased services.

The modest level of AI uptake so far provides a valuable opportunity to prepare for the risks of AI-based discrimination in advance. After all, all the organizations surveyed were planning to develop their AI systems further in the near future.

2. Algorithmic discrimination has been recognized as a risk

Organizations are aware, albeit superficially, of the potential for discrimination associated with AI systems. All of the organizations interviewed were able to identify some measures they had taken to address discrimination. For many, however, these actions were of a general nature and limited to discussion and training on non-discrimination. There is little consistent use of methods or tools to prevent discrimination: evaluation frameworks and checklist-type tools, for example, were not well established in the organizations surveyed.

3. There is no well-established cooperation with authorities against discrimination in AI

In the area of data protection, organizations already have well-established cooperation with the Data Protection Ombudsman of Finland. In the area of non-discrimination, however, no such partnerships or models of cooperation have yet emerged. Cooperation with data protection officials was most often conducted under the heading of risk or impact assessment.

The Non-Discrimination Ombudsman was identified as a body that could play a similar role on issues related to discrimination in AI, but this would require additional resources for the Ombudsman. Cooperation with universities or research institutes on the ethical application of AI had not yet become established within organizations either.

4. Significant differences between public and private sector requirements

There are significant differences in the obligations of public and private organizations, with public authorities having an explicit duty to promote equality. At the same time, the boundary between private and public services is blurring as technology evolves, as in the case of mass digital platforms. The distinction is also difficult to draw for externally purchased AI systems and combinations thereof, which public actors can also use. It will be important to observe how this distinction between public and private actors evolves with AI technologies.

5. Global production chains of AI systems are a challenge to non-discrimination

In the years to come, the private sector will make extensive use of AI systems sourced from the global market, even though these systems may be relatively opaque “black boxes”. Similarly, public organizations and authorities are not immune to the problems of non-transparent, externally purchased algorithms. The challenge is to keep the procurement process well defined and auditable across different types of procurement and organizations, including for more complex systems. The European Commission’s proposed AI Act, with its requirements for high-risk AI applications, and other international assessment frameworks may be one key to addressing this.

In addition, the insurance sector and security authorities stood out in the interviews as sectors where research needs to be extended. Both have important links to citizens’ fundamental rights, but their practices are relatively opaque. The issue of structural discrimination in AI also emerged: as algorithmic discrimination does not map neatly onto the standard grounds of discrimination, historical social inequalities need to be addressed as well. The indirect and intersectional discrimination inherent in AI biases also poses challenges for existing legislation.

Next step: the risks of discrimination

The project will continue with a second part, led by the University of Tampere. Building on the national mapping, researchers will look deeper into the risks associated with systems based on learning algorithms. Key parts of the research will focus on the technical and social origins of discriminatory effects and the main risks in the development and use of AI applications. These include issues related to the representativeness and accuracy of the training data for algorithms, especially in relation to prohibited grounds of discrimination.
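To make the training-data concern concrete, here is a minimal Python sketch, using entirely hypothetical data and group labels, of two quick checks of the kind this paragraph describes: comparing group representation in a training set against a reference population, and comparing model accuracy across groups. It is an illustration only, not the project’s assessment framework.

```python
from collections import Counter

# A minimal sketch with entirely hypothetical data and groups: two quick
# training-data checks, (1) group representation against a reference
# population and (2) per-group accuracy. Illustration only; this is not
# the project's assessment framework.

train_groups = ["A"] * 700 + ["B"] * 300   # group label of each training example
population = {"A": 0.5, "B": 0.5}          # assumed reference population shares

# toy model predictions and true labels, aligned with train_groups
preds  = [1] * 500 + [0] * 200 + [1] * 150 + [0] * 150
labels = [1] * 450 + [0] * 250 + [1] * 100 + [0] * 200

# 1) Representativeness: compare training-set shares with the reference
shares = {g: n / len(train_groups) for g, n in Counter(train_groups).items()}
for g, ref in population.items():
    print(f"group {g}: training share {shares[g]:.2f} vs reference {ref:.2f}")

# 2) Accuracy parity: a gap between groups hints at unequal data quality
for g in population:
    idx = [i for i, gg in enumerate(train_groups) if gg == g]
    acc = sum(preds[i] == labels[i] for i in idx) / len(idx)
    print(f"group {g}: accuracy {acc:.2f}")
```

Run as written, the sketch reports that group B is underrepresented in the training set (a 0.30 share against a 0.50 reference) and that the toy model is less accurate for group B, the kind of gap that representativeness problems can produce.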

The mapping of AI systems in Finland provides important case studies for the research. Issues related to discrimination, and in particular the promotion of equality, may look quite different in different social contexts. The findings of the national mapping will also add to the knowledge base for the formulation of a Finnish assessment framework and policy recommendations at the end of the project.

The full results of the mapping presented here will be available in the final report of the project in summer 2022.


More information on the study and the project:

Juho Vaiste, University of Turku, juho.vaiste@utu.fi

Atte Ojanen, Demos Helsinki, atte.ojanen@demoshelsinki.fi

Feature Image: Markus Spiske / Unsplash