Avoiding AI biases: A Finnish assessment framework for non-discriminatory AI systems

Over the last five years, the development of artificial intelligence (AI) has progressed at an unprecedented rate. Machine learning algorithms promise to improve education, healthcare, recruitment and many other services, thanks to the ability of AI systems to personalize services for individuals. However, algorithms do not only generate societal benefits; they can also threaten equality and non-discrimination.

In the context of automated decision-making, biased or discriminatory training data can lead to discriminatory outcomes in ways that are difficult to detect. Beyond unrepresentative data, the process of algorithmic decision-making can itself be biased. What makes discrimination particularly problematic in the context of artificial intelligence is that algorithms are often opaque and difficult to explain from the user’s perspective.

The discriminatory structures and outcomes produced and maintained by AI systems are among the major challenges that public administrations must address. Finland’s current government programme notes the risk of discriminatory artificial intelligence and the need to create guidelines for the ethical use of AI. Avoiding AI biases: A Finnish assessment framework for non-discriminatory AI systems is a research project that responds to this need through a multidisciplinary lens.

The research project aims to extensively map the risks to fundamental rights and non-discrimination in machine learning-based AI systems that are either in use or planned for use in Finland. The project assesses the fundamental rights impacts of AI in different contexts, such as social services, finance and health care. The main goal of the project is to develop an evaluation framework to ensure the non-discrimination of AI systems in different use cases. The assessment framework aims to support the implementation of the Finnish Non-Discrimination Act and its obligation on public authorities to advance equality (sections 5-7) in the context of AI systems.

The project will:

–  Research which kinds of machine learning-based AI systems are widely used in Finland, what kinds of impact assessments they are based on and what discriminatory effects they could have (first findings available here)

–  Critically evaluate the discriminatory effects and fundamental rights impacts of algorithmic systems, taking into account the obligations imposed by the Non-Discrimination Act

–  Based on the research, develop an assessment framework to identify and avoid discriminatory effects of AI systems

–  Develop policy recommendations for applying the assessment framework through participatory stakeholder collaboration

The project is part of the Government’s analysis, assessment and research activities (VN TEAS, 148 800 €). The other members of the consortium led by Demos Helsinki are the University of Turku and the University of Tampere. The project will run from spring 2021 to summer 2022.

For more information:

Atte Ojanen, atte.ojanen@demoshelsinki.fi, +358 50 917 7994

Johannes Mikkonen, johannes.mikkonen@demoshelsinki.fi, +358 40 569 4948

Anna Björk, anna.bjork@demoshelsinki.fi, +358 44 508 5404

Feature Image: Joshua Sortino / Unsplash