Demos Helsinki, the University of Turku and the University of Tampere partnered to help the Finnish Government develop a framework for the ethical and equitable use of AI.
AI systems can have a positive impact on education, healthcare, recruitment and many other services. At the same time, biased algorithmic decision-making poses threats to equality and non-discrimination.
We have thus completed an assessment framework that assists developers in building equity into AI services in the public sector at every stage, from design to deployment.
To put the assessment framework to good use, we have also developed a set of policy recommendations.
The assessment framework and policy brief are part of the VN TEAS-funded research project Avoiding AI biases: A Finnish assessment framework for non-discriminatory AI systems.
Through this research project we sought to answer three questions: What kinds of machine learning-based AI systems are in use in Finland, especially in the public sector? What are their possible discriminatory risks, and how have they been addressed so far? Which impact assessment methods are most suitable for minimising the risk of bias and discrimination?
Research findings: Algorithmic biases arise from the socio-technical context around AI
First, we conducted a national mapping of AI systems in use in the public sector with possible impacts on fundamental rights, based on interviews with key organisations. Second, we performed an in-depth analysis of the discriminatory risks of AI systems, the methods developed to identify and prevent them, and the potential challenges of using these methods. Finally, we co-developed an assessment framework for identifying and managing the risks of algorithmic discrimination and for promoting equality in the use of AI. Based on this, we produced policy recommendations to improve the regulation of AI systems.
The national mapping showed that the adoption of AI systems in Finland is still modest. While the public sector is reasonably aware of the discriminatory risks of AI systems, there is no clear model of cooperation between authorities to tackle them. The research also highlighted the different responsibilities of private and public sectors in tackling discrimination and concluded that the global production chains of AI systems pose a severe challenge to non-discrimination due to their lack of transparency.
The research revealed that indirect and intersectional discrimination are key concerns in the use of AI across sectors and industries. Discriminatory biases arise in the value chain of AI systems through the interaction of social, cultural and technological factors, which means we must look at the broader socio-technical context around AI. While the debate has so far focused mainly on algorithmic bias and discrimination, this research also highlights the opportunities for the positive promotion of equality through the use of AI.
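To make the idea of intersectional discrimination concrete, the sketch below uses a small, entirely hypothetical set of decision records; the data, attribute names and numbers are illustrative and are not drawn from the project or its framework. It shows how selection rates can look balanced when each protected attribute is checked on its own, while checking the intersection of two attributes reveals a stark gap.

```python
# Illustrative sketch only, with invented data: not part of the published framework.
# Shows how an intersectional disparity can stay hidden when selection rates
# are checked one protected attribute at a time.
from itertools import product
from collections import defaultdict

# Hypothetical decision records: (gender, language group, positive decision)
records = [
    ("female", "finnish", 1), ("female", "finnish", 1),
    ("female", "other",   0), ("female", "other",   0),
    ("male",   "finnish", 0), ("male",   "finnish", 0),
    ("male",   "other",   1), ("male",   "other",   1),
]

def selection_rate(rows):
    """Share of positive decisions in a group of records."""
    return sum(r[-1] for r in rows) / len(rows) if rows else float("nan")

# Single-attribute view: both genders and both language groups get a 0.5 rate...
for attr, idx in (("gender", 0), ("language", 1)):
    groups = defaultdict(list)
    for r in records:
        groups[r[idx]].append(r)
    print(attr, {g: round(selection_rate(rows), 2) for g, rows in groups.items()})

# ...while the intersectional view exposes groups with rates of 1.0 and 0.0.
for g, lang in product({r[0] for r in records}, {r[1] for r in records}):
    rows = [r for r in records if r[0] == g and r[1] == lang]
    print((g, lang), round(selection_rate(rows), 2))
```

Checks of this kind are only one ingredient of an impact assessment; the framework itself addresses the wider socio-technical context of an AI system rather than any single metric.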
Assessment framework and recommendations
To increase the opportunities for equality through the use of AI, we developed the assessment framework. Algorithmic impact assessments have gained traction as a way of ensuring the transparent and ethical application of AI, but so far they have paid only limited attention to equality and non-discrimination. Our assessment framework, developed for the Prime Minister’s Office in Finland in 2022, combines the evaluation of the discriminatory risks of AI systems with the promotion of equality. It thereby enables governments and public officials to steer technological innovation and development while protecting the fundamental rights of citizens.
Governments cannot simply adopt new technologies and play catch-up. They stay at the forefront of technological advancement only if they develop and establish frameworks that serve society as a whole, not just industry interests. Timing matters: technology and governance must develop at the same pace. To put the assessment framework to good use, we therefore developed three policy recommendations, which you can read in full here:
1. Raising public awareness of algorithmic discrimination
2. Increasing cooperation between different stakeholders in the responsible development of AI systems
3. Promoting equality in the use of AI through proactive regulation and tools
The project was part of the Government’s analysis, assessment and research activities (VN TEAS, 148 800 €), and you can read more about it here.
Resources
- Assessment framework (English)
- Policy brief (English)
- Full final report (Finnish)
- Press release (English)
- Podcast interview (Finnish)
We hope to further the use of the framework for non-discriminatory AI systems in different contexts and with a wide array of stakeholders. Please do not hesitate to contact the authors of this publication:
Atte Ojanen
+358 50 917 7994
atte.ojanen@demoshelsinki.fi
Anna Björk
+358 44 508 5404
anna.bjork@demoshelsinki.fi
Johannes Mikkonen
+358 40 569 4948
johannes.mikkonen@demoshelsinki.fi
Read more
- The assessment framework and policy brief are part of a research project called Avoiding AI biases: A Finnish assessment framework for non-discriminatory AI systems, which you can read more about here.
- Back in September 2021, we found that policies regarding AI were still lacking in Finland.
- A more detailed account of how AI systems can be discriminatory can be found here.
- We help governments build joyful digital societies, for example by creating trust with blockchain or building an equitable data economy.