New research aims to define what “successful AI” means in society (FORSEE)

April 17, 2025

For those involved in shaping the next phase of European AI policy, FORSEE is a research project that offers both a warning and a roadmap: success must be defined before it can be delivered.

FORSEE is a new Horizon Europe research project that aims to close the gap between AI’s technical potential and its societal legitimacy. AI policy and development often focus narrowly on performance, efficiency, or economic growth. Yet the real-world impacts of AI are shaped by the contexts in which it is deployed, including public institutions, legal systems, and social norms. Without insight into what successful AI means beyond technological or economic efficiency, policymakers risk promoting tools that are misaligned with democratic priorities.

What kind of success are we designing for, and for whom?

This raises a fundamental question for policymakers, industry, and society at large: what kind of success are we designing for, and for whom? Today, “success” in AI is not a shared concept. Industry, government, researchers, and civil society often hold different and sometimes competing definitions. AI systems can be technically effective yet socially harmful, or vice versa. Policymaking must address this complexity early on to ensure AI serves the public interest, not just market incentives or isolated performance goals.

From technical benchmarks to practical policymaking tools

FORSEE aims to strengthen the capacity of the AI industry, policymakers, and the public to address the future risks and opportunities of AI. It approaches this by analysing how different stakeholders define success and where their views intersect or conflict. The result will be a broadened definition of AI success, one that encompasses conflict resolution, stakeholder empowerment, fundamental rights, and alignment with sustainable development. This makes FORSEE’s findings directly useful for designing new regulatory frameworks, funding instruments, and governance models across the EU.

Our hypothesis: social context matters

FORSEE applies sociological methods to explore the environments in which AI systems are conceived, built, and adopted. The project draws on two established theories, the Social Construction of Technology (SCOT) and the Sociology of Expectations, to show that technologies do not emerge in a vacuum: they emerge from social choices, values, and power relations. More inclusive development processes therefore do not just produce fairer outcomes; they also strengthen the social acceptability and effectiveness of AI tools. For policymakers, this insight offers a clear rationale: inclusive, transparent processes improve both outcomes and legitimacy.

Building institutional readiness for democratic AI

As Europe moves to consolidate its position as a global leader in responsible AI, FORSEE offers a novel approach to AI governance that can steer AI development towards more successful outcomes for all. The project supports institutions at every level, from the EU to national and local governments, in developing ways of guiding AI that reflect public values and sustain long-term legitimacy. It recognises that successful AI is not just a technical matter but a question of how institutions are equipped to handle competing priorities and long-term risks.

A collaborative and multidisciplinary effort

The FORSEE consortium includes eight partners from across Europe, combining expertise in law, sociology, policy, technology, and governance. The project is coordinated by University College Dublin (Ireland), with partners including Tilburg University (Netherlands), Trinity College Dublin (Ireland), TASC Europe Studies (Ireland), Wissenschaftszentrum Berlin für Sozialforschung (Germany), the European Digital SME Alliance (Belgium), Université Paul Sabatier Toulouse III (France), and Demos Helsinki (Finland).

Demos Helsinki leads the project’s impact creation, ensuring that research findings are translated into tools that can shape real policy. We will develop alternative scenarios and future imaginaries for democratic AI, making the project’s results visible and actionable. We also contribute to the work on structural capacity building in AI governance. Our role focuses on engaging the broader European AI ecosystem, building inclusive communication channels, and coordinating key stakeholder dialogues.

FORSEE will run until January 2028. During this time, it seeks to develop a novel approach to AI governance that illuminates the conditions shaping successful AI applications and ways to replicate them, an evaluative framework for assessing current and future AI applications, and a prototype for registering risks and negative impacts. For those involved in shaping the next phase of European AI policy, it offers both a warning and a roadmap: success must be defined before it can be delivered.

Feature image: iStock/Sashkinw