What is the Collingridge dilemma and why is it important for tech policy?

The Collingridge dilemma, first articulated by David Collingridge in The Social Control of Technology (1980), describes a dual challenge for the governance of emerging technologies: a challenge of information and a challenge of time.


In the early stages of technological development, the potential uses and consequences of a new technology are either not well known or known only to those closely involved in the development work. As time passes, the technology's effects on people and societies, and the choices shaping its development, become more apparent. By that stage, however, the technology may be harder to govern and regulate because approaches, applications and structures have become entrenched. Change becomes more difficult, more time-consuming and more expensive.


Figure 1. A representation of the Collingridge dilemma



The issue of timing described by the Collingridge dilemma has also been called a pacing problem. Blockchain technology offers an illustration. Cryptocurrencies, as well as other uses of blockchain (e.g., smart contracts and decentralised autonomous organisations, or DAOs), have developed and proliferated rapidly over the past decade. Our collective decisions about the technology (i.e., the governance of blockchain) have, by comparison, only just started to take shape. In part, this is because of the limited information we had about blockchain early on, which is exactly the timing problem the dilemma describes.


Why is it important that governance and technology develop hand-in-hand? 


First, a lack of clear governance structures and regulation can allow the negative societal consequences of technologies to go unchecked (think of the critiques of Ponzi-like mechanisms in crypto, or the economic effects of a massive crypto-bubble burst).


Second, a lack of clear guidelines, governance and regulation can also contribute to an unstable and volatile environment for investment and innovation. On the one hand, developers may feel hindered by legal concerns and by the reputational or ethical exposure that comes with working in a heavily regulated industry. On the other hand, elected and representative governance bodies are, in principle, the actors legitimised to ensure that technology is inclusive, equitable and transparent.


Of course, as the Collingridge dilemma illustrates, solving this pacing problem is difficult: by the time we gain new insight into the broader impacts of how technologies are designed and deployed, certain choices and approaches may already have ossified into norms, market positions or regulation.


The Internet: looking back at the history of the dilemma 

The Collingridge dilemma is perhaps best understood through an example we have all lived through — and one we can definitely remember: the development of the Internet. Thirty years ago, we could not have anticipated the many ways in which the Internet would permeate our lives. 

The Internet started out as a major defence research project at the US Defense Advanced Research Projects Agency (DARPA) and only later became a tool for decentralising and democratising information. This was an unforeseeable outcome that had a major, positive impact on our societies in the form of new democratic freedoms and spaces.

However, while many assumed these spillover effects would only ever be positive, this has not been the case everywhere. Opportunistic, and sometimes outright malicious, state and non-state actors have used the Internet and its related technologies to reduce democratic freedoms and spaces. Today, disinformation and misinformation have given rise to new problems that governments and the wider public could not have anticipated during the rise of the global Internet some 20 years ago.

Similarly, we may right now be witnessing the emergence of technologies that could prove as transformational as the Internet. Yet the Collingridge dilemma remains. There is uncertainty about how emerging technologies will affect our societies, and there is also uncertainty about the extent to which relevant stakeholders can and should influence their course.


What can we do about it?


To deal with the Collingridge dilemma, we must change the pace. This doesn’t mean forcing technologies to develop more slowly, or forcing governance to develop faster (a common view). The pace can change if the interactions between the two start early, happen frequently, and continually enable testing, assessing and learning. This is what Sabel and Zeitlin (2011) have called experimentalism in governance, or what we term experimental governance. Experimental governance could help us tackle this collective problem.


Choosing a more experimental approach to technology governance could in itself be transformative: 


(1) Rethink our role in technology development

First, experimental governance provides room for iterating on and evaluating both a technology and its governance. Often without realising it, the conversations we have about technology betray an underlying belief that technology develops faster than we can make decisions about it. As a result, regulation always comes later, once we have more information about the true effects (good or bad). Experimental governance encourages us to examine and accept the potential for interaction between technology and governance; it thus empowers us to make choices about what an emerging technology develops into. Development is not an autonomous, singular process but an iterative string of choices. Experimental governance can help shed light on how governance decisions and these development choices intertwine throughout the process.


(2) Enable collaboration

Second, experimental governance makes space for open and transparent multi-stakeholder collaboration, increasing the openness and transparency of decision-making early in the development stage. These principles can nurture new synergies between the relevant stakeholders. Combined with a clear emphasis on inclusion, and the capacity building it requires, these synergies and interactions can in turn cultivate greater trust between societies and technologies.


By embedding experimentalism into our approach to governance, we may be able to open up the processes of regulation and policymaking as well as the development of technology itself. This openness and transparency through experimentation, combined with a serious discussion of accountability and trust and a dedication to the genuine inclusion of diverse stakeholders, might be one avenue for tackling the Collingridge dilemma. Experimental legislation, policymaking and design all provide reference points that can inform our approach to governing the world’s current and ever-developing dilemmas.


The real issue: no trust


The Collingridge dilemma reflects the current impasse between tech and society: a lack of trust. As members of the same society, we should be able to trust that all stakeholders act for the collective good, while maintaining open communication and collaboration even when there is no agreement. Sounds like a utopia? We think it’s a matter of input and output.


We have yet to figure out a trusting, collaborative input to technology development. The result is the Collingridge dilemma in practice: developers work alone, policymakers work alone, and each misses out on the important information the other holds. Opening up this interaction could increase trust. At the same time, it could produce technology that is accountable, transparent and equitable, and that advances the collective interests of a vibrant, diverse and joyful society.



See more of our work on technology and experimental governance