In recent years, governments around the world have competed to lead in setting standards for regulating AI. Now, however, many governments are easing AI regulation, at times favouring innovation at the expense of citizens’ rights. Sophie Morosoli, researcher at the AI, Media and Democracy Lab, comments on this tension between regulation, innovation and citizens’ preferences.

Just a year ago, the European Commission introduced the AI Act as the first uniform legal framework for AI, based on a human-centric and value-driven approach. This initiative aimed to build an ecosystem that benefits everyone, while safeguarding fundamental rights. A year on, however, the EU has shifted this focus. The new AI Continent Action Plan prioritises competitiveness, innovation, and key players like gigafactories, AI startups, and research hubs, leaving the earlier emphasis on public values and fundamental rights behind.  

In a recent article, Sophie Morosoli and co-authors Natali Helberger, Nicolas Mattis, Laurens Naudts and Claes de Vreese explain that citizens do not always support their governments’ decisions to water down AI regulation, and how citizens would instead like their governments to exert pressure to strengthen it.

Your article highlights a tension between AI regulation and innovation. Why are governments considering easing regulation, and why do citizens’ preferences differ?

‘The reason why governments want to ease regulation is connected to the misleading argument that regulation hampers innovation. Governments argue that reducing regulatory barriers can lead to significant economic and technological benefits, such as attracting investments, creating jobs, and establishing leadership in the global AI “race”. However, citizens who are directly affected by this deregulation push feel very differently about AI regulation. While governments focus on the economic advantages of AI investments, citizens prioritize the protection of their rights and ethical considerations.’

‘A survey we just conducted across six countries (Brazil, Denmark, Japan, the Netherlands, South Africa, and the US) shows that citizens generally do not favour deregulation. Many individuals are concerned that loosening regulations could compromise safety, human rights, and public trust. They emphasize a strong preference for maintaining regulatory frameworks to protect societal values, ensuring transparency, accountability, and the safeguarding of personal freedoms. There is thus a clear mismatch between the decisions certain governments are making for their people and what those people actually want.’

What kind of risks do citizens worry about most when it comes to AI systems? 

‘Last year, we conducted a representative survey in the Netherlands, in which we asked citizens about their perceptions of the risks connected with generative AI. Our results showed that citizens' biggest worries were that generative AI can be used to make people think they are interacting with a human, and that these systems pose a risk to privacy because of the data they are trained on. We also found strong agreement among participants that it is dangerous that everyone, including non-experts, has access to generative AI tools.

We have also conducted qualitative research around AI systems and synthetic content. In this realm, citizens worry a lot about the quality of information. For instance, conversations with Dutch citizens reveal that they see mis- and disinformation as one of the biggest risks connected to generative AI systems. Furthermore, they fear that this type of content fuels polarization and conflict within societies, which can lead to democratic backsliding.’

The idea behind easing regulation is that regulation could slow down innovation. Do you think regulation and innovation are inevitably at odds?

‘It depends on what is meant by innovation. Is it about financial gains? Is it about technological advancements useful for societies? Is it about independence? I don't think regulation and innovation are at odds. In fact, they can complement each other. I believe that “good” regulation can create an environment that fosters meaningful innovation, by providing stability, setting standards, protecting intellectual property and human rights, ensuring fair competition, increasing consumer trust, and facilitating collaboration. And by meaningful innovation, I mean advancements that address real problems, enhance quality of life, promote sustainability and ethics, are inclusive and accessible, bring transformative change, and remain user-centric. Such innovation is driven by a commitment to making a positive impact on society and the world, ensuring that advancements benefit everyone and contribute to a better future.’

What do you think should happen to involve citizens meaningfully in decision-making on AI regulation?

‘In an ideal world, I would like to see that citizens are recognized as key stakeholders whose rights, values, and daily lives are directly affected by these technologies. This means moving beyond viewing them as passive consumers and instead treating them as active participants in shaping governance. Mechanisms such as citizen assemblies, public consultations, and reserved seats for citizen representatives in advisory bodies can ensure their voices influence decision-making. Transparency about how AI is regulated, coupled with opportunities for public feedback and clear channels to raise concerns, strengthens accountability and trust. Moreover, investing in AI literacy equips citizens to engage critically and constructively. By embedding citizens into multiple steps of the regulatory process, we could ensure that AI governance reflects not only industry interests but also the lived experiences of the broader public.’ 

Isa Verhoeven