This theme started with the moonshot project 'Towards an AI4Society Sandbox'. Sandboxing is a way to test the (un)desired effects of new software by running it in a safe simulation of the production or user environment. This method will be extended to develop future scenarios and to probe technological and regulatory solutions discursively for their legal, societal and ethical implications. The project investigates how such a test environment should be designed to ensure that AI technologies serve the widest benefit of society.

The researchers form an interdisciplinary team that will work on test cases in the fields of digital infrastructure, AI regulation and the impact of AI-driven applications. Together with the steering group, they are taking the first step in designing a collaborative, interfaculty AI4Society Sandbox platform. They are also setting up a network and community of stakeholders, consisting of researchers, students, citizens, civil servants and policymakers.

Debraj Roy (FNWI) investigates a series of mechanisms to understand the long-term impact of digital transformation on inequality, polarisation and exclusion in our society. He is developing a computational framework that can provide guidelines for designing collectively beneficial algorithms.

Tanja Ahlin (FMG) investigates the design and use of social robots for older adults. Using ethnographic methods, the project explores how social robots gather information through interacting with their users and what happens with the acquired data. At the core of this case study is the question of AI regulation: should AI systems, especially those that target people with varying levels of cognitive (dis)ability such as dementia, be regulated, and if so, how?

Rocco Bellanova (FGw) focuses on 'The European Union's regulation of AI in the field of public security'. High-tech solutions in the domains of counterterrorism, surveillance and profiling have a huge impact on public security and on our societies. Defining sound accountability principles for AI in the field of law enforcement is crucial, and the EU plays a key role in advancing a regulatory framework. This project therefore focuses on the new mandate of the European Agency for Police Cooperation (Europol), as well as its initiatives with regard to technological innovation and its governance.

Joanna Strycharz (FMG) focuses on personalization algorithms used in online communication platforms. By studying the black box of algorithmic communication, she helps set up the AI4Society Sandbox, using sandboxing as a method to assess the impact of personalization algorithms on individuals and society.