Marjolein Lanzing is Assistant Professor of Philosophy of Technology at the University of Amsterdam. Marjolein studies the ethical and political concerns raised by (AI) technologies, such as algorithmic discrimination, online manipulation, surveillance and privacy.
We should worry not only about what these technologies mean for the way we understand ourselves and our social relationships, but also about what they mean for our ability to lead autonomous and just lives in a democratic society.
Her current research project 'Just Not Fair' (funded by an NWO Veni grant, 2025-2029) develops a normative framework for AI and algorithmic surveillance technologies (ASTs), such as fraud-detection and policing algorithms as well as facial recognition, from the perspective of structural injustice.
Interdisciplinary collaboration and empirical research are integral to her work, informing its normative claims and frameworks.
Previously, she worked on the Googlization of Health as a postdoc in the ERC project 'Digital Good' (PI Tamar Sharon) at the Interdisciplinary Hub for Security, Privacy and Data Governance (Radboud University). She completed her PhD research, 'The Transparent Self: A Normative Investigation of Changing Selves and Relationships in the Age of the Quantified Self', at the 4TU Centre for Ethics and Technology (Industrial Engineering and Innovation Sciences, Eindhoven University of Technology). She held a research fellowship at the Faculty of Social Sciences and the Human Rights Center of the University of Ottawa.
Marjolein is a board member of Bits of Freedom, an NGO that protects online freedom and (digital) civil rights, a board member of the UvA-VU Taskforce AI, a member of the Amsterdam Young Academy and a proud defender on Amsterdam's fiercest soccer team.
Just Not Fair: Towards a normative framework for AI and algorithmic surveillance technologies (ASTs) from the perspective of structural injustice
Facial recognition and risk-prediction algorithms (ASTs) are increasingly used across social domains. Unfortunately, they can lead to 'algorithmic discrimination', and 'algorithmic fairness' is an inadequate concept for addressing it: unlike theories of structural injustice, it neglects the social structures, such as institutions, in which algorithms are embedded. This blinds us to important roots of discrimination, such as unequal power relations. Building on three case studies, this project therefore develops a new normative framework for assessing ASTs from the perspective of structural injustice.
NWO Talent Programme – Veni.
https://www.nwo.nl/onderzoeksprogrammas/nwo-talentprogramma/projecten-veni/veni-2024