This interdisciplinary project explores how human-in-the-loop (HITL) interventions can foster the responsible design of artificial intelligence (AI) systems. EU regulation requires private and public institutions to implement HITL frameworks in AI decision-making.

Still, critics argue that HITL interventions are often set up to fail, serving as a fig leaf to legitimize predefined decision outcomes. To address this issue, established researchers from the fields of humane AI and behavioral ethics will team up to conduct controlled experiments using the machine behavior approach. The core objective is to develop an AI sandbox model that provides empirical insights into designing and implementing HITL interventions for effective and responsible AI decision-making.

Project team:

  • Prof. Dr. Shaul Shalvi (FEB)
  • Dr. Christopher Starke (FMG)