Still, critics argue that HITL arrangements are often set up to fail and serve as a fig leaf to legitimize predetermined decision outcomes. To address this issue, established researchers from the fields of humane AI and behavioral ethics will team up to conduct controlled experiments using the machine behavior approach. The core objective is to develop an AI sandbox model that provides empirical insights into how to design and implement HITL for effective and responsible AI decision-making.
Project team: