This research group seeks to examine the implications of LLMs for the military legal profession and practice, with an interdisciplinary approach to account for the numerous factors that shape their implementation and use.

Generative AI is entering military institutions through both dedicated and open-source chatbots. For instance, the U.S. military requires its officers to use GenAI.mil, while the Dutch armed forces have launched their own in-house large language model (LLM), “DefGPT.” One anticipated application of such systems is assisting in the provision of legal advice. Commanders or operators could use chatbots to consult the State's legal position, obtain legal assessments of a planned military operation, or generate a post-action report that includes legal conclusions.

At the same time, growing awareness of the limitations and vulnerabilities of generative AI raises serious concerns about relying on such tools for legal assessments of high‑stakes military decisions. These concerns are amplified by the difficulty of designing human–machine interaction in a way that ensures human decision‑makers retain meaningful control over critical choices throughout the AI lifecycle. Moreover, any use of AI legal assistants in conflict zones must take into account the realities of warfare – such as incomplete information and extreme time pressure – which shape how users understand, trust, and engage with these systems.

Examining the use of, and the risks posed by, AI legal assistants within military institutions therefore calls for an interdisciplinary approach, one that accounts for the many factors shaping their implementation and use in military practice. This Seed Grant project establishes a new consortium of partners, carefully designed to bring together researchers with distinct yet complementary expertise.

Project team:

  • Marten Zwanenburg (Faculty of Law)
  • Klaudia Klonowska (Sciences Po Paris)
  • Jonathan Kwik (Faculty of Law / Asser Institute)
  • Taylor Kate Woodcock (Faculty of Law / Asser Institute)
  • Theo Araujo (Faculty of Social and Behavioural Sciences)
  • Erella Grassiani (Faculty of Social and Behavioural Sciences)
  • Giedo Jansen (Faculty of Law / Sinzheimer Institute)
  • Elke Olthuis (Faculty of Law)
  • Célestine de Zeeuw (TNO)
  • Martijn Wessels (TNO)