27 January 2026
When you type a query into Google, it often shows an AI-generated answer at the top of the page. While this is convenient, the generated answer isn't always correct. Moreover, we don't know the reasoning behind the output of these machine learning models, which is why they are often called “black boxes”. Ana Lucic, Assistant Professor in AI, is addressing this issue by developing interpretability methods.
These interpretability methods effectively open the black box and try to understand the inner workings of a machine learning model. The biggest challenge, according to Lucic, lies in the evaluation of these methods. Lucic explains: ‘If a model gives a prediction, and you develop a method to explain this prediction, how do you know that your explanation is good? That is unclear. I think that the evaluation step is a large open problem in our research community.’
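The evaluation problem Lucic describes can be made concrete with a small sketch (a hypothetical toy, not one of her methods): a perturbation-based explanation of a black-box model, followed by a simple “deletion” check that asks whether the explanation's top-ranked feature really matters most.

```python
import numpy as np

# A toy "black box": a model whose internals we pretend not to see.
# (Hypothetical illustration, not one of Lucic's interpretability methods.)
weights = np.array([4.0, 0.0, 2.0, 0.0])

def black_box(x):
    return x @ weights

# Explanation method: estimate each feature's importance by zeroing it out
# and measuring how much the prediction changes (occlusion).
def occlusion_importance(f, x):
    base = f(x)
    scores = []
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = 0.0
        scores.append(abs(base - f(x_masked)))
    return np.array(scores)

x = np.array([1.0, 1.0, 1.0, 1.0])
scores = occlusion_importance(black_box, x)

# A simple evaluation ("deletion" check): removing the feature the
# explanation ranks highest should change the prediction more than
# removing the one it ranks lowest.
top, bottom = int(np.argmax(scores)), int(np.argmin(scores))
print(top, bottom)  # the top-ranked feature changes the output most
```

Even this toy hints at why evaluation is hard: the check is only convincing here because we secretly know the model's true weights. For a genuine black box there is no such ground truth to compare against.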
In addition to her research on interpretability methods, Lucic develops machine learning techniques for weather prediction. While working at Microsoft Research, Lucic was one of the main developers of Aurora, a machine learning model for global weather prediction.
Traditional weather prediction relies on running numerical weather prediction models. This process is time-consuming and costly, because the entire model needs to be rerun at each time step. Lucic: ‘When taking a machine learning approach, we train our model on data generated by these traditional models. This allows us to create a machine learning model that mimics the traditional models. Machine learning models are faster to run, so this way, we can make predictions orders of magnitude faster.’
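The emulation idea Lucic describes can be sketched in a few lines (an illustrative toy, not Aurora): generate training data by running a slow “traditional” model, then fit a cheap surrogate that mimics its input-output behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)

def slow_physical_model(state):
    # Stand-in for one expensive numerical-model step
    # (hypothetical toy dynamics, not a real weather model).
    return np.sin(state) + 0.1 * state

# 1. Run the slow model to generate training data.
states = rng.uniform(-3, 3, size=(1000, 1))
targets = slow_physical_model(states)

# 2. Fit a cheap surrogate: polynomial features + least squares.
def features(x):
    return np.hstack([x**k for k in range(6)])

coeffs, *_ = np.linalg.lstsq(features(states), targets, rcond=None)

def surrogate(state):
    return features(state) @ coeffs

# 3. The surrogate now approximates the slow model at a fraction of the cost.
point = np.array([[1.5]])
print(surrogate(point)[0, 0], slow_physical_model(point)[0, 0])
```

Once trained, only the fast surrogate needs to run at each prediction step, which is where the speed-up Lucic mentions comes from; the expensive model is needed only to produce the training data.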
Lucic emphasizes that this approach depends on the output from the traditional models, which are still crucial for weather prediction. Looking ahead, she wants to develop interpretability methods for these weather models. She also aims to stabilise Aurora to make more accurate long-term weather predictions.
Throughout her career, Lucic has switched between working in academia and industry. She joined Microsoft Research right after finishing her PhD. In November 2024, she was appointed as a UvA MacGillavry Fellow with a joint position at the Institute for Logic, Language and Computation and the Informatics Institute.
Lucic notes: ‘It's quite an adjustment to leave academia, go to industry, and then come back to academia. There's a lot of balls to juggle, but so far it's been pretty fun’. She highlights a big advantage of working at a university, namely being around people who work on a wide range of topics. ‘Especially being a part of two institutes, you get to hear a lot of interesting ideas. Whereas in industry you are often more focused on one particular topic.’