Although AI models are used worldwide, we don't know exactly how they get to their answers. Ana Lucic, Assistant Professor in AI, creates methods to understand the internals of these models. She also develops machine learning techniques for weather prediction. In 2024, Lucic was appointed as a UvA MacGillavry Fellow, holding a joint position at the Institute for Logic, Language, and Computation, and the Informatics Institute.
Portrait: Ana Lucic

When you type a query into Google, it often shows an AI-generated answer at the top of the page. While this is convenient, the generated answer isn't always correct. Moreover, we don't know the reasoning behind the output of these machine learning models, which is why they are often called “black boxes”. Ana Lucic, Assistant Professor in AI, is addressing this issue by developing interpretability methods.

These interpretability methods effectively open the black box and try to understand the inner workings of a machine learning model. According to Lucic, the biggest challenge lies in evaluating these methods. Lucic explains: ‘If a model gives a prediction, and you develop a method to explain this prediction, how do you know that your explanation is good? That is unclear. I think that the evaluation step is a large open problem in our research community.’
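To make the idea concrete, here is a minimal, generic sketch in Python of one common interpretability technique, permutation feature importance. It is purely illustrative and not Lucic's own method; the synthetic data and the choice of model are assumptions made for the example.

# Illustrative only: attribute a model's predictions to its input features
# by shuffling one feature at a time and measuring how much accuracy drops.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                 # 4 synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only features 0 and 1 matter

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The "explanation": how much does test accuracy drop when each feature is permuted?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")

In this toy setting we know in advance that only the first two features matter, so we can check whether the explanation is correct. For real black-box models there is usually no such ground truth, which is exactly the evaluation problem Lucic describes.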

Weather prediction

In addition to her research on interpretability methods, Lucic develops machine learning techniques for weather prediction. While working at Microsoft Research, Lucic was one of the main developers of Aurora, a machine learning model for global weather prediction.

Traditional weather prediction relies on running numerical weather prediction models. This process is time-consuming and costly, because the entire model needs to be rerun at each time step. Lucic: ‘When taking a machine learning approach, we train our model on data generated by these traditional models. This allows us to create a machine learning model that mimics the traditional models. Machine learning models are faster to run, so this way, we can make predictions orders of magnitude faster.’
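As an illustration of this surrogate idea, the sketch below trains a small neural network to mimic one step of a toy ‘numerical model’ (a Lorenz-63 integrator standing in for an expensive physics simulation). It is not the Aurora model; every name and setting is an assumption chosen only to keep the example self-contained.

# Illustrative only: learn to emulate one step of a toy "traditional" model.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz-63 system (the 'expensive' model)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Generate training data with the "traditional" model: pairs (state_t, state_t+1).
states = [np.array([1.0, 1.0, 1.0])]
for _ in range(5000):
    states.append(lorenz_step(states[-1]))
states = np.array(states)
X, y = states[:-1], states[1:]

# Train the emulator to mimic a single step of the numerical model.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
emulator.fit(X, y)

# Roll the emulator forward autoregressively, the way a forecast model is used.
state = X[0]
for _ in range(10):
    state = emulator.predict(state.reshape(1, -1))[0]
print("emulated state after 10 steps:", state)

The speed-up Lucic describes comes from exactly this substitution: once trained, a forward pass of the network is far cheaper than rerunning the full numerical model at every time step.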

Lucic emphasizes that this approach depends on the output from the traditional models, which are still crucial for weather prediction. Looking ahead, she wants to develop interpretability methods for these weather models. She also aims to stabilize Aurora to make more accurate long-term weather predictions.

Back to academia

Throughout her career, Lucic has switched between working in academia and industry. She joined Microsoft Research right after finishing her PhD. In November 2024, she was appointed as a UvA MacGillavry Fellow with a joint position at the Institute for Logic, Language, and Computation, and the Informatics Institute.

Lucic notes: ‘It's quite an adjustment to leave academia, go to industry, and then come back to academia. There's a lot of balls to juggle, but so far it's been pretty fun.’ She highlights a big advantage of working at a university, namely being around people who work on a wide range of topics. ‘Especially being a part of two institutes, you get to hear a lot of interesting ideas. Whereas in industry you are often more focused on one particular topic.’