15 January 2026
When you present an AI model with a simple math problem, like “Susan has 6 apples, and gives Tom 2 apples. How many apples does Susan have left?”, you expect that changing the names in the story will not affect its answer. However, such changes can actually degrade the performance of GPT models. While humans excel at flexibly adapting to new situations, AI models like ChatGPT struggle with this.
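The kind of robustness test described above can be sketched in a few lines: the arithmetic answer to the word problem is invariant under renaming the characters, so a model's answer should be too. The sketch below only generates the renamed variants and the ground-truth answer; the template and names are illustrative, and in a real evaluation each variant would be sent to a model and the parsed answers compared.

```python
# Illustrative sketch: the correct answer to the word problem does not
# depend on the characters' names, so renaming them lets us probe
# whether a model's answer changes when it shouldn't.

TEMPLATE = ("{a} has {n} apples, and gives {b} {k} apples. "
            "How many apples does {a} have left?")

def make_variants(name_pairs, n=6, k=2):
    """Generate the same problem with different character names."""
    return [TEMPLATE.format(a=a, b=b, n=n, k=k) for a, b in name_pairs]

variants = make_variants([("Susan", "Tom"), ("Priya", "Chen"), ("Olu", "Mara")])

# The ground-truth answer is identical for every variant: 6 - 2 = 4.
expected = 6 - 2

# In practice, each variant would be submitted to the model under test,
# and any disagreement across variants would signal the brittleness
# discussed in the article.
for v in variants:
    print(v, "->", expected)
```

Any model that answers the Susan/Tom variant correctly but slips on the Priya/Chen one is pattern-matching on surface form rather than reasoning over the underlying arithmetic.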
Why does this happen? Martha Lewis, Assistant Professor at the Institute for Logic, Language and Computation (ILLC) and MacGillavry Fellow, explains: ‘The architecture of current AI models is not inspired by human cognition. These models are trained on a lot of data using machine learning, which is why they perform so well. But because their performance rests on patterns in that training data, they perform worse on new data that they have not seen before.’
To make AI models more human-like and improve their performance, Lewis is working on integrating human logic and structure into AI, a field known as symbolic AI. ‘We want to combine symbolic AI and machine learning AI, creating models with explainable internals that are theoretically grounded. This allows us to understand exactly how the model works internally and makes it more human-like.’
For current AI models, it is unknown exactly how they work internally. Lewis: ‘Current AI models are very successful and useful. However, theoretically, we don’t know how they arrive at their answers. It's very important to know how the models work in order to understand why they give a certain response.’
Lewis and her colleagues at ILLC are developing methods to get to the bottom of these so-called “black boxes” in AI models. The challenges in this research stem mostly from the fact that these methods are still in their early stages, and that the AI models themselves are very large, with complex and widely distributed structures. In this complexity, they are actually somewhat comparable to human brains.
For her research, Lewis collaborates with scientists from various disciplines. ‘I worked with researchers at ILLC and the psychology department. Additionally, I connected with researchers in Edinburgh, resulting in an ACL paper. It’s also great that the ILLC is so close to the Informatics Institute here in LAB42.’
In 2024, Lewis was appointed as a MacGillavry Fellow, which enabled her to kick-start her own research group. In the future, she hopes to develop new models that reason in a more human-like way. ‘I enjoy working on both developing the inner mechanisms of an AI model and understanding what goes on inside the black box of current AI models.’