Although text-to-image generation is rapidly advancing, these AI models are mostly English-centric. This increases digital inequality for non-English speakers. Researchers at the UvA Faculty of Science have now created NeoBabel, a pioneering AI image generator that understands six different languages. Because the researchers have made all elements of their work open source, anyone can build on the model and help advance inclusive AI research.

When you generate an image with AI, the results are often better when your prompt is in English. This is because many AI models are built around English: if you use another language, your prompt is first translated into English before the image is created. However, most people worldwide are not native English speakers, which puts them at a disadvantage.

Meanwhile, text-to-text generators can speak over 200 languages fluently. That’s why researchers from the UvA Informatics Institute teamed up with Cohere Labs, a company specialised in text generation. The research team integrated an image generation system into these text generators, creating an advanced multilingual image generator. The image generator, named NeoBabel, currently supports six languages: English, French, Dutch, Chinese, Hindi, and Persian.

Completely open source

Most image generation models are built by a few large U.S. companies, which rarely reveal all the details of their models. Cees Snoek, full professor in computer science and part of the NeoBabel research team: ‘Usually, most of the work is closed source, so we cannot see exactly how the model works. We don't know if there are biases in the data, how the system was created, and how it can be improved. This goes against our academic principles.’

In contrast, alongside a paper publication about NeoBabel, the research team has made all their code and data public. Mohammad Derakhshani, PhD student and first author of the paper: ‘Personally, I wanted to build a tool for scientific exploration, and for that you need the full research pipeline. We made the entire pipeline public, so anyone interested in this field has all the information they need.’

Pictured: Cees Snoek and Mohammad Derakhshani.

A table and a bear

NeoBabel performs as well as existing image generation models in English, but easily outperforms them in the other five languages. Competing models first translate prompts into English, whereas NeoBabel generates images directly from multiple languages. Snoek explains: ‘Translations lose the nuances of language and culture, because many words lack good English equivalents.’ An example of such a mistranslation can be seen below, where the prompt requested an image of a table and a bear.

The prompt requested, in Dutch, an image of a table and a bear. The Dutch word for bear is ‘beer’, which confuses most image generators.
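
To make the difference between the two approaches concrete, the sketch below contrasts a translate-then-generate pipeline with direct multilingual generation. It is a minimal illustration only: the functions and classes are hypothetical stand-ins, not the actual NeoBabel code or API. The point is simply that the direct route never passes the prompt through a translation step where nuance can be lost.

```python
# Illustrative sketch only: these functions are hypothetical stand-ins,
# not the NeoBabel implementation or API.
from dataclasses import dataclass


@dataclass
class GeneratedImage:
    prompt_used: str   # the text the generator actually conditioned on
    pipeline: str      # which route produced the image


def translate_to_english(prompt: str, source_lang: str) -> str:
    # Stand-in for a machine-translation step. In English-centric systems,
    # ambiguous words (e.g. Dutch "beer" = bear) can be mistranslated here.
    return f"[English translation of ({source_lang}) '{prompt}']"


def translate_then_generate(prompt: str, source_lang: str) -> GeneratedImage:
    """English-centric route: translate first, then generate from English."""
    english_prompt = translate_to_english(prompt, source_lang)
    return GeneratedImage(prompt_used=english_prompt, pipeline="translate-then-generate")


def generate_directly(prompt: str) -> GeneratedImage:
    """NeoBabel-style route: condition the generator on the original prompt."""
    return GeneratedImage(prompt_used=prompt, pipeline="direct multilingual")


dutch_prompt = "een tafel en een beer"  # Dutch for "a table and a bear"
print(translate_then_generate(dutch_prompt, "nl"))
print(generate_directly(dutch_prompt))
```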

The researchers also improved the labeling of the data used to train the AI model. They used multilingual language models to translate image labels into multiple languages and made those labels more descriptive. Snoek: ‘This allows us to train our model in all these languages simultaneously. For each language, it learns the connection between the words and the pixels.’
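
This data-preparation idea can be illustrated with a short sketch: one image-caption pair is made more descriptive and then translated into each supported language, so a single image yields several training pairs. The helper functions below are placeholders for the language models the team describes, not the released NeoBabel tooling.

```python
# Hypothetical sketch of multilingual caption expansion; the helpers are
# placeholders, not the released NeoBabel data pipeline.
LANGUAGES = ["en", "fr", "nl", "zh", "hi", "fa"]  # the six supported languages


def make_more_descriptive(caption: str) -> str:
    # Stand-in for prompting a language model to expand a short label.
    return f"{caption}, shown in a detailed, realistic scene"


def translate(caption: str, target_lang: str) -> str:
    # Stand-in for a multilingual language model translating the caption.
    return f"[{target_lang}] {caption}"


def expand_pair(image_path: str, caption: str) -> list[dict]:
    """Turn one (image, English caption) pair into one training pair per language."""
    detailed = make_more_descriptive(caption)
    return [
        {
            "image": image_path,
            "lang": lang,
            "caption": detailed if lang == "en" else translate(detailed, lang),
        }
        for lang in LANGUAGES
    ]


for pair in expand_pair("images/000001.jpg", "a table and a bear"):
    print(pair)
```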

Thanks to the improved data, the AI model is also smaller than competing models: in technical terms, it has fewer parameters. Additionally, the researchers expanded the publicly available dataset of image-label pairs from 40 million to 124 million. Derakhshani: ‘This amount of data is usually not publicly accessible. We scaled up the dataset massively, even though we had limited computational power.’

Towards video

NeoBabel opens up a wide range of applications, including a multilingual creative canvas. On this digital canvas, multiple users can “paint” on the same image, each using their own language. Derakhshani explains: ‘If I only speak Persian and you only speak Dutch, we can co-create an image without using English. You might generate the first version in Dutch, and I can then mark a region and describe the changes in Persian. The model adapts the image accordingly.’
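
The sketch below captures the idea of such a canvas in a few lines: every edit carries its own prompt, the language it was written in, and the image region to repaint. The Canvas class and its apply_edit method are purely illustrative and not part of the NeoBabel interface.

```python
# Toy sketch of a multilingual collaborative canvas; the classes are
# illustrative placeholders, not the NeoBabel interface.
from dataclasses import dataclass, field


@dataclass
class Edit:
    prompt: str                         # written in the contributor's own language
    lang: str                           # e.g. "nl" (Dutch) or "fa" (Persian)
    region: tuple[int, int, int, int]   # (x, y, width, height) to repaint


@dataclass
class Canvas:
    width: int
    height: int
    history: list[Edit] = field(default_factory=list)

    def apply_edit(self, edit: Edit) -> None:
        # In a real system this would ask the multilingual generator to
        # repaint only the selected region, conditioned on the prompt.
        self.history.append(edit)


canvas = Canvas(width=1024, height=1024)
# First contributor generates the scene in Dutch: "a sunny garden with tulips".
canvas.apply_edit(Edit("een zonnige tuin met tulpen", "nl", (0, 0, 1024, 1024)))
# Second contributor marks a region and describes the change in Persian: "a cat on the bench".
canvas.apply_edit(Edit("یک گربه روی نیمکت", "fa", (600, 500, 300, 200)))
print([(e.lang, e.prompt) for e in canvas.history])
```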

According to Snoek, the next step for NeoBabel is creating culturally specific images. However, this requires culture-specific data as well as greater computational power. ‘We could accomplish much more with a more substantial computational infrastructure,’ Snoek says. ‘These AI models don't have to come from large industry labs. The creativity is here, but we lack the resources to demonstrate it.’

The researchers are therefore seeking collaboration partners. In the long term, they would like to expand NeoBabel to video creation. Snoek: ‘My dream would be for it to be able to generate videos as well. There is a large television archive in Hilversum, “Beeld en Geluid”. It would be really great to collaborate with them to generate Dutch cultural videos.’

Links

NeoBabel webpage on GitHub

Paper: NeoBabel: A Multilingual Open Tower for Visual Generation (arXiv), by Mohammad Mahdi Derakhshani, Dheeraj Varghese, Marzieh Fadaee and Cees G. M. Snoek

Prof. dr. C.G.M. (Cees) Snoek

Faculty of Science

Informatics Institute