Featuring Marloes Geboers and Gabriel Pereira
This contribution presents an ongoing experiment with generating short AI videos entirely on a local machine. Starting from images sourced from training datasets, the workflow chains together small, locally-run models (image description, story generation, text-to-video, voice narration, and music) to produce 40-second video outputs. The system was developed through "vibe coding," an iterative process of building with AI that shaped both the technical system and the creative inquiry.
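The chained workflow described above can be sketched as a simple function pipeline. This is a minimal, hypothetical illustration under assumed stage names; each placeholder function stands in for a small locally run model (a captioner, a story generator, a text-to-video model, TTS, and music generation), and none of it is the authors' actual code:

```python
# Hypothetical sketch of the chained local pipeline: each stage is a stub
# standing in for a small locally-run model. Stage names and data shapes
# are assumptions for illustration only.

def describe_image(image_path):
    # Stage 1: image description (placeholder for a local captioning model)
    return f"description of {image_path}"

def generate_story(description):
    # Stage 2: story generation from the caption
    return f"a short story based on: {description}"

def synthesize_video(story, seconds=40):
    # Stage 3: text-to-video (placeholder), producing a 40-second output
    return {"story": story, "duration_s": seconds}

def add_narration_and_music(video):
    # Stages 4-5: layer voice narration and music onto the clip
    video["narration"] = True
    video["music"] = True
    return video

def run_pipeline(image_path):
    # Chain the stages: image -> description -> story -> video -> audio layers
    return add_narration_and_music(
        synthesize_video(generate_story(describe_image(image_path)))
    )

clip = run_pipeline("sample.jpg")
```

The point of the sketch is the chaining itself: each stage consumes the previous stage's output, so the final clip inherits, and distorts, traces of the source image all the way down the pipeline.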
The resulting videos are sloppy, uncanny, and narratively strange — squarely within the genre of "AI slop." Rather than dismissing slop from the outside, this project approaches it from within, asking what becomes visible when you actually tinker with the infrastructures behind automated content factories. Two lines of inquiry on the politics of algorithmic culture emerge. First, how GenAI remains haunted by its source material: people's photos scraped and remediated into training data, given a strange afterlife through automated storytelling. This reflects a new moment for digital visual culture: where earlier critical dataset studies could grapple with bounded corpora like ImageNet, current GenAI datasets operate at scales that resist human comprehension. Second, how these workflows connect to a longer history of automated artistic processes, from Warhol's Factory to Sollfrank's net.art generator, where labor is delegated to systems as a critical reflection on hegemonic mass media infrastructures.
Crucially, the malleability of these processes reveals openings: if datasets are built from the waste of our digital lives, could slop be a radical form of reclaiming it?
The image on the right emerges from the synthetic translation of war images circulated and amplified on social media (original images, top row). The atmosphere of the canvas readily evokes Gotham City (or whatever movie the viewer ‘feels’). The dark-hooded figures are repeated versions of a figure in one of the underlying synthetic images. The synthesized image absorbs the brutal immediacy of war and reconfigures it into a sci-fi aesthetic of looming danger. What re-emerges is a “likely likeness of war”.
There is extensive scholarly attention to how past images haunt synthetic imaginaries; what is less often made explicit is the role of algorithmic amplification and its platformed synergies with earlier image classification. Machine vision translates sensory experience into discrete units (Stiegler) that can be counted and recombined. “What gets counted, counts” (a much-circulated quote probably not by Einstein) holds equally for how war is inscribed in computational registers of seeing. Categories and weightings are co-constructed by the preferences of business models and networked publics. The canvas's imagination of war as a threat to be handled by faceless ‘knights’, personifying both hero and villain, thus not only absorbs and recycles visual pasts but also reflects and amplifies accumulated preferences shaped by platform environments that require war to be mediated in digestible ways (cinematic distancing) and non-disruptive ways (sustaining attention).
My work aims to foreground earlier grammatisation and collective amplification across social platforms in discussions of haunting in generative AI contexts. Given the malleability of these processes, as pointed out by Gabriel, I ask: (how) can we observe “past data grammars”, and can we push back?
This Peptalk is moderated by Aybüke Özgün