Dissertation Prep – All the words

Given the rapid advances in machine learning technologies, what new creative affordances do these mediums offer the artist? Are these technologies actually offering anything new at all? Or are they possibly a symptom of what Franco Berardi termed “cyber-capitalism”?

Building on my earlier research into machine learning (“Dubrovnik Ghost Series: Neural Networks as a diffractive medium”), in this dissertation my goal is to explore further the narrative and speculative possibilities afforded by machine learning systems. In that earlier work I posited that neural networks are diffractive in nature – probabilistic archives built from datasets of past space-time-matterings (Barad), which, when intra-acted with in the present, provide a lens through which speculative / lost futures (Fisher) may be observed.

When this technology is combined with small, curated datasets, the inherent weirdness and speculative materiality within the “black box” reveal themselves through artefacts and new reconfigurations of known elements. Machine learning affords a unique way for an artist to traverse the places between, with the technology itself being cast as a spectre (Derrida, Fisher).

Since I developed the earlier project in 2019, machine learning systems have continued to advance at a rapid rate. The current state-of-the-art systems are multimodal machine learning (MMML) systems, meaning they combine separate mode-specific models (text, imagery, other sensory phenomena) into networks that can translate between modalities.
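
To make “translating between modalities” concrete, the sketch below shows a single text-to-image call using the open-source Stable Diffusion weights through Hugging Face’s diffusers library. It is purely illustrative rather than part of the dissertation work itself; the checkpoint name, prompt and GPU assumption are placeholders.

# A minimal, illustrative sketch of text-to-image translation using the
# open-source Stable Diffusion weights via the Hugging Face diffusers library.
# The checkpoint name and prompt are placeholders, and a CUDA GPU is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder: any compatible checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a ghostly figure on the city walls at dusk, faded photograph"
image = pipe(prompt).images[0]  # text encoder and image diffusion model act as separate, linked modes
image.save("speculative_artefact.png")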

The most high-profile of these text-to-image generation models are becoming well known: DALL·E 2, Stable Diffusion, Midjourney. Recently the company OpenAI (creators of DALL·E 2) also released Whisper, a speech-to-text transcription model – allowing independent developers to incorporate this seamless, powerful functionality into their own work.
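
As an indication of how accessible that functionality is, here is a short, hedged sketch using the open-source openai-whisper Python package; the model size and audio filename are placeholders.

# A sketch of the speech-to-text step Whisper enables, via the open-source
# openai-whisper Python package. Model size and filename are placeholders.
import whisper

model = whisper.load_model("base")             # downloads the "base" checkpoint on first use
result = model.transcribe("spoken_notes.mp3")  # returns a dict with the transcribed text and segments
print(result["text"])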

My primary goal with this work is to push forward the research into machine learning networks as a creative medium. Some of the aforementioned image models are capable of producing results which are incredibly detailed and yet, from a contextual perspective, completely meaningless. During his recent Creative Machines lecture, Mick Grierson said he foresees these technologies “accelerating mediocrity”. Given that machine learning is essentially pattern recognition and pattern replication, it seems obvious to me that the medium tends towards recursion as its default mode. Perhaps it is the perfect medium for an age in which the only (profitable) way to move past remaking Spider-Man five times in a decade is to literally put three Spider-Men in a single film. Or, perhaps, there’s another way.

My practice is focused on the technological tectonics that shape our everyday lives, often invisibly. I’m curious about how these technologies function, and what it means for them to be owned and created by multinational companies and to cost a million dollars to train, on more hardware than any independent researcher might have access to. As an artist working with technology, I feel a responsibility to attempt to understand it both as a system and as part of the wider techno-socio-political milieu it lies within. A critical part of my practice is using narratives, worlding and fictioning to explore the topic. Dubrovnik Ghost Series was an exploration of the possibilities of machine learning with small datasets, but it was primarily about reconnecting with a memory of a person and creating new shared experiences with them in a manner not possible before this technology existed.

Thus my research is heavily practice-based – I iteratively generate experiments and slowly form a picture of how the technology itself operates, what affordances it has, what makes it glitch and break, and in parallel I consider the philosophical underpinnings of this process and how the two strands may be woven together. I discuss my progress with peers – Eddie Wong and some of my fellow classmates from the 2019–20 cohort – as well as Goldsmiths staff, particularly Rachel Falconer.

As the dissertation develops my focus will sharpen – this topic is incredibly broad and the technologies involved move so fast that it will be necessary to home in on specifics quite quickly.

Timeline

November – Mid December

 – Key research starts, reading primary sources

 – Initial technology tests, wide research across the field to see current state of the art

 – Sketch outline for possible discussion before class breaks up in December.

December – February

 – Write first draft.

 – Continue technical research

February – Major Draft Due

 – Receive feedback and rework draft.

 – Further develop technical experiments / artefacts / process work

May – Final Delivery