“Dubrovnik Ghost Series” / Neural Networks as a Diffractive Medium (Part 1)
“How can you shift… from a dimension of decomposition to a dimension of recomposition?”
– Franco Berardi, E-Flux lecture
“AI / Human relationships are surely now endemic to our naturalsocial ecologies, and so to the work and play that must be engaged in both old and new ways.”
– Donna Haraway, Artificial Imperfection MoMA lecture
–
GAN-generated computational imagery has become ubiquitous in recent years, rising to prominence in the mainstream media (“This Person Does Not Exist”) and, more recently, in the world of high art (Obvious @ Christie’s). A visually impressive implementation of neural network technology, GANs have developed rapidly, producing results that in some cases look indistinguishable from real images of people, places and things.
Perhaps because of this rapid engineering progress, there is currently little critical, reflective discourse around GANs – and neural networks more broadly – as a creative artistic medium. In this paper I aim to explore the affordances and entanglements neural networks offer from an art-practice perspective. The research traverses a variety of topics, including posthumanism (Hayles) and diffraction (Haraway, Barad), before moving into notions of “haunted data” (Barad, Blackman, Berardi, Fisher). I will discuss contemporary artists utilising AI and neural networks in their practice and attempt to situate my own research and practice within this field.
As a basis for this research, I have undertaken a practical exploration of the technology. Specifically, I have created what Lisa Blackman might term “haunted data” (2019) – a “ghost” of my father’s creative spirit, the way he used to paint landscapes – and through this process have been able to travel with him on a final painting trip together, a “lost future” we were unable to share while he was alive.
–
Semantics
At the outset, a brief semantic explainer. Generative Adversarial Networks (GANs) are a class of neural network in which two networks – a generator that produces images and a discriminator that judges them against real examples – are trained in competition, an arrangement that has proved especially effective for high-fidelity image generation (Goodfellow et al., 2014). While I will focus on this technology specifically as it relates to my practice, I will also discuss neural networks more broadly, as this is the area of technology with the greatest ubiquity and influence in contemporary “digital” life. Google, “the most perfect coloniser of all time” according to Franco Berardi, has announced that machine learning (utilising neural networks) will be the company’s focus, and it must be stated that working with neural network technology means working with tools largely made by Google.
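To make the adversarial arrangement concrete, here is a minimal sketch in PyTorch (the framework, layer sizes and learning rates are my own illustrative choices, not drawn from any particular published model):

```python
import torch
import torch.nn as nn

# Toy generator: maps a 64-number random "latent" vector to a flat 28x28 image.
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Toy discriminator: scores an image as real (towards 1) or generated (towards 0).
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images):  # real_images: (batch, 784), scaled to [-1, 1]
    batch = real_images.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1. The discriminator learns to tell real images from generated ones.
    d_opt.zero_grad()
    generated = generator(torch.randn(batch, 64))
    d_loss = (loss_fn(discriminator(real_images), real)
              + loss_fn(discriminator(generated.detach()), fake))
    d_loss.backward()
    d_opt.step()

    # 2. The generator learns to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, 64))), real)
    g_loss.backward()
    g_opt.step()
```

The competition is the point: neither network is given an explicit rule for what makes an image convincing; each improves only by outmanoeuvring the other.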
Neural networks are computationally intensive algorithms that, at the most basic level, recognise patterns. Loosely modelled on synaptic plasticity – the manner in which neurons in a brain connect and strengthen over time to create memories – a neural net builds a “model” that attempts to best describe a training dataset. The dataset may be images, sounds, text, weather patterns, statistics; anything that may be regularised into a parsable syntax and fed into the network.
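As a minimal sketch of what “building a model” means in practice (PyTorch again, with an invented toy dataset; a real training set would of course be far richer):

```python
import torch
import torch.nn as nn

# Invented toy dataset: 100 examples of the hidden pattern y = 2x + 1.
xs = torch.linspace(-1, 1, 100).unsqueeze(1)
ys = 2 * xs + 1

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(500):
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(xs), ys)  # how far off is the model?
    loss.backward()                               # trace blame back to each weight
    optimiser.step()                              # nudge the weights accordingly
```

Repeated over hundreds of passes, the weights gradually take on the shape of the pattern in the data – the “imprinting” discussed below.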
Given the epistemological heritage of the term “neural network”, it is intuitive that this type of technology is suited to tasks involving artificial intelligence and human-computer interaction. If the technology itself is built – however loosely – to function like a brain, then a natural theoretical starting point might be the posthumanist perspective.
In her book “My Mother Was a Computer”, N. Katherine Hayles (2005) explores “different versions of the posthuman as they continue to evolve in conjunction with intelligent machines” (2005, p. 2). Moving on from a liberal humanist / posthumanist dichotomy, Hayles suggests that “this stark contrast between embodiment and disembodiment has fractured into more complex and varied formations” (2005, p. 2). This idea of breaking down dichotomies is interesting, as it gives us a first insight into how neural networks may be situated in a creative sense.
A constellation of zeroes
Before training begins, a neural network is a constellation of interconnected digital neurons – their weights just small random numbers hovering around zero – with no imprinted values encoded within: a blank slate. The shape of the neural model that forms is imprinted through the repetitive analysis of training data. Neural networks bring materiality to what Hayles describes as intermediation – “the entanglement of the bodies of texts and digital subjects” (2005, p. 7).
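The blank slate is quite literal in code: a freshly constructed layer contains nothing but initialisation noise (a PyTorch illustration; other frameworks behave equivalently):

```python
import torch.nn as nn

layer = nn.Linear(784, 256)   # a brand-new, untrained layer
print(layer.weight[0, :5])    # small random values near zero:
                              # no trace yet of any training data
```

Only as training data passes through, and errors are corrected, do these numbers settle into a structure – the imprint described above.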
In practice, this training process highlights the most important aspect of neural network architecture – its reliance on “quality data” to produce an expected outcome. There are many well-documented stories of neural networks producing problematic results from flawed or biased training data.
The goal when pursuing “best practice” neural network building is a very large and robust dataset, preferably with millions of correctly categorised entries giving a balanced reflection of every possible pattern the network will be expected to analyse in the future. DeepMind’s BigGAN is (as of early 2019) considered one of the best implementations of a GAN model – it was trained on ImageNet, an image database of approximately 14 million images, and its results are often virtually photographic.
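BigGAN’s released models are straightforward to sample from; one community PyTorch port (HuggingFace’s pytorch-pretrained-biggan – a third-party package named here for convenience, not something cited in this essay) works roughly like this:

```python
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample)

# Download one of the released BigGAN checkpoints.
model = BigGAN.from_pretrained('biggan-deep-256')

# Ask for an ImageNet category by name, plus a random latent vector.
class_vector = torch.from_numpy(one_hot_from_names(['soap bubble'], batch_size=1))
noise_vector = torch.from_numpy(truncated_noise_sample(truncation=0.4, batch_size=1))

with torch.no_grad():
    output = model(noise_vector, class_vector, 0.4)  # a 256x256 image tensor
```

The fidelity is striking, but the available categories – and therefore the aesthetic territory – are fixed by that enormous training set.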
However, through practice-based research I have found that a key “characteristic” of neural networks – and GANs in particular – is the value in training a network on a much more limited dataset and seeing what emergent properties the network reveals.
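In practical terms this simply means pointing the training loop at a small personal archive instead of millions of categorised entries. A sketch (torchvision shown; the folder name is a placeholder for one’s own images, and a GAN training step such as the one sketched earlier would sit inside the loop):

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# A deliberately limited dataset: a few hundred images, not millions.
# (The path is a placeholder; ImageFolder expects images grouped
# into at least one subfolder.)
preprocess = transforms.Compose([
    transforms.Resize(64),
    transforms.CenterCrop(64),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # scale to [-1, 1]
])
dataset = datasets.ImageFolder('paintings/', transform=preprocess)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

for epoch in range(300):        # small data needs many passes, and what an
    for images, _ in loader:    # engineer would dismiss as over-fitting is
        ...                     # precisely where the emergent character appears
```

What “best practice” treats as failure – the network memorising, blurring and hallucinating its way around too little data – becomes, in an art context, the material itself.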
This article continues in the next blog post.