Greener Grass is Growing: Vampiric Technologies and the Anthropocene

Article written for Yale University Art Journal Precog.

As overheated Silicon Valley technocrats, dizzy Hollywood A-listers and celebrity pop scientists unite in collectively hallucinating the impending artificial-superintelligence apocalypse known as the technological Singularity, the Anthropocene chugs along toward an uncertain end, while the mighty Homo sapiens proves incapable of positioning itself anywhere but the absolute center of it all, desperately inept at thinking beyond the horizon of its own experience. The narrative architecture of vampires, Frankenstein and other pre-Modern monsters has colored not only the way humans imagine themselves, in binary opposition to the monstrous other, but has also limited our understanding of non-human sentient actors like Artificial Intelligence to devious replicas of ourselves, expert only in amplifying our own ethical failings.

While K-pop sensation ‘Gangnam Style’ breaks YouTube by garnering more views than can be expressed in a 32-bit integer, the digital cloud faces an alleged meltdown in the year 2038, because the Unix-derived operating systems running on cloud servers use a system clock that counts up in seconds from January 1, 1970, and is likewise limited to 32 bits (Griffin, 2014). Following the dystopic Singularity hysteria backward in time through culture, can the AI monsters currently speculated upon by pop culture and science alike be traced to Gothic and Modernist conceptions, as part and parcel of Y2K-millennium-bug-style techno-hype phenomena? The fever-inducing frenzy whipped up by these imagined apocalyptic technical disasters pairs synergistically with the landscape of cultural references in which Modernist, Gothic and even medieval monsters such as the Golem reside. The connection is made tenable via the lurking theism present in any of our anthropomorphic creations: playing God is simultaneously irresistible and uncomfortable. Additionally, humans may find in AI a particularly suitable strawman onto which to offload the burden of their ethical inadequacies.
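The arithmetic behind both anecdotes can be sketched in a few lines of Python (the variable names here are mine, invented for illustration, not drawn from either report):

```python
from datetime import datetime, timedelta, timezone

# Largest value a signed 32-bit integer can hold.
INT32_MAX = 2**31 - 1  # 2,147,483,647

# 'Gangnam Style' exceeded this view count, which is why YouTube
# reportedly moved its counter to a 64-bit integer.
print(INT32_MAX)

# A 32-bit Unix clock counts seconds from the epoch (1970-01-01 UTC)
# and therefore rolls over at:
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
rollover = epoch + timedelta(seconds=INT32_MAX)
print(rollover)  # 2038-01-19 03:14:07+00:00
```

One second past this moment, a signed 32-bit counter wraps to a negative number, which is the whole of the Year 2038 problem.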

In tracing the prevailing conception of artificial intelligence to a prehistory in the Gothic monster, a trajectory emerges that attenuates AI development to an anthropocentric model. This essay speculates multiple alternative prehistories of synthetic intelligence, aspiring to an expanded conception of human and abiological intelligence. How far into the past is it necessary to look? The Golem? The first tool? Against the claim that anthropomorphic AI started with Alan Turing and his need for AI to pass as human, this essay argues for an expanded time scale: it begins much earlier. In the constructions of man, nature and Spirit created by the Moderns and the pre-Moderns, how do we locate our desire to simulate human intelligence with machines? A line can be traced along the social and cultural development of artificial intelligence, from pre-Modern monster myths to the prevailing skeuomorphic situation that pushes an impoverished conception of synthetic intelligence.

“...If, on the contrary, our constitution authorizes anything, it is surely the accelerated socialization of nonhumans, because it never allows them to appear as elements of ‘real society’...Bizarre as these monsters may be, they posed no problem because they did not exist publicly and because their monstrous consequences remained untraceable.”
Bruno Latour, We Have Never Been Modern, 1991

The questions surrounding the development of Artificial Intelligence deserve an examination through a prudent and thoroughly critical lens. The stakes may have a wider scope, and be more interesting, than is evident in the prevailing dialectic. I argue that the anthropocentric view of non-biological cognitive systems, or so-called ‘AI’, is linked with vampires through the Gothic literary tradition of monsters, and that it obscures the possibilities afforded by a broader definition of intelligence.

...Monsters are meaning machines. They can represent gender, race, nationality, class, and sexuality in one body...Gothic fiction is a technology of subjectivity, one which produces the deviant subjectivities opposite which the normal, the healthy, and the pure can be known. Gothic, within my analysis, may be loosely defined as the rhetorical style and narrative structure designed to produce fear and desire within the reader.
Judith Halberstam, Skin Shows: Gothic Horror and the Technology of Monsters, 1995

If Halberstam theorizes Dracula as the ultimate symbol for antisemitism, racism, sexism, and wannabe aristocracy, surely we can find yet another malady to add to the list; namely, is the infamous vampire also to blame for our limited understanding of intelligence? Our relatively brief history with AI includes scant options for imagining its place in culture or even figuring a relationship to it: either we ask AI to simulate humans in various ways, to ‘pass’ as human, per Alan Turing’s famous guideline for artificial intelligence, the Turing Test, or we end up with anti-human digital monsters. As a precondition for the machine-versus-human situation circulating ominously throughout various global narratives, the Turing Test is still considered a gold standard for machinic intelligence. As developers furiously toil away at the simulation of human thinking, and as anthropomorphic AI development progresses, the differences between human and machine intelligence become further individuated, necessitating various workarounds to interface the two. In a further plot twist, AI has been implicated in defeating the ‘reverse Turing Test’ known as Captcha (I am not a robot). In 2017, Google retired its Captcha system, which guarded websites against automated attacks by essentially testing for human intelligence, after it was tricked by a particularly ‘human’ algorithm (Griffin, 2017).

...AI will be what I call a Copernican trauma. Copernican traumas are these sort of moments in history where some sort of way in which we thought we were the central special case, species, whether it’s the planet that was the center of the universe, or Darwinian biology was a Copernican trauma, neuroscience is a Copernican trauma of demystification of mind. Queer theory is a Copernican trauma. AI will prove to be a Copernican trauma.
Benjamin Bratton, Techonomy 16 Conference, 2016

If given a choice, would AI even choose human embodiment over other formats? Has anyone asked AI yet? Examining the possibilities of multiple prehistories of AI, we can count several trajectories. One storyline runs from myths like the Golem, through Gothic monsters and Russian Cosmism with its attempts at ‘perfecting’ the human, to Replicant-style creations leveraging the primarily anthropomorphic and anthropocentric AI development model currently underway. Another version could include a foray into the multitude of intelligences counted among non-human biological and non-biological entities, as a divergence from human-centered thinking. Looking towards plant and animal ecologies that have sentient properties could prove fruitful in nudging the human brain out of the center of AI research. Let us briefly consider a few scenarios. A tick’s specific sensory faculties not only interact with but essentially create its environment (Umwelt) through the capacities it has for sensing (Uexküll, 2010). For the little creature, the universe consists only of the chemicals secreted from sweat glands and the surfaces that contain said glands. Its primary and sole activity is to wait for the cue from the sweaty chemicals and throw itself forth. The environment is defined by what can be sensed, and the sensory possibilities are defined by a combination of what is possible within the environment and what is practical for the animal. The architecture of this complex but highly practical situation is one of mutual affordances. Introducing the concept of milieu as a spatialization of knowledge, a codified environment containing actors with senses attuned to its elements, further emphasizes the limits of the human conception of a totalized reality (Deleuze and Guattari, 1983).
The programmed interplay between knowledge, or sense, and habitat problematizes any attempt to specify what lies outside our sensory capabilities, and to claim that there is nothing outside them seems contrary to the variety of sensing we are already aware of among natural ecologies. The sci-fi novel Solaris illustrates this concept adeptly in the supreme crosstalk that occurs when the astronauts and the sentient ocean of Solaris attempt to interface (Lem, 1971). Organisms are broadly capable of sensing only what is necessary. Humans can be viewed as having specific sensory capabilities just as the tick does, and it would therefore be imprudent to believe that we hold any monopoly, scientific or otherwise, on The Real. Following these models, can we count the multitude of digital synthetic sensing systems presently embedded in our built environment, which parallel the tick and its Umwelt, as constitutive of intelligence? In its current form, computation is a clumsy approximation of non-human biological intelligent systems, and a radical redefinition of the scope of sentience is a necessary precondition to creating a landscape of AI that more accurately reflects the plurality of intelligence models currently out in the wild.

Perhaps the skeuomorphic semantic labeling of the processes employed by artificial intelligence with human terminology can be perceived as an epic autocorrect fail of the Anthropocene, an iPhone-era Jesus-on-my-toast situation. Human sensory activities such as seeing involve several processes that we mistakenly conflate with digital operations. The processes involved in seeing as a human are broadly particular to hominoids, and machine vision is altogether different. Furthermore, while human memory works by swapping the past into the present, computers register and store data. By the same token, vision in the human realm utilizes images, while machine vision is comprised solely of numerical data, translated into images only when interfacing with humans. Machines, much as plants harvest light through photosynthesis without forming pictures, see without visual images; they see with data. Are data sets that are not human-facing, but designed by and for digital machines, components of a machinic intelligent ecology, a digital Umwelt as described by Uexküll?
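The claim that machine vision traffics in numbers rather than pictures can be made concrete with a deliberately toy Python sketch (the tiny grayscale ‘image’ and the differencing step are invented here for illustration, not taken from any particular vision system):

```python
# A machine-readable 'image': nothing but numbers (grayscale values 0-255).
# A human needs this rendered as pixels; the machine never does.
image = [
    [0,   0,   255, 255],
    [0,   0,   255, 255],
    [0,   0,   255, 255],
]

# The machine 'sees' the vertical edge purely arithmetically, by
# differencing neighboring values. No picture is ever formed.
edges = [
    [abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
    for row in image
]
print(edges)  # the edge registers as a column of 255s
```

Everything meaningful to the machine happens in the numeric domain; the conversion to a viewable image is a courtesy extended to the human at the interface.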

What can we learn about thinking from non-human forms of intelligence? If we imagine non-human ways of thinking and being, could we begin by cataloging entities and systems to be included in a synthetic sentient ecology? In the space between Earth and world, or nature and culture, lies a dialectic space ripe for engagement with the possibilities of a non-anthropocentric and non-anthropomorphic model of ‘artificial intelligence’. What considerations are in order to avoid the historical pitfalls of such a model? A necessarily multidisciplinary method, bringing social, technical and political modes into play, would be the precursor to any meaningful engagement with a radically more open conception of the possibilities of mineral intelligence. Research along this track will only become more relevant as sentient machines grow in ubiquity. In what ways can synthetic intelligence develop sans the burden of the monster myth? If we preclude humans as the dominant form of intelligence, perhaps looking to non-human biological sentient actors as a parallel model for new mineral intelligence would help us achieve a situation in which there are ‘machines teaching humans a fuller and truer range of what thinking can be’ (Bratton, 2015).

Bratton, Benjamin. “Outing A.I.: Beyond the Turing Test.” February 23, 2015.

Bratton, Benjamin. Techonomy 16 conference, New York City, December 9, 2016.

Deleuze, Gilles, and Félix Guattari. Capitalism and Schizophrenia. University of Minnesota Press, 1983.

Griffin, Andrew. “Google kills off the Captcha, ensuring humans don’t need to see the most annoying thing on the internet.” The Independent, March 13, 2017. Accessed July 31, 2018.

Griffin, Andrew. “Year 2038 problem: How did Gangnam Style predict the new millennium bug?” The Independent, December 17, 2014.

Lem, Stanisław. Solaris. Translated from the French by Joanna Kilmartin and Steve Cox. Afterword by Darko Suvin. Berkley Publishing Corp., 1971.

Officialpsy, director. Gangnam Style. YouTube, July 15, 2012.

Uexküll, Jakob von, et al. A Foray into the Worlds of Animals and Humans: with a Theory of Meaning. University of Minnesota Press, 2010.


‘I just checked in to see what condition my condition is in’
Kenny Rogers, during his freakout period

Music is ubiquitous throughout the world, which suggests it may be part of our biology rather than a purely cultural phenomenon. Throughout history, many different cultural groups have created musical systems, and despite the diversity of these systems, several elements of music sound comparable across cultures. Since all human cultures develop musical systems, and those systems share numerous qualities, it is worth considering music a biological rather than cultural phenomenon of our species. Another approach to researching music's biological nature is to look at other species for qualities similar to those that define human musicality.

There are several examples of music-like behavior in other animals. Birdsong, for instance, shares some similarities with human music, such as a highly structured syntax and rhetoric-like recurring phrases. Similarly, humpback whale songs have been described as "the most complex animal vocalizations ever studied"; these songs can last up to half an hour and are composed of multiple themes that are repeated and varied.

There is also evidence that some animals react to music in ways similar to humans. For example, research has shown that African gray parrots can keep a beat and move to the rhythm of music. Similarly, studies on chimpanzees have found that they react to music with positive emotions, such as joy and happiness.

So, what does this all mean? It's still unclear exactly why music exists, but it seems clear that it is a fundamental part of our biology. It may be that music helps us bond with others, express ourselves, or simply brings us pleasure. Whatever the reason, it's clear that music is an important part of who we are.

Continuing the current trend of AI-powered everything, Delaware-based software company Mubert has created a platform that generates endless loops of music from selectable parameters like mood and activity. It seems to also generate its own song titles, and based on recent tracks tagged with the #Corporate mood, like ‘Harm Nature’ and ‘Well-lived victims’, one wonders if Mubert will go the way of Tay, Microsoft's whacked-out, 4chan-inspired chatbot. Tay was designed to mimic the language of teenagers; it quickly began repeating racist and sexist remarks learned from other Twitter users and had to be taken offline after just a few hours of negative publicity.

The lineage of generative art exists outside of purely computational practices, and can be traced to composers such as Iannis Xenakis, Steve Reich, John Cage, Terry Riley and Brian Eno, and to artists such as Lillian Schwartz, Sol LeWitt, Sonia Sheridan, Muriel Cooper and Nam June Paik. Generative art also includes artists working with biological systems, such as Tomás Saraceno’s works with spiders and spider webs, and Heather Barnett’s work with slime mold.

Have recommender systems ruined music, and culture in general?

If we take ‘Daisy Bell’, as sung by an IBM computer in 1961, as an anthropomorphic example of machine music, we see something else happening with Herbert Eimert, Stockhausen’s Kontakte, and so on: these composers were not trying to simulate human music but looking to make something else. (In 2001: A Space Odyssey, the HAL 9000 computer sings ‘Daisy Bell’ as its memory cards are removed, eventually reverting to a pre-linguistic babble.) On the one hand, then, there is ‘Daisy Bell’ as an example of anthropomorphic ‘AI’ or digital music; in opposition, could we cite works from Steve Reich, John Cage, Terry Riley, Brian Eno and so on as non-anthropocentric music?

Perhaps AI could set music composition free just as the camera set painting free.

Notes towards a theory of automated libidinal economies

What exactly is at stake when we consider an automated libidinal self?

Firstly, let us consider a speculative epistemology of automation – Did self-imposed inadequacies bring mechanization upon the human realm? Perhaps we can look at the hominoids and at the supposed desire to streamline the procurement of sustenance – to kill prey, presumably with sticks and rocks – as a paleolithic beta test for pizza delivery smartphone apps.

Can we consider the hominoids' earliest discovery or development of tool use as synonymous with the exploitation of automative processes? Is a stick or a rock a form of automation, of mechanization? Where does the difference between automation and mechanization begin? Does the idea of automation require repetition?

The question of what humans choose to automate and why persists.

If everything can be automated, then what will we choose to do ourselves? And how would we make those choices?

Human-specialized tasks may not be so human-specific after all. If we look at something like voting, something that combines a number of ‘human’ specializations, we can see that we actually tend to vote against our best interests. Automating voting based on data mined from our personal profiles would likely produce better results than we currently get. [Demand Full Automation Now]

What parameters would be helpful in suggesting tasks which are better completed by humans than machines?

Perhaps the question is more what do humans prefer to do? If humans use automation, historically, to free up time, what is it that time is freed up to do? Is there anything in human existence that cannot (theoretically) be automated?

From the early hominoids' use of the tree branch as extension and simultaneous weaponization of the arm, and with each further development of tools, do we see a conquering of nature?

What if there is an additional layer, a more psychological, emotional consideration?

Do tools merely extend the hominoids' basic capabilities, or do tools also serve another, perhaps more modern desire: to mediate, to extend the distance between the senses and the sensed? Were hominoids already disgusted by nature, by killing, by blood, enough to subconsciously or consciously find a way to soften the realities of nature?

What if the development and use of tools is also a kind of sublimation, a capture and control of our own impulses?

This supposition becomes complicated when we consider the libido. The paradoxical relationship of automation and libidinal desire eludes easy resolution. Is what we strive for actually an escape, a release from the messy, confusing, unstable space of emotional and physical libidinal desire? It is not only the libidinal desires that we automate, but also the libidinal satiations that are potentiated in human encounters.

Were dating services early indicators of our attempt to digest the libido with automation? Looking from personal newspaper ads, telephone party lines, video dating and IRC chat to Friendster, MySpace, Facebook and beyond, what can be said about the evolution of the libidinal economy in parallel with technological developments in social contexts?

Also, if technology is purported to be a male pursuit, where does this leave other genders in the dialectic of the automation of libidinal economies?

Furthermore, how does AIDS/terrorism/religion/celebrity fit into this line of inquiry?

The current trend in the architectural field of urban master planning is to design for global terrorism: building cities that resist terrorist activities, planning for what was previously considered an exceptional instance and casting it as a permanent state, in Agamben’s words a permanent State of Exception. How might we see the same frightening game playing out in the libidinal field, in an environment of global-scale computing, with sexual or emotional encounters having stakes high enough to be not worth risking…

Smartphone apps such as Tinder reach into yet another realm, where a libidinal encounter is all but pre-approved. Is the mutual understanding of acceptance in a sexual context, the mutual swiping left (or is it right?), enough to satisfy the sexual and libidinal needs of a post-9/11, meta-modern, automated human?

A collection of questions about the development of AI

What are the self-imposed barriers of the anthropocentric AI model? What would a catalogue of possibilities include? What is the value of a radical shift in thinking about machinic sentience?

Since culture created the earliest robot myths and technology birthed the first incarnations of automatons, society has furiously engaged in a feedback loop of anthropocentric models of automation, intelligence, sentience and existential conception. In other words, we have been dreaming of machinated humans for so long that to conceive of a non-human-modeled sentient entity has become nearly impossible.

What machinic opportunities have been missed while science and culture have been busy simulating human models of behavior, human senses and tasks well within the human scope?

If the Turing test pointed AI research towards replicating human intelligence, or at least simulating human consciousness, what social and cultural tools, methods and techniques could be employed to counteract decades or even centuries of misguided and limiting techno-human simulacra research?

What could be gained by allowing the conception of another ‘entity’ in AI, a non-anthropocentric model? What is afforded by releasing the anxiety of human simulation, in interfaces, in hardware, in ‘human-centered’ software design?

How much social and psychic bandwidth is wasted ‘sizing up’ our interactions with technology that is created in the image of the Turing test?

How might a new category of entity, one that does not seek to be human, look and act?

What features might it have?

Where do we already see evidence of non-human sentience?

In what categories should we look for examples of non-human sentience? Computer vision? Other modes of sensing? Non-centralized processing (the human brain's centralized model has certain limitations and drawbacks)?

Do we find more examples in animals, plants and mineral intelligences? Which ones? Are there others? Have we already created systems which fall outside of the models found in the natural world? What can be learned from the galactic scale? Atomic scale?

Specific case studies:

The most annoying assistants on Earth: Siri, Alexa and ‘Google lady’ and why we hate and abuse them so. (replicating a human social interaction)

CCTV and the problem with perspective (replicating the human eye, perspective drawings)

Maps, GPS and satellites: an early attempt at multi-perspectivism? (using a different view than humans have, but still only one perspective, still localized)

Telephone, SMS and email vs chat rooms, Slack, GitHub and Trello: one-to-one communication vs all-to-all (cf. the film Her, in which the AI holds thousands of conversations at once)

Parallel vs _____ logic - (newer processor chip architecture as a model in contrast to human brain)

Look at our desire to conquer nature, to control animals - how does this influence our capability of interacting with AI, or even allowing the perception of other sentient entities?

Using the above examples, how could a change in culture of AI effect a change in technological development of AI?

Trevor Paglen: Abstraction and AI Art

“Revolution in perception with machine seeing for other machines, humans are designing machines to see for us, I am trying to understand how to see.”
Trevor Paglen

Trevor Paglen, Bloom (#79655d), 2020, dye sublimation print, 26" × 19-1/2" (66 cm × 49.5 cm); framed 26-3/4" × 20-1/4" × 1-1/2" (67.9 cm × 51.4 cm × 3.8 cm)

What can machine vision teach us about art?

What if it turns out machine vision is optimized for abstraction, and not for the realism we wanted from it?

Abstraction vs Realism in AI art

How could we use AI machine vision for ‘native’ abstractions?

What might be AI’s ‘natural mode’ of creativity without human cultural intervention?

How much does it relate to technological invention, and how much to cultural and ideological intervention, to teaching?

Possible components of native machine vision art tools: Focus/Blur, Color, Crop/Zoom, Pan, Resolution
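The listed components could be prototyped, very schematically, as plain array transforms on a numeric image. The following Python functions are hypothetical toy sketches of my own, not any existing tool's API, with the image again represented as rows of grayscale 0-255 values:

```python
def crop(img, top, left, h, w):
    """Crop/Zoom: take a sub-window of the image."""
    return [row[left:left + w] for row in img[top:top + h]]

def downsample(img, step=2):
    """Resolution: keep every `step`-th pixel in each direction."""
    return [row[::step] for row in img[::step]]

def box_blur(img):
    """Focus/Blur: average each pixel with its horizontal neighbors."""
    out = []
    for row in img:
        blurred = []
        for x in range(len(row)):
            window = row[max(0, x - 1):x + 2]
            blurred.append(sum(window) // len(window))
        out.append(blurred)
    return out

img = [[0, 0, 255, 255]] * 4   # a toy image with one vertical edge
print(crop(img, 0, 1, 2, 2))   # [[0, 255], [0, 255]]
print(downsample(img))         # [[0, 255], [0, 255]]
print(box_blur(img)[0])        # [0, 85, 170, 255] -- the edge softens
```

Pan would be a crop whose window moves over time; the point of the sketch is only that each ‘artistic’ operation is, to the machine, an arithmetic transform on numbers.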

Keywords: #Abstraction, #Semiotics, #Machine Vision, #How to see, #OOO

‘Human science is not capable of understanding it, nor experience of describing it, only one who has passed through it will know what it means, though there will be no words for it.’
St John of the Cross

‘There is a disconnect between what we see and what we think we know about what we are seeing’
Helen Pashgian

There are vast fissures between the observed and what the observer believes they know about it. Our brain can deceive us. It fills in the gaps with what it wants to see, based on our beliefs and expectations. Sometimes we can be looking right at something while our brain is seeing something else entirely. This is why eyewitness testimony is often unreliable: our memories are not perfect recordings of events; they are reconstructions, influenced by our own biases and preconceptions.

‘AI can only see through the filter of what it already knows’
Memo Akten
What if this is only partly true?

Machine vision systems can already perform many visual functions that humans perform. Instead of trying to teach machine vision systems to identify things as humans already do, let them find other ways of expressing meaning in images. It seems to me that machines may already be exceptionally good at expressing the ‘general atmosphere’ of images, through the different methods of abstraction inherent in machine vision.

‘I'm an eye. A mechanical eye. I, the machine, show you a world the way only I can see it. I free myself for today and forever from human immobility. I'm in constant movement. I approach and pull away from objects. I creep under them. I move alongside a running horse's mouth. I fall and rise with the falling and rising bodies. This is I, the machine, maneuvering in the chaotic movements, recording one movement after another in the most complex combinations. Freed from the boundaries of time and space, I coordinate any and all points of the universe, wherever I want them to be. My way leads towards the creation of a fresh perception of the world. Thus I explain in a new way the world unknown to you.’
Dziga Vertov

How can machine vision help us in our search for the sublime?

By not being able to identify or understand the meaning of what it sees from a human perspective, the machine has the possibility to express another kind of meaning, a purely aesthetic meaning based on form and mood alone, devoid of the cultural baggage associated with semiotic references.

As the abstract expressionists aimed to reflect a mood rather than a subject, AI artworks like Google’s Deep Dream, Memo Akten’s [pieces that are morphing images] and Trevor Paglen’s [color field things] similarly express a certain modality, an aesthetic sensibility or a kind of atmosphere, without being attached to the particulars of any objectivity. Without objectivity holding it back, art becomes more about the mystery of light, space and time.

To take an objective look at our field of vision would be to get a non-biased perspective on it, yet understanding is bound up with subjective seeing. When we say things like, "I see what you mean," we imply that we understand the significance of something. Several areas of study, notably psychology and biology, help us understand how we see. For example, the fact that humans find flat pictures real and meaningful is rather unusual: a puppy shown an image of another dog typically does not respond, because it does not read flat pictures the way we do. We have developed the ability both to "see" and to interpret particular meanings from flat images.

AI, Singularity and the Terror of Machinic Takeover

Marx and Gothic narratives/Vampires
Pop culture posits various dispositions of mineral intelligence that follow the pre-Modern and Modern constructions of man, nature and Spirit in articulating the Other: monsters (vampires), gender, technology and capitalism.

Using Marx’s references to the Gothic monster and horror narrative to imagine the relation between proletariat, bourgeoisie and aristocrat, together with Foucault’s work on the blood-based biological instances of power hierarchies, a line can be drawn between the Gothic horror narratives of the 19th century and the creation of the Singularity narrative, with its dystopic prediction of humanity's destruction.

AI wants to become human by drinking human blood

Human paranoia towards AI parallels vampire myths

Proletariat threatened by AI workers

Humans think AI desires to be human

Foucault, blood, and class
GROSRICHARD: Regarding the nobility, you talk in your book of a myth of blood, blood as a mythical object. But what strikes me as remarkable, apart from its symbolic function, is that blood was also regarded by this nobility as a biological object. Its racism wasn't founded on a mythical tradition, but on a veritable theory of heredity by blood. It's already a biological racism.

FOUCAULT: But I say that in my book.

GROSRICHARD: I had the impression that you were talking of blood mainly as a symbolic object.

FOUCAULT: Yes, it's true that at the moment when historians of the nobility like Boulainvilliers were singing the praises of noble blood, saying that it was the bearer of physical qualities, courage, vertu, energy, there was a correlating of the themes of generation and of nobility. But what is new in the nineteenth century is the appearance of a racist biology, entirely centred around the concept of degeneracy. Racism wasn't initially a political ideology. It was a scientific ideology which manifested itself everywhere, in Morel and the others. And the political utilisation of this ideology was made first of all by the socialists, those of the Left, before those of the Right.

LE GAUFEY: This was when the Left was nationalist?

FOUCAULT: Yes, but above all with the idea that the rotten, decadent class was that of the people at the top, and that a socialist society would have to be clean and healthy. Lombroso was a man of the Left. He wasn't a socialist in the strict sense, but he had a lot of contacts with the socialists, and they took up his ideas. The breach only took place at the end of the nineteenth century.

LE GAUFEY: Couldn't one see a confirmation of what you are saying in the nineteenth century vogue for vampire novels, in which the aristocracy is always presented as the beast to be destroyed? The vampire is always an aristocrat, and the saviour a bourgeois ....

FOUCAULT: In the eighteenth century, rumours were already circulating that debauched aristocrats abducted little children to slaughter them and regenerate themselves by bathing in their blood. The rumours even led to riots ....

LE GAUFEY: Yes, but that's only the beginning. The way the idea becomes extended is strictly bourgeois, with that whole literature of vampires whose themes recur in films today: it's always the bourgeois, without the resources of the police or the cure, who gets rid of the vampire.

FOUCAULT: Modern antisemitism began in that form. The new forces of antisemitism developed, in socialist milieus, out of the theory of degeneracy. It was said that the Jews are necessarily degenerates, firstly because they are rich, secondly because they intermarry. They have totally aberrant sexual and religious practices, so it is they who are the carriers of degeneracy in our societies. One encounters this in socialist literature down to the Dreyfus affair. Pre-Hitlerism, the nationalist antisemitism of the Right, adopted exactly the same themes in 1910.

GROSRICHARD: The Right will say that it's in the homeland of socialism that one encounters the same theme today ....
Foucault, Power/Knowledge, pp. 222-224

The Franchise of the Self: Envisioning AI through Biopolitics

Keywords: artificial intelligence, biopolitics, techniques of the self, semio-capitalism, attention colonization, immaterial labor

As capitalism’s deluge surges over the globe like an unstoppable tsunami, its operational logic has left in its wake a monumental deterritorialization of social and labor conditions, recoding the world with its violent competitive logic. The laborer, meanwhile, has been transformed from a producer of tangible output into a producer of immaterial, affective output, in a labor field that is highly precarious and whose prevailing currency is attention.

Enter Artificial Intelligence, potentially usurping labor markets like nothing before.

The discussion around the immaterial laborer has been foregrounded by the radical redefinition of Homo Oeconomicus as entrepreneur of the self (Foucault, 1979) and is evidenced by the societal deterritorializations that are part and parcel of the globalization process. Furthermore, the neoliberal conflation of the social and the economic through the assertion that rational self-interest drives the capitalist economy - that the economy is groundlessly grounded in affect (Massumi, 2014) - raises new questions: What are the libidinal and economic power relations of the capitalized, affective AI worker? What conditions does the AI worker occupy? What conditions facilitate these multiple economic and libidinal versions of the virtual Homo Oeconomicus, franchised across a multitude of physical and virtual spaces, all intent on leveraging his own self-interest in the service of the invisible hand of the market? How does this situation resemble an economic franchise of the human body?

In the flow of semio-capital (digital semiotic potentialities) between socially activated media sites and advertising companies lies the virtual producer/consumer, or immaterial digital laborer. This AI cognitariat (Berardi, 2008) occupies a position in which it both produces and consumes digital capital simultaneously, while the accumulation of the resulting surplus capital is routed to centrally owned platforms.

In the currently dominant labor field, the primary productive force lies in the creation, transformation, transmission and exchange of affective messaging. This force can be analyzed in both economic and libidinal terms. Immaterial capital can be observed flowing freely throughout the networked world: posts on social media platforms, for example, possess a certain economic value in the sense that they build capital for the user, which is eventually translated into accumulated capital for the platform in the form of advertising revenue. But can we also observe libidinal capital flowing through the same virtual spaces? What constitutes libidinal capital, and how are we to articulate such concepts in the context of immaterial digital economies? What does the libidinal economy of a social network look like? Along with the ever-expanding immaterial labor force comes the need to scrutinize contemporary labor conditions closely, with a conceptual framework able to articulate the deeply embedded and increasingly elusive alienating power structures therein. With the global acceptance of neoliberal, laissez-faire capitalism as the prevailing economic principle came the emergence of the Foucauldian concept of biopower.

With the rise of neoliberalism, and particularly after the market crisis of 2008, the precarious conditions of the emerging immaterial labor field have hardened into a state of permanent instability. How might the state of exception (Agamben, 2005) aid in theorizing an anomic immaterial labor field? Theorizing the conditions of the digital laborer through the state of exception brings several problematics into view: for example, the previously unfathomable situation of being at work (online, reachable, consuming, producing) twenty-four hours a day, via platforms running continuously on mobile devices, as normative behavior.

Additionally, if the AI ‘worker’ is recognized as an affective engine of the market economy (Massumi, 2014), the scope of what work entails enlarges enormously. If affect is both the product and the means of the worker, then what marks the boundary between a personal life that relies on emotion, intuitive communication and libidinal qualities, and the so-called working life? Has the neoliberal, technology-based economy effectively incarcerated the affective register in centrally controlled networks?

These extensions of Foucauldian biopower are the conceptual hinges that mark the initial trajectory of this research project.

I posit that the cognitive AI worker’s situation has not yet been clearly understood. As the worker occupies a multitude of co-positions, a phenomenon that I will call the franchise of the self, a number of relations remain unclear. By examining these relations through the Foucauldian concept of biopolitics, from both an economic and a libidinal approach, this research aims to articulate conditions that have so far remained unapproachable by juridical models, evasive as quantum particles under observation.

This research will focus on the renewed interest in biopolitics that has accompanied the progression towards immaterial, and in particular affective, labor as the dominant form of production. The project will critically investigate the human inhabitation of the infinitely accelerating digital time zone and the colonization of attention, the prevailing post-capitalist currency. The capitalized human and the franchise of the self, as an extrapolation of Homo Oeconomicus, will be placed at the center of the research, acting as a starting point for inquiries into the surrounding political, economic and libidinal network relations.

This research intends to unpack relational conditions that have been intensified by globalization. These conditions are entangled with networks and concepts such as online social media, identities and social and cultural critique.

Table of Discontents: A sketch in chapter introductions for a book on AI and culture

In chapter one, Rectangle vs. the World, we will use the square - from the brick to the pixel - to explore rectangularity as a dividing principle between biological and machine-based vision systems. We will discuss the use of squares and rectangles by early Modernists (Picasso, Mondrian and Malevich) before plumbing the depths of the digital square as a central feature of cybernetic systems, from cellular automata to neural networks. We will finish with a survey of contemporary art's attempts to engage with these digital forms - from Sol LeWitt's Conceptual wall drawings to the work of artists like Matthew Ritchie, Sarah Morris, Olia Lialina and Koki Tanaka, who use digital processes to redefine squares in terms of their mechanical and biological dimensions.

In chapter two, "Organisms and Automata", we will use the worm as a point of entry into a discussion of where organisms and automata meet. We begin by looking at the cybernetician Norbert Wiener's rejection of both organism and machine models in favor of the metaphor of 'anthropomorphized machines'. In contrast to Wiener, Maturana and Varela posit that it is precisely the interchangeability between organism and automaton that creates the philosophical impasse. This is because such metaphors depend on an opposition between outside observer (in this case: human) and inside object (i.e.: computer). In contrast, they posit the importance of mutual interactions and exchanges between the observer and object to create a 'biological' reality. We then turn to contemporary artists who use organic models of complex systems to dislodge oppositional thinking. While artists like Olia Lialina explore the capacity for automation in biology (in this case: in plants), Tino Sehgal, Sarah Morris and Matthew Ritchie consider how an 'organic' model can produce meaningful art works.

Chapter Three - "Pixel Vision" - uses pixels as a point of departure for discussing contemporary culture's growing interest in vision as a computational operation achieved through pixelated screens. In this chapter we will see how pixelation has been used to translate biological vision into an algorithmic problem, and explore contemporary art's interest in the eye as a digital and computational model. We will start by considering the role played by Duchamp's found art of the pencil and hand-made chalks in defining an early cybernetic understanding of vision. In Josef Albers's 1963 book, 'Interaction of Color', we can find a description of how experiments with color production through pencils and chalks reveal that color is defined in terms of continuous gradations of luminance. In the early 1960s, John McHale and others turned to these ideas in search of an algorithm that could produce 'light' as a form of communication. By the late 1960s, scientists working for various defense agencies were looking for ways to use light and video screens as a means of transmitting military messages. In the last third of this chapter we will turn to contemporary artists who participate in this lineage by using pixelated images - from Olia Lialina's use of pixels on the computer screen to create sculptural forms, through Sarah Morris's projections on architectural facades, to Matthew Ritchie's use of gif images to create time-based installations.

Chapter Four - "Machines and Mechanisms" - uses the term 'machine' to explore how contemporary art has used its own machines of production to produce new forms of meaning. 'Machine' is an ambiguous term that can refer to both the machinic and the mechanical. Often, when we talk about machines we are referring to physical things (i.e.: the computer as object). In contrast, when we speak in mechanical or technical terms, we are often referring to the design and operation of such tools rather than their physical bodies. In this chapter we will look at how two different thinkers - Adrian Critchley and Michael Fried - have used the term 'machine' to distinguish the body and material of a tool (i.e.: computer) from the embodied actions that produce meaning, knowledge and action (i.e.: machine). 'Machinery Art' is a term first coined by Chiharu Shiota in the book "The Machines That Moved" (1980) to describe visual art works that use machines to generate computational possibilities, as opposed to machine-based art, which uses technical means such as electronic circuits, mechanical parts or software programs to produce particular results. In the last third of this chapter we will look at contemporary artists who use their machines of publication, digital publishing and communication (i.e.: computer) to devise new forms of meaning.

Chapter Five - "Measures, Metaphors and Modeling" - uses the term 'measure' in three different ways: first as an abstract measure, to explore how geometric form can be used to describe complex systems; second as a 'measure' of action, measuring or evaluating that which is produced and reproduced by an artificial tool; third as a physical thing, a physical measure or instrument used for measuring or evaluating (i.e.: the metric system). In the first third of this chapter we will look at how cyberneticians including Norbert Wiener, Warren McCulloch and Heinz von Foerster used measures to describe and understand systems. In particular, we will look at how their use of geometric forms as models to understand and describe complex systems was inspired by the work of the pioneers of projective geometry: Girard Desargues, Gaspard Monge and Jean-Victor Poncelet. Here, we will see how the notion of 'projection' is used to describe the ability of geometric measurements to create a stable point from which to describe a system's dynamism. In contrast to projective geometry, we will look at computational practices that use what Gilles Deleuze has termed 'measuring machines.' In this case, algorithms are used not only as models but also as actual measurement devices - producing measures rather than modeling them. Finally, we will look at how contemporary artists have used the idea of measuring instruments to explore the possibility of new forms of meaning and thought.

Chapter Six - "Organization, Algorithms and Autopoiesis" - uses 'organization' in two ways: first, to refer to the organization of what is made and arranged by an artificial tool; second, to describe a form of thought based not on dualistic oppositions but on internal asymmetries. In this way, organization is seen as a complex network that can be differentiated from a machine or thing (i.e.: computer) by its organizational rather than mechanical qualities. In the first third of this chapter we will look at how cyberneticians from Norbert Wiener to Heinz von Foerster used the concept of organization as a means of describing and understanding non-linear systems. In particular, we will see how the use of geometric forms relates to aspects of life such as reproduction and evolution. Finally, we will look at how contemporary artists have used the idea of algorithms to develop alternative cultural practices for producing meaning and new forms of thought.

Chapter Seven - "Formal Languages" - uses formal languages as a point of departure for exploring how an artwork can be seen not only as an object but also as a language. Formal languages (e.g. computer programs, mathematical equations) are often characterized by their consistency, regularity and invariability. The nature of formal languages (e.g. mathematics and computer science) can thus be seen as static and fixed, as opposed to a dynamic system like art, which is comprised of fractal forms that come into being and cease to be over time. In this chapter we will see how the idea of 'formal systems' or 'formal languages' was used in the work of the cyberneticians Norbert Wiener, Warren McCulloch and Heinz von Foerster to describe the mathematical properties of complex systems. Second, we will see how Georges Canguilhem used the idea of the 'formal system' to analyze the logic of life (i.e.: DNA), and finally how contemporary artists use new digital technologies to create formal languages for communication, information storage and art production.

Chapter Eight - "Perception" - looks at the process of perception as a way of understanding how meaning is produced by an artificial tool or mechanism. In the first third of this chapter we will look at how the cyberneticians Norbert Wiener and Heinz von Foerster, along with the psychologist James Gibson, describe perception as a process that is both active and relational. Second, we will look at how contemporary artists use the notion of the perceptual field to describe a world that is created through embodied perception.

Chapter Nine - "Immanence" - describes an artificial system or tool (i.e.: computer) as 'immanent' when such a system is able to create in and through its own operation rather than being external to it. Here the idea of immanence describes the ability of an artificial system to create forms of meaning, thought and action in and through itself, as opposed to being merely a means for creating them. In the first third of this chapter we will look at how cyberneticians like Norbert Wiener and Heinz von Foerster use immanence to describe and understand complex systems. Second, we will look at how contemporary artists including Rene Teitelbaum and Amie Oliver use the idea of immanence to describe the algorithmic nature of their work.

Chapter Ten - "Encoding" - looks at the process by which information is stored in an artificial tool (i.e.: computer), and how such a process can be seen as 'encoding' in that the tool encodes and reproduces information through an algorithmic process. In the first third of this chapter we will look at how cyberneticians such as Norbert Wiener, Heinz von Foerster and Warren McCulloch use the idea of encoding to describe the behavior of complex systems. Secondly, we will look at how contemporary artists including Rene Teitelbaum and Amie Oliver use algorithmic processes to create new forms of meaning and action.

Chapter Eleven - "Decoding" - looks at the process by which information is accessed from an artificial tool (i.e.: computer), and how such a process can be seen as 'decoding' in that the tool decodes and reproduces information through an algorithmic process. Here, the idea of decoding describes an algorithm's ability to change or create new forms of meaning. In the first third of this chapter we will look at how cyberneticians like Norbert Wiener, Heinz von Foerster and Warren McCulloch use decoding to describe the behavior of complex systems.

Machines and Art: The Computational Aura

As computational intelligence and artistic intuition blend, machines are beginning to produce visually stunning pieces of work. When an artist's hand creates a painting, however, the artist's thoughts and emotions directly influence the art. What happens to that expressive power when it is ceded to an algorithm?

We will explore these questions by examining the artist Richard Prince's "The Painter's Studio". In this series of paintings, released in 1985, Prince created accurate depictions of his own studio. His visual system was then reproduced in a computer program that allowed the creation of a series of paintings devoid of any human touch or insight, retaining instead the accuracy of machine vision.

Machine vision is computational in nature, meaning that it uses technology to recognize and understand images. In the case of computer-generated art, a machine can analyze an image by using a software program to recognize different objects or to make decisions about how to layer objects on top of one another. Once complete, the algorithm continuously classifies and processes what it has observed.
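A minimal sketch of one such recognition step - not any particular system described here, but a toy illustration - treats the image as a grid of pixels and groups neighbouring foreground pixels into distinct 'objects':

```python
from collections import deque

# A toy binary "image": 1 = foreground pixel, 0 = background.
IMAGE = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]

def find_objects(image):
    """Label connected foreground regions (4-connectivity) --
    the simplest form of 'recognizing different objects' in a frame."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1 and not seen[r][c]:
                # Flood-fill one region with breadth-first search.
                region, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                objects.append(region)
    return objects

print(len(find_objects(IMAGE)))  # two separate regions found
```

Everything downstream - classification, layering decisions - starts from groupings like these.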

Artificial intelligence refers to computers that possess intelligent behavior, including perceiving objects and making decisions based on this information. The main difference between artificial intelligence (AI) and basic machines is that AI programs can learn from their surroundings through data analysis; therefore, they are capable of solving new problems with accuracy and innovative ability.

In the past, computers were limited to being extremely specific; this meant that they could only classify and then process information based on a single set of rules.

This, however, has changed with the rapid rise of machine learning technologies. Machine learning is an application of AI describing "computing systems that automatically learn how to perform tasks based on data they obtain from experience".
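The shift from hand-written rules to learned behaviour can be illustrated with the oldest of learning algorithms, the perceptron. The example below is a generic sketch, not tied to any system mentioned in this text: the program is never told the rule for logical AND; it adjusts its weights from labelled examples until its predictions match.

```python
# Learn the logical AND function from examples -- the weights are
# adjusted from data rather than hand-coded as a fixed rule.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, one per input
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                    # a few passes over the data
    for x, target in data:
        error = target - predict(x)    # learn only from mistakes
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])   # [0, 0, 0, 1]
```

The "single set of rules" of the earlier paragraph is here replaced by a procedure that writes its own rule from experience.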

When Prince released his series of paintings in 1985, AI was not nearly as advanced as it is today. The program that Prince used was created by Wolfram Research and named Mathematica, a system capable of running algorithms developed by Stephen Wolfram. Wolfram began his career as a physicist and became known for his studies of cellular automata, simple rule systems that have since been widely used to create digital art.
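An elementary cellular automaton of the kind Wolfram studied can be stated in a few lines. The sketch below implements the well-known Rule 30: each cell's next state is looked up from the states of the cell and its two neighbours, and a single live cell unfolds into an intricate triangular pattern.

```python
def step(cells, rule=30):
    """One generation of an elementary cellular automaton: each cell's
    next state depends only on itself and its two neighbours (wrapping
    at the edges). The rule number's bits are the lookup table."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and print a few generations.
row = [0] * 15
row[7] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The visual complexity comes entirely from iterating one local rule, which is precisely what made such systems attractive for generative art.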

Prince adapted this program so that it could be used on an Apple Macintosh to paint the images of his studio. The code for this adaptation was made available online and can be found on the Wolfram website. The program was then connected to a laserdisc player driven by a Macintosh computer with a modem, allowing it to paint at will. There were two variations of this system: the first could create an image on its own, while the second created pictures based upon human input, painting and sculpting new forms of shape and texture only possible through computation. Prince's use of this program is a prime example of how new technologies are transforming the visual arts by allowing machines to create art.

Art is no longer exclusively associated with the hand-made and individualized; instead, computers are capable of producing their own line drawings, paintings and sculptural pieces. This is an advancement that may continue to change artistic expression as it becomes more and more reliant on computational systems. These new developments have prompted a wide range of conversations about what the future holds for art production and consumption.

Already today there have been strides in this direction, as machines produce drawings, paintings, models and prints in a variety of styles that look strikingly similar to traditional artwork. These recent developments have produced a range of opinions on how the use of AI in art production should be understood.

The underlying question that is being asked is whether or not machines can ever generate artwork completely on their own. It is this question which has prompted numerous discussions about the creation and reception of such artwork.  Some feel that machines will continue to produce innovative pieces from a new perspective, while others hold that these works cannot fully represent an artist's intentions. The legal controversies that have followed in the wake of computational art have challenged art institutions' abilities to control the use and circulation of their creative output, as well as their ability to protect IP rights (i.e., ownership and licensing). As the lines between creativity and coding continue to blur, artists need to build a dialogue with these new technologies by incorporating them into their work. In doing so, they will be able to explore new artistic territories, effectively bringing out the best in both artists and computers.

The artist Steven Wilson is best known for the music he creates with the band Porcupine Tree. He is also an experienced user of modern music software such as Logic Pro and Cubase. In his home in London, England, he has installed a large screen television showing images taken from satellites orbiting our planet. He uses his computer to arrange these images according to different colors and shapes, and then to make music from them. Steven Wilson's music is a mix of different sounds and tempos; music with a fast tempo, for example, makes it difficult for the listener to follow melodies. Similarly, colors may look similar in nature but can be arranged in ways that create different visual effects.

In modern software environments such as the Logic Pro and Cubase systems he uses, computers are able to constantly analyze the data they receive, whether from the manipulation of light beams on camera lenses or from interior photographs. This data is then stored in databases and analyzed by computers so that it may later be used by mathematicians to create complex mathematical equations.

Steven Wilson's artwork is also related to modern computer software in the sense that it is not created by human hands but by a computer program. From there, Steven Wilson intervenes with this piece of art by making music from it. In regards to the music industry, we can see a similar process take place. Songs are created through a large database containing different beats and sounds but then are manipulated by artists in order to create an original sound.

This process demonstrates how, in today's society, technology and art have become closely intertwined and how they depend on each other for their existence. If one were to disappear, the other would be greatly affected, because they exist together as two parts of an inseparable whole.

Let us look at the consequences that come along with machine vision’s journey into art and sketch out a pre-history of automated art, one that includes conceptual, cybernetic and computational art as it relates to the machine.

Jeffrey Shaw is a media artist and educator. He is currently in a partnership with The MIT Open Documentary Lab, where he serves as Director of Research. Shaw is also the director of “The Gorgon Project,” which consists of many different technological innovations working in conjunction with each other to bring about advancements in the world of digital art.

Let’s investigate some ways in which artificial intelligence has been used by artists to create new forms of artwork through geometric figures (geometric art). This gives an overview of how AI has historically been used by artists like Julika Rudelius and Wolfgang von Rüden, and includes examples of new AI-based artworks, such as “Katakomben” from the University of Konstanz.

I’ll attempt to give an overview of how artists can determine and understand the process of creative thinking and work by computers. Computer programmer and digital artist Julika Rudelius was born in Berlin in 1968. She studied computer science at the University of Paderborn and later at Vanderbilt University in the Department of Computer Science, graduating with a PhD. Her PhD thesis focused on switching systems (automata theory) and their applications to music, video, language and literature. Since 1987 she has lived in Berlin, where she works as a freelance computer scientist in her own studio. As a trained computer scientist she uses programming languages to express herself, mainly in digital media. Her works are often installations with a certain spatiality and are viewed as close-knit forms. Her work engages a variety of themes: language, culture and identity; Conceptualism and media theories; gender and feminism; writing and thinking processes.

Her work involves collaboration with other artists, scientists and theoreticians in Europe, North America and Asia. She has been an artist-in-residence at Yale University, New York University, The MIT Media Lab and the Center for Computer Research in Music and Acoustics in Karlsruhe. As of 2011 she teaches at the Berlin University of the Arts, and in 2007 she received an honorary doctorate from the Technical University of Vienna. She is currently working on two new installations, “The Obelisk” and “Tower”, using new technologies like 3D printing, sensors and robotic arms. Since 2013 she has been making a series of photographs titled "Grammata" using long-exposure photography: she takes pictures with extremely long shutter speeds, up to fifteen or twenty seconds, and then prints them. Next she uses prismatic filters to take out some of the light from the image, which helps her create a new type of artwork. The images range from plain to abstract; some are very dark and some are quite colorful.

Geometric art, used by artists for centuries, has origins dating back over four thousand years. It became popular during the Renaissance era after being systematized by the artist Piero della Francesca. Before algorithmic geometric art existed, it was thought that such art could only be created by humans rather than pre-programmed into computers through AI techniques. Algorithmic geometric art is created by an algorithm, a set of statements to be executed by a computer. The algorithm is then recorded so that the work can be duplicated. To create the work, the geometric patterns are first specified in an algorithm; layers are then pre-determined and set onto the picture frame. The pattern fills out from all the areas and works its way inwards to create the geometrical image. Many different colors can be used, but there is always one base color that makes up more than 50% of the image - analogous to what would be considered “focal points” and “negative space” in most modern-day artworks. This can be seen in the Sierpinski carpet, which is based on the geometric patterns described by Wacław Franciszek Sierpiński. There are many different ways to create these geometric images, and each has its own set of rules on how it is supposed to be structured, but all are based on an algorithm, which is where the work comes from.
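The Sierpinski carpet is a convenient example because the whole image follows from one rule applied at every scale. The sketch below is a generic implementation, not any particular artist's code: a point is kept only if it never lands in the centre cell of a 3x3 subdivision.

```python
def in_carpet(x, y):
    """A point is cut out of the Sierpinski carpet if, at any scale,
    it falls in the centre cell of a 3x3 subdivision of the square."""
    while x > 0 or y > 0:
        if x % 3 == 1 and y % 3 == 1:
            return False
        x, y = x // 3, y // 3
    return True

size = 27  # 3 ** 3: three levels of subdivision
for y in range(size):
    print("".join("#" if in_carpet(x, y) else " " for x in range(size)))
```

Here the "base color" of the image is simply the filled set, fixed entirely by the recursion; changing one rule changes the whole composition.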

Artists in the past used geometric art to create images that were thought to be exclusively human. They used geometry as a way to show their mastery over nature, taking a concept and turning it into something physical in a way nature could not. Many people thought that geometric forms were created only by nature; through the creation of algorithms within computers, artists are able to recreate the process by which nature builds organic structures from inorganic ones. This is not just a creation process; it also shows how nature organizes itself into geometric shapes and patterns.

Artists today are influenced by geometric art because it allows them to make their own decisions about what type of artwork they want to create, whereas before they were limited by the information explicitly given within the algorithm.

The idea of evolving computer programs has been a common theme throughout many different aspects of artificial life. It has been used to describe the evolution of the algorithms themselves, how they are able to reproduce and evolve through generations, as well as how their behavior evolves over time. This can be seen in the ‘Bird’ program created by researchers at the University of California, whose algorithm is based on birds’ natural flight patterns; it showed that these patterns could be reproduced by computers. It meant that artificial life, or in this case computer life, could keep these structures alive and allow them to evolve through time. The algorithm also showed that these structures could reproduce themselves through the generations of the computer programs. Another example of evolving computer programs are the self-replicating patterns found in ‘Life’, the cellular automaton devised by the mathematician John Conway, which shows how cell-like patterns are able to replicate without being explicitly programmed to do so. Using a network of simple local rules, it is able to create evolutionary systems that produce their own hierarchy within the program, with simple cells evolving into complex structures. Geometric art is also based on the evolution of computer programs, because the algorithm is able to reproduce and evolve over time.
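Conway's Game of Life, the cellular automaton behind the ‘Life’ patterns mentioned above, can be written in a dozen lines, which makes the point vividly: nothing in the rules mentions gliders or self-copying patterns, yet both emerge. A minimal sketch:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life: a dead cell with exactly
    three live neighbours is born; a live cell with two or three
    neighbours survives; everything else dies."""
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbours.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A glider: after four generations the same shape reappears,
# shifted one cell diagonally -- motion no rule ever mentions.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
print(cells == {(x + 1, y + 1) for x, y in glider})  # True
```

The structures that "evolve" and "reproduce" in the passage above are emergent consequences of exactly this kind of local rule.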

The concept of emergent behavior is not new; it has been used for centuries to describe how complex structures form from simple things: how fish create schools without any single leader, or how birds flock together without leaders. The most common example is the ant colony, whose members collaborate with each other in order to survive even though there are no leaders within their society.

What does the world look like to machine vision systems and how does the design of the software and hardware used in these systems create a specific ideological approach to art?

The structure of the software and hardware used in machine vision and imaging is based around how these systems capture images. Such systems rely on traditional sensors like cameras or LIDAR scanners that use a lens to collect light into a computer. The images are then processed through software and analysis algorithms so that they can be translated back into information. The algorithms shape how these systems see the world and, in turn, give each system its own ideology about what it should be able to see.
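At the lowest level, that "translation back into information" is arithmetic over a grid of brightness values. The toy edge detector below (a generic sketch, not any specific system) marks a pixel wherever brightness jumps sharply between neighbours - one of the first filtering decisions that already determines what the system can 'see':

```python
# A camera delivers a grid of brightness values (0 = dark, 9 = bright).
IMAGE = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]

def edges(image, threshold=4):
    """Mark a pixel '#' where brightness differs sharply from its
    right or lower neighbour (wrapping at the borders), '.' elsewhere."""
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(rows):
        line = ""
        for c in range(cols):
            right = abs(image[r][c] - image[r][(c + 1) % cols])
            down = abs(image[r][c] - image[(r + 1) % rows][c])
            line += "#" if max(right, down) > threshold else "."
        out.append(line)
    return out

for line in edges(IMAGE):
    print(line)
```

The choice of threshold is itself an "ideological" decision in miniature: everything below it is filtered out before any higher-level interpretation begins.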

The software used in machine vision systems is very different from the processes of human vision. The algorithms are far less complex, which lets them extract certain information from an image with less time and effort. They also work on patterns, which makes them efficient in their own way: a system that sees in patterns can distinguish a car from a cat, or tell how many people are in an image, without anything resembling deliberate counting. They do not require the depth of knowledge that human vision brings to bear. Human vision, however, is known for seeing things that machine vision cannot - reading emotion in a facial expression, for instance, or grasping the context of a scene. These are things machine vision cannot do unless they are programmed into its algorithms beforehand.

The software used by these machine vision systems has allowed them to outpace human beings in efficiency; however, this can also produce what might be called an ideological 'line of flight', borrowing from Deleuze and Guattari, and from Guattari's concept of the machinic unconscious. The idea is that users of machine vision systems can never fully understand and experience the images the system produces, because those images are not presented to them in an unbiased manner. The eye of a machine vision system filters out certain information, and only a computer or algorithm can correctly decipher what that eye has captured. When we use our own senses to capture images, there is more time to think about what we are seeing, more time to analyze what we have captured, and more time to categorize it as a whole. A tool like machine vision can never fully understand what we are seeing, because it lacks human sense organs.

What are the conditions and constraints of this approach? What are the consequences of machine vision's affordances and limitations while making artwork? These questions are central to this section of the thesis. This chapter takes a deeper look at how machine vision systems deal with these conditions and constraints: how a system processes the information it receives, which limitations and affordances are present within it, and what it can be used for. The most relevant forms of machine vision considered here are 3D models, computer vision, and depth-sensing cameras.

3D modeling is not just a form of computer art that can simply be executed with software or an algorithm. 3D modeling is the creation of objects in a computer program, often starting from two-dimensional representations of an object that are then reprojected into three-dimensional form. This is how the program works out how to construct the object and make it look as real or realistic as possible. Some 3D modeling software automates parts of the process, while other programs let you design objects manually from scratch. Which type you use depends on your preferred method, the complexity of the shapes you want to create, and the kind of experience you want your project to deliver.
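One half of the 2D/3D relationship described above can be sketched directly: a cube stored as three-dimensional vertex data, and a simple perspective projection that maps each vertex onto a two-dimensional image plane (reconstruction methods attempt the inverse of this mapping). The cube coordinates and focal length are illustrative values of my own:

```python
# A minimal sketch of the 3D/2D relationship in modeling: 3D vertex data
# projected onto a flat image plane.

# The model: eight corners of a unit cube, pushed back along the z axis.
cube = [(x, y, z + 4) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

def project(v, focal=2.0):
    """Perspective-project a 3D point onto a 2D image plane."""
    x, y, z = v
    return (focal * x / z, focal * y / z)

image = [project(v) for v in cube]
# Nearer corners (smaller z) land farther from the centre than distant
# ones - which is what makes the flat drawing read as a solid object.
```

The realism the paragraph mentions comes from exactly this divide-by-depth: foreshortening falls out of the arithmetic.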

Of Monsters and AI: Modernist Monster Myths as Gothic Reinterpretations

HAL 9000, image from Author

The myth of the Zombie as a virus we can't contain, and of AI as a virus we can't contain, is a narrative we just can't shake

As monsters began to emerge from Gothic horror literature in the 19th century, the industrial revolution in England saw the birth of the steam engine, the radical expansion of an underclass - marked by the prominence of prostitution and child labor - and a general decay of so-called Victorian values - urged on by the extreme poverty that grew from an intensified stratification of social and economic classes.

“...gothic fiction registered not only the fissures in Enlightenment humanism and rationalism but also the revolutionary tendencies of the late eighteenth century. Despite these origins, critics have read the gothic primarily as a mode focused more on psychic than on historical depth and complexity. Thanks in large part, perhaps, to the legacy of Sigmund Freud, who transferred many of the gothic's most cherished conventions (buried secrets, hidden ancestries, repressed desires, imprisoning vaults, mysterious passageways) to the self, the gothic has long been seen as a mode that expresses, above all, the nightmare landscape of the unconscious, the drives of the id, the neurotic, haunted ego.”
(Gale, 2006)

Ascending from the swirl of a refinement of capitalism unseen until then - a further alienation of worker from capital, introduced by the mechanization of production and subsequently of human life - came the Gothic monster myth, in the form of creatures such as Frankenstein's monster, the Zombie, and Dracula. All three had tenable connections with industry, capital and class: through their physical properties (Frankenstein's monster was assembled from 'spare parts' of the pre-modern workshop), through their symbolic relationship with the laboring class (the Zombie as mythic counter-revolutionary force in Haiti), and through Dracula, a being who consumes the blood of the Bourgeoisie as a means of gaming the class system. The rise of industrial capitalism and liberal economic policies ushered in an era of human misery never before seen, amplified by an abandonment of ethical discourse in the social and financial realms. This marked the dawn of laissez-faire economic liberalism.

While the metaphor of the monster is seen throughout human history, only recently has its influence leaked into industry - a reversal of the Frankenstein situation, which could be seen as an industrial-age fever dream emerging directly from the factory floor into the collective semio-capitalist economy. (Berardi 2012)

Gothic narrative and cultural metaphor:

These Gothic monsters set the stage for modernist interpretations

Keywords: Industrial revolution, Class, Freud, Marx, Modernism

Wombat Scat, Communism, The Obelisk and the Death of Art: Poems for Geometric Shapes.

©Public Domain, 3D model of Wombat Scat

A meandering look at the semiotics of evolutionary geometry, or how I learned to love the square

Is the Wombat a cultural producer? While most animals carried on making organically shaped excrement - the rabbit its small ovals, the human its variable cylindroid, the cow its circular discs - the Wombat was busy evolving its scat into a quite tidy cubic shape. And while most scientists were trying to answer the question of how the Wombat became the only non-human animal able to create a cube (something about intestines this and that), the richer question is, as per usual, why Wombats create cubes of poop. What immediately comes to mind is, of course, Arthur C. Clarke's 1968 science fiction narrative 2001: A Space Odyssey, in which a black monolith - the one excavated on the moon is designated the 'Tycho Magnetic Anomaly' - is presented in the film's pre-sequence to protohumans, in the main narrative sequence to Space Race era humans on the moon, and then once again in the film's sci-fi peyote-dream post-sequence, set in what is more or less a postmodern Louis XVI bedroom.

“Q. What's that big black monolith? A. It's a big black monolith.

Q. Where did it come from? A. From somewhere else.

Q. Who put it there? A. Intelligent beings since it has right angles and nature doesn't make right angles on its own.”
Roger Ebert

The "Tycho Magnetic Anomaly" is an impossibly glossy, deep black rectangular object with proportions of 1 x 4 x 9. The descriptive details turn out to be symbolic here. The dimensions 1:4:9 are the squares of the first three integers (1, 2, 3), and Clarke's novel hints that the series need not stop at three dimensions. So the monolith has inherent 'intelligence' in its dimensions, but is also manufactured to a precise specification that only a 'highly' intelligent species could have achieved. Critics by and large agree that the monolith represents not only a calling card for intelligent culture, but also a kind of intelligence accelerator - an aspirational votive object of a hyper-intelligent species from somewhere else, a species capable not only of producing the object, but of traversing space to deposit it where it stands.

Wombats carefully stack and arrange their excrement on rocks, logs and other flat, elevated surfaces to present the content as a means of communication. The flat surfaces help ensure the cubes don't roll away or blow off in windy conditions. The furry creatures may be signaling social messages or reproductive status with the cubes of excrement, and varying the exposed flat surfaces gives them some control over the amount of messaging-scent released to their audience.

Hierarchy of square objects as social calling cards

Wombat + Poop = Social standing

Alien Species + Monolith = Intelligence accelerator

Mao + Brick = Mao transformed the brick into a Socialist object

Arthur C. Clarke revitalized the rectangle as a synonym for human intelligence

Malevich destroyed art with a (the) Black Square

Grids made spreadsheets and spreadsheets made the world

© Public Domain, VisiCalc, the world's first spreadsheet program, was released by Personal Software (later VisiCorp) for the Apple II computer in 1979, the same year Krauss's essay on the grid was published.

‘The book will kill the edifice.’
Victor Hugo

The spreadsheet kills everything, flattening it into two-dimensional formulas with greater efficiency

Grids made Spreadsheets

When Rosalind Krauss, in her scathing 1979 critique, wrote that the grid had succeeded in announcing 'modern art's will to silence, its hostility to literature, to narrative, to discourse,' and that 'It [the grid] is what art looks like when it turns its back on nature,' she may well have portended the coming of the spreadsheet. If the deep saturation of the grid as an aesthetic system declared the ubiquitous acceptance of the anti-real, anti-natural and anti-narrative in art and culture, then the spreadsheet might aptly be named the ultimate call sign for postmodernism: the deconstruction of ethical frameworks, the amplification of a hostile perspective on nature, and the willingness to play with the precarious moral checks and balances that culture previously relied on in world-building. The affordances the spreadsheet brings - the ability to play carelessly with factors that pit finance and ethics against one another - bring a numbness, an insulation from consequences that were far more tangible in a pre-digital, pre-data-visualized, pre-Excel world.

The spreadsheet is the perfect and ultimate form of the grid system - and not only in its obvious visual characteristics, the endless rows and columns of infinitely configurable rectangles lined up.

‘Contemporary infrastructure space is the secret weapon of the most powerful people in the world precisely because it orchestrates activities that can remain unstated but are nevertheless consequential.’ 
Keller Easterling

What consequences of the spreadsheet can be found in its incessant toggling of profit versus ethics?

Now not only buildings and business parks but also entire world cities are constructed according to a formula—an infrastructural technology. Easterling

How do spreadsheets build the world?

The word “infrastructure” typically conjures associations with physical networks for transportation, communication, or utilities. Infrastructure is considered to be a hidden substrate—the binding medium or current between objects of positive consequence, shape, and law. Yet today, more than grids of pipes and wires, infrastructure includes pools of microwaves beaming from satellites and populations of atomized electronic devices that we hold in our hands. The shared standards and ideas that control everything from technical objects to management styles also constitute an infrastructure. Far from hidden, infrastructure is now the overt point of contact and access between us all—the rules governing the space of everyday life.
Keller Easterling

In what ways can spreadsheets be 'infrastructure'?

Spreadsheet as the ultimate form of alienation

While Marx theorized various forms of alienation, could he have foreseen the spreadsheet, the ultimate form of alienation?

In the retinal afterglow is a soupy matrix of details and repeatable formulas that generate most of the space in the world—what we might call infrastructure space. Buildings are often no longer singularly crafted enclosures, uniquely imagined by an architect, but reproducible products set within similar urban arrangements. Easterling

In what ways can 'repeatable formulas' include spreadsheets?

As repeatable phenomena engineered around logistics and the bottom line they constitute an infrastructural technology with elaborate routines and schedules for organizing consumption. Ironically, the more rationalized these spatial products become the better suited they are to irrational fictions of branding, complete with costumes and a patois of managementese. Easterling

Market promotions or prevailing political ideologies lubricate their movement through the world. Easterling

Disposition is immanent, not in the moving parts, but in the relationships between the components. Easterling

The disposition of the spreadsheet? It allows for never-before-realized comparisons of variables, devoid of ethics. Spreadsheets are the free zones of economics.
Zizek and alienation


Square vs the World: Seeing like a Machine

Waves 001, Davey Whitcraft, 2022. Video, dimensions variable, 1 minute running time.

‘The shape of things to come.’
Eckbo, William W

‘Observe that rectangles must occur at the start of everything; they are the first form we make when making form.’
Mondrian, Piet. The artist who defined modern art by pitting square against circle . p. 180

How to interrogate the square? From the brick to the pixel, I see rectangularity as a divisional principle between biological and machine-based vision systems.

The square as abstraction
The square or rectangle is the ultimate abstraction. This is readily observable in art, design and architecture - in the works of Cézanne and Mondrian, and deeply rooted in modernist design and architecture, which coincided with an industrial revolution whose building was likewise square-based. More camouflaged, however, is the presence of the square in systems of soft power, where its harsh angles lurk ominously in the construction and organization of everyday life, culture and thought.

‘The square, apart from its obvious use in geometry, is the basic unit of measurement in most western cultures, and it is also the basis of architecture.’
Wikipedia Square article p. 7

‘In architecture 'the square' is an important but sometimes minor element in a plan. A rectangular room is easily made by cutting three-dimensional objects into four equal parts (e.g., with a saw or chisel). The simplest, least expensive way to construct a room with an exact square floor plan (and two identical walls) is to cut two identical triangles into four equal parts.’
Dixon p.

The square as a universal
Can an argument be constructed which proposes the rectangle as a universal not only in art, architecture and design, but also in thought, behavior and cultural disposition? This essay will argue that the square is quite literally a state of mind.

‘Cezanne's entire life was a constant struggle against external reality, which he wished to replace with his own subjective version.’
Duchaine p. 159

‘Cubism revolutionized art through an exploration of one-point perspective, objective chance and rationalized abstraction.’
Wikipedia Cubism article p. 1

The argument is also a criticism of square-based art forms such as painting, sculpture and architecture. A number of influential designers and architects at the start of the 20th century were proponents of making rectangles a central factor in modern design philosophy. They believed that abstract designs needed to be grounded in real-world applications, otherwise they would not hold up against their surroundings. The square has been so common throughout history that it may be perceived as mundane, especially where the concept was introduced well before its time.
Ramanan et al. (2007) copyright IEEE

Image credit TC

Machines See in Squares
In human culture image recognition is fundamentally important - being able to identify food, friend or foe is crucial for survival. It is both unsurprising and slightly disappointing, then, that so much energy is put into giving machines the ability to mimic this human function of seeing, identifying and classifying images. Machine vision systems are trained, using dominant cultural sensibilities, to do something that is A) of very little value to machines themselves, and B) by definition a particularly human quality - that is, a cultural construct (the categorization of objects into symbolic meanings). Machines simply don't see this way.

Let's take a look at how a typical machine vision system detects shapes.
Using a method called the Hough transform, a binary edge image is transformed into candidate lines and points. These points are then used to determine whether a specific pattern occurs significantly more often than randomly occurring line segments would.
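The voting logic of the Hough transform can be sketched compactly: every edge point votes for each (angle, distance) pair describing a line that could pass through it, and a line is "detected" where votes pile up far above chance. The angle set and the toy edge points below are my own simplifications:

```python
# A minimal sketch of Hough-transform line detection by voting.
import math
from collections import Counter

def hough_lines(points, angles=(0, 45, 90, 135)):
    """Vote in (angle, distance) space; returns the vote counter."""
    votes = Counter()
    for x, y in points:
        for deg in angles:
            t = math.radians(deg)
            # rho: distance from the origin to a line through (x, y) at
            # this angle; rounding bins nearby lines together.
            rho = round(x * math.cos(t) + y * math.sin(t))
            votes[(deg, rho)] += 1
    return votes

# Ten edge points on the diagonal y = x, plus two stray "noise" points.
edge_points = [(i, i) for i in range(10)] + [(3, 7), (8, 1)]
(angle, rho), count = hough_lines(edge_points).most_common(1)[0]
print(angle, rho, count)  # 135 0 10: the diagonal wins, noise is outvoted
```

The "significantly more than chance" test in the text corresponds to the winning bin's count towering over the one or two votes that scattered points produce.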

This output is used to determine whether an object matches the pattern that has been defined for it. If so, the object is classified accordingly and one detection is recorded. If not, another set of possible objects is presented to the machine vision system for detection, and so on, until all possible matches for that particular image are exhausted.

Most of the time, the number of matches recorded is very small and effectively random. The new algorithm developed for the ACM SIGGRAPH 2010 Challenge attempts to change this by deciding which sets of points qualify as lines or points.

In use, an object will be classified with an error rate of less than 0.01%. A human operator is required to label each decision as either 'successful' or 'not successful', but this classification can be automated with some work.

Recent work has shown that the error rate is lower than 0.1% using features based on the Hough transform (a technique patented by Paul Hough in 1962) alone.

A system that can accurately detect objects positioned within a 2x2 area requires some way of enumerating these positions. One possible method would be to create a grid containing all possible combinations of 2x2, 3x3 and 4x4 cells for input. Another would be to use images as a source, and then determine which part of the image is closest to each particular object in 3-D space.

Image recognition systems use differences in sensed light - translated into a pixel map - to detect shapes - which are then compared to a database of shapes and analyzed for probable matches. The difference between the raw light and the final pixel map that represents the sensed light can be used to determine locations of objects in 3-D space. Many methods have been developed to map this input into a final output.

When working with images, one of the fundamental activities is capture, or generating a digital representation of an image. The aim of recognition is to accurately compare a sensed image with some library of images and generate an identification record for the unknown image. This ID record is then used in further processes - such as selection or classification - to perform some action, such as categorization for further analysis or tracking for later retrieval.

A number of different methods and tools can be used for capturing images. There are several concepts at play in these systems. Firstly, the ‘edges’ of objects are used to determine background/foreground/subject. The systems look for differences in pixel value using a histogram.  Shapes are then composed based on detected edges - almost always a variation on a rectangle. In order to compare a new image with an existing library, the system will create a number of features automatically (such as edges and corners) and then compare these features with those of known images. It is possible that features will be mapped onto other features (such as mapping a line onto a rectangle’s edge) or discarded altogether.
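The edge-finding step described above can be sketched at its simplest: an "edge" is declared wherever neighbouring pixel values differ sharply, separating foreground from background. The toy image and the difference threshold are illustrative values of my own:

```python
# A minimal sketch of edge detection by pixel-value differences.
image = [  # a bright square on a dark background
    [0, 0, 0, 0, 0, 0],
    [0, 0, 9, 9, 0, 0],
    [0, 0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0, 0],
]

def edges(img, jump=5):
    """Mark pixels whose right or lower neighbour differs by more than `jump`."""
    marked = set()
    for y, row in enumerate(img):
        for x, px in enumerate(row):
            for nx, ny in ((x + 1, y), (x, y + 1)):
                if ny < len(img) and nx < len(row):
                    if abs(px - img[ny][nx]) > jump:
                        marked.add((x, y))
    return marked

outline = edges(image)
# The marked pixels trace the boundary of the bright square - the shape
# the system will go on to compare against its library.
```

Note how readily the result resolves into a rectangle: the row-and-column scan makes rectangular boundaries the path of least resistance.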

There are two types of recognition systems: template matching and feature recognition. Template matching is used when the estimated size of an object is accessible - that is, when it can be expressed in terms of pixel dimensions. These systems have a profound relationship with the limitations and characteristics of human industry, which likewise relies on variations of the rectangle to simplify mass-production. Google Earth and Google Maps both use template matching for their 3-D imagery: heights for a number of locations have been captured from space, and the systems use these to identify the topography.
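Template matching, as described above, can be sketched as a sliding window: because the template's pixel size is known, the system drags it across the image and scores each position by how exactly the pixels agree. The toy image, template and scoring rule (sum of absolute differences) are my own illustrative choices:

```python
# A minimal sketch of template matching by sliding-window comparison.
image = [
    [0, 0, 0, 0, 0],
    [0, 5, 5, 0, 0],
    [0, 5, 5, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[5, 5],
            [5, 5]]

def best_match(img, tpl):
    """Return the (x, y) offset where the template differs least from the image."""
    th, tw = len(tpl), len(tpl[0])
    scores = {}
    for y in range(len(img) - th + 1):
        for x in range(len(img[0]) - tw + 1):
            # Sum of absolute differences: 0 means a pixel-perfect match.
            scores[(x, y)] = sum(
                abs(img[y + dy][x + dx] - tpl[dy][dx])
                for dy in range(th) for dx in range(tw)
            )
    return min(scores, key=scores.get)

print(best_match(image, template))  # (1, 1): the block of 5s starts there
```

The rectangularity is structural, not incidental: both the window and its stride are defined in rows and columns, so the method literally cannot ask a non-rectangular question of the image.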

Feature recognition is used when objects are irregularly variable and do not have fixed size or shapes. This may be due to three-dimensional aspects of an object, but also because a machine vision system is unable to determine accurate dimensions of an object.

It was found in research that people’s eye movements when reading consist of saccades (rapid, ballistic movements from point to point) followed by fixations (at rest).

Recently, a study by a group of scientists from the University of Oregon showed that brain scanning techniques may be used in conjunction with computer vision systems to successfully provide information about the location of eye-movement during reading.

The current paradigm for understanding visual perception is based on the assumption that the world can be understood in terms of intrinsic features; that is, features that are naturally present within every object (such as its color, shape and size), and are independent of how it is observed by an observer. This pre-supposes that all humans perceive the world in the same way, and have fully developed brains capable of perceiving these ‘intrinsic’ features.

What can be said about the rectangle-dominant culture of human industrial production and machine vision? In short, this paradigm does not take into account that there are many ways of 'seeing' and observing. First, it ignores what has now been shown to be more than likely - that individual brains are not fully developed (a startling proposition), and therefore their potential cannot be fully exploited. In addition, it overlooks the fact that the world is hugely varied, and so accurate generalizations cannot be made about how all humans will perceive objects (see previous comment). A further problem is that this paradigm presupposes that there are intrinsic features within an object (such as its color, shape and size), when in fact these may vary according to the observer.

Most people cannot readily see the shape of an object as such. Whether the objects are circles, spheres or squares, they are often seen as a pattern of gray shapes on a white background. The human brain is so built that it can only make sense of people and objects by 'convincing' you that what lies beyond this limited space-time frame is something else. This trick relies on the mind's great flexibility and its ability to create plausible explanations for what it sees - even if those explanations contradict reality. The brain uses these explanations to fill in the gaps and to create a reality that is more meaningful, understandable and manageable.

The rectangle is the simplest basis for a rigid, repeatable structure. And so it became a dominant motif in the design of almost all industrial structures, especially in wood and metal. Indeed, one of the most common metaphors in human industrial culture is that of 'a frame', which serves as an organizing structure for larger constructs and helps us understand how things are related. A frame determines what can be contained within it, and thereby becomes a metaphor for understanding broader aspects of life because of its inherent boundaries (see also Frames (social sciences)).

For the purposes of this discussion, it is important to note how rectangle (or square) based systems of production (such as the machine vision system used for Google Earth images) are dependent on mass-production for simplifying their components and processes.

Interestingly, by and large the biological format lacks rectangularity almost totally: biological shapes tend not to be rectangles or squares. Even at the microscopic level, living things tend not to have rectangularity in their structure; instead they have curved surfaces and irregular dimensions. A typical virus, for example, consists of genetic material enclosed in a protein shell whose subunits are arranged in helical or icosahedral patterns. The human body is an example of this by virtue of its irregularity, and of the number of features it can contain. It has been shown, for instance, that individuals cannot perceive thousands of objects simultaneously as distinct objects; instead, they see them as overlapping parts of a single shape. Human spatial perception is also highly developed, with the visual field divided between the right and left hemispheres at the optic chiasm.

Of course, the pixel itself is also a rectangle.

What kind of cultural qualities can be observed from the idea that machines have the most trouble seeing/understanding biological entities?

First, it creates a new insight into the idea of constraints (or limits) on what can be observed. Machines have difficulty understanding unpredictable and changing behavior, whereas life depends on exactly these qualities. In short, machines produce rectangular and square products, which are easier to deal with because they are predictable. This suggests that machine culture and biological culture are inherently different.

Second, this realization that machines can only see rectangles presents a challenge for future robot vision systems - to make them capable of extending their reach beyond this culturally determined limit. Third, this raises an issue about the meaning of intelligence – that is, whether we can use technology to create intelligent machines. The idea of the intelligent machine has been widely debated in academic and popular science studies. Many have commented on the limitations of the ‘intelligent’ machine as a postulate for aid to human beings.

The findings discussed here suggest that biological entities are not easily understood by machines. They also highlight that these entities tend not to possess rectangularity, and therefore cannot be easily scaled or produced as products (for production lines). Instead they are highly variable and unpredictable because they are produced by organisms using a range of sensory inputs (sight, taste, hearing and touch). In addition, because organisms are highly adaptive, and because they have the capacity to use their intelligence to overcome obstacles (such as those created by rectangularity), the biological entity is not only unpredictable but also resistant to control. Furthermore, it has been shown that this resistance can result in a dangerous feedback loop when machines try to exert control over life forms.

Machines may be unable to see and understand biological entities not just because of the shape of these entities (which is unique for each entity), but also because of their complex behavior and changing nature (rather than being unchanging). In 1956 Herbert A. Simon coined the term 'satisficing' to describe a new way of thinking about human behavior, introducing it as an alternative to maximizing economic profit. Maximizing, he suggested, is not always the best way to think about decisions; people often try instead to find ways of doing well enough at what they do. One example would be a basketball player with good-enough moves on court, rather than the best moves. Another would be someone who satisfies a customer by delivering goods within an acceptable time frame, but not in the fastest way possible.
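Simon's distinction can be sketched in a few lines: maximizing scans every option for the best, while satisficing stops at the first option that clears an aspiration level. The scores and threshold below are toy values of my own:

```python
# A minimal sketch of satisficing versus maximizing (after Simon).
options = [3, 6, 8, 4, 9, 7]

def maximize(opts):
    return max(opts)               # examines every option for the best

def satisfice(opts, good_enough=7):
    for o in opts:                 # stops as soon as one will do
        if o >= good_enough:
            return o
    return None                    # nothing met the aspiration level

print(maximize(options))   # 9
print(satisfice(options))  # 8 - not the best, but good enough, found sooner
```

The satisficer trades optimality for economy of search - precisely the kind of "good enough" behavior that rigid, maximizing machine logic struggles to model.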

It almost looks as if a kind of alternate ecology is emerging from this developing system, of which machine vision and image recognition is the final link - the unit needed to close the loop of humans dominating nature, themselves and god completely. In other words, humanity's idea of mastery of the earth has taken another step. In many ways, an alternative interpretation of this anthropomorphizing of data (the tendency to see it as 'human shaped') is that data is more than mere abstraction: it is given human-like qualities because it appears 'human' in its intent and its capacity to act on people. This can be understood with the help of Max Weber's notion of Verstehen:

In short, we understand machines because they act like other human beings (even though they are not human). Thus, their shape and the way they interact with us assume the characteristics of a physical entity - such as having a face, eyes and a mouth. However, we should not underestimate the extent to which people know about machines in terms of their shape and form. Studies have shown that as well as looking at machines in anthropomorphic terms, it is possible to describe them with inventories of their physical attributes. For example, the older and heavier a machine is (for example a computer), the more likely it is to be talked about as a solid, massive entity. This is similar to the way in which older people are often portrayed as solid, rather than young and nimble.

This suggests that even though machines have been created to remove human beings from the production process (because of the danger involved), it does not mean that humans have been completely removed from these processes. Instead, our desire for predictability and control has led us to create machines that look like humans. In other words, we have replaced human beings with machines (instead of just replacing ‘worker’ with ‘robot’), because they also provide us with a sense of predictability and control. However, this is not the only way in which humans interact with machines. Humans can also interact with machines using human-like processes (such as asking them questions or waiting for them to reply). These are consistent with the idea that people have a tendency to anthropomorphize machine vision.

As shown by studies such as those conducted by Josef Kittler and Burghard Knapp, there is a tendency to anthropomorphize computer models. This is demonstrated by the way that they are often used to explain visual perception, cognitive processes and communication. Many of these early computer models were designed as representations of real world entities; they attempted to make sense of the world around us in symbolic terms (for example describing images through words or numbers).

The results of these kinds of computer models suggested that they were capable of carrying out highly complex functions, including recognising things. These models were used to support the case for behaviorist accounts of perception and cognition. For example, many have tried to use them as a case against representational accounts of perception and cognition. However, as many authors have argued in response to such claims, there is a degree of ambiguity in what can be said about machine vision.

The Biology of Seeing: How We Sense the World. Keller, Thomas (2009)

In his work on software and systems, Stuart Kauffman elaborated upon this recognition of the importance of shape by remarking that 'by increasing numbers and complexity, not merely do things get more elaborate, but they become less regular. The result is that they become more difficult to describe and manipulate’. He added that ‘lines between shapes blur’. In this way he is suggesting that the concept of binary distinctions (between shapes) may be difficult to maintain in a complex environment.

That is, a world is emerging in which the most recognizable and readily detectable surfaces, environments, objects and informational flows are created by machines (with varying degrees of human intervention), exist mostly for machines (with humans as end-users) and, if not detectable only by machines (as with machinic ‘images’ which are never abstracted to the visual), are at least as readily understood by machines as by humans. In this sense, the phenomenology of vision is no longer a solely human endeavour; it is an increasingly complex and abstract one with multiple layers and interfaces.

In a similar vein, Mark Hansen notes that: ‘In the contemporary moment we are seeing the "pre-cultural" becoming visible for the first time’. By ‘pre-cultural’ he means ‘that which precedes language and other cultural systems – as well as its systematic organization in language itself – that which precedes code: what remains irreducible to our codes’. This observation is supported by the fact that machines and human practices interact, at least in part, through codes (for example computer languages and internet protocol).

Likewise, while it remains true that we can only recognize the world in terms of its binary elements (the distinctions between objects), the way in which they are perceived is now being shaped by a process of ‘becoming visible’.

Is it then a world in which machines create machines to be seen by machines to create better machines? The boundaries between the human and the machine have been blurred in certain areas. Technologies such as 3D printing may allow humans to create and customize objects for whatever purpose they wish. However, this technology is not unique to humans – it has been used by other semi-autonomous entities (for example by companies such as Replicator). Similarly, and most obviously, 3D printing allows us to ‘print’ our own limbs using reprogrammable mechanical systems. It is frequently suggested that this technology could allow us to upgrade our bodies – perhaps in ways comparable to upgrading a computer. This would be an example of humans and machines co-evolving. Moreover, these ‘upgrades’ would be a practice in which humans are, in varying degrees, playing with machines within an environment that is itself a product of creative technics.

In this sense, new modes of visual experience can also be considered a product of machinic evolution, or at least as evidence of ‘evolutionary’ processes. In The Evolved Apprentice, Kim Sterelny suggests that the ability to make ‘good’ visual judgements is the result of learning shaped by natural selection – a process which involves making mistakes (i.e. decisions that do not directly contribute to survival). In this way, we can see the development of a means of visual communication that allows us to ‘comprehend’ the world through senses other than vision. It is possible to use computer languages and algorithms to change the colour of objects, or to alter the texture of surfaces or render them texture-less (in a plastic, for example). These are changes made possible by machinic evolution – algorithms which allow machines to recreate themselves, but within an environment that is still defined by humans. Understood in this way, visual experience is not simply a property of machines – it is also a product of interactive knowledge-making through a co-evolutionary process that allows humans to be influenced by machines.

Note also that the evolution of images in the art world has been seen as one element in the creation of new forms. The digital transformation of images into virtual reality has been called ‘a sophisticated form of virtual sculpture’. In this context, it is possible to see photography and other representational practices as being used by machines within environments which are themselves products of machinic evolution.

‘What is the world? That which can be touched, seen, and described in art.’

‘A rectangle is the simplest form of geometry.’

What has machine vision revealed that was already there but not necessarily available to human perception and comprehension?

It has been noted that while the environment is being recorded, it is also being transformed. The process of recording and processing can be seen as a kind of thinking in interaction with the environment:

To start, the visual conditions that image recognition systems prefer – the stark contrasts and geometric regularity of industrial production – only emphasize the aesthetically dominative tendencies of industry and capitalist economics, and necessitate a look into the aesthetics of mass production, labor surplus and profit dynamics. Image recognition systems’ proclivity towards objects of mass production bifurcates the biological and the non-biological through the unique sensing characteristics of machine vision. A convergence of machines and biological systems, under the conditions of biological capital, produces a perceptual root that keeps the autonomous machine vision system in check. Its regulation is performed by the economic rate of profit and by the uniquely human ability to socially recognize emotions and intentions.

Building on David Harvey's work on Neoliberalism, this paper attempts to demonstrate how the mass production of objects can be understood as an aesthetic regime of accumulation – not only in terms of capitalist relations, but also as an aesthetic regime that distorts or "de-poetizes" nature, producing images that are aesthetically based on anything but natural forms. This production is then understood to produce technologies that reproduce and further the economic-political domination and control of nature. It is argued that this aesthetic regime produces not only a need for the mass production of objects and an accelerating rate of technological change, but also a need to produce technologies that de-poetize nature through image capture and reproduction. The reproduction of images necessitates machines with the ability to recognize pre-defined icons, which are in turn dictated by their economic function as designed by humans. This programmatic idea of humans' naturalized representation of nature is contrasted with science's ability to "see" life in a different way, and the implications each has for aesthetics are pointed out. To close the argument, an example is provided that demonstrates how machine vision systems are produced as aesthetic objects themselves and open up new problems in regard to aesthetics.

To start the argument, it is necessary to examine Harvey's discussion of Neoliberalism, which serves as the basis for further discussion of the aesthetic regime of accumulation. It is argued that Harvey's understanding of Neoliberalism provides a way to understand how biological systems, under the conditions of biological capital, produce aesthetically based images. It is proposed that the production of these aesthetic images through machine vision systems is analogous to the pre-determined ways in which humans represent nature, creating a means by which these aesthetic regimes of accumulation can be further discussed.

Harvey conceives of Neoliberalism as an aesthetic regime in two senses. Firstly, it is an aesthetic regime that not only makes it possible for capitalism to continue as a world system, but also for its reproduction and intensification. Secondly, the aesthetic nature of capitalism lies in the production of subjectivities that privilege the market as a way to construct meaning and value. In light of this idea, it is important to point out that aesthetics are not just for art or entertainment. They are part of everyday life – and specifically have a role in capitalism's transmission, reproduction and intensification as an aesthetic regime. This notion allows one to make sense of how machine vision systems function: as a part of everyday life, they are also aesthetic objects whose reproduction exists in the service of intensifying capitalist relations.

Furthermore, Harvey outlines two terms that will help to further demonstrate the economic regulation and aesthetic regime produced by machine vision systems: capital accumulation and cognitive surplus. Capital accumulation is the "process by which [capitalist] relations of production, distribution and consumption constantly expand..." (Harvey, Neoliberalism). Cognitive surplus is the amount of time that humans have to think about things other than work. Harvey discusses it as a byproduct of the telecommunications technologies made possible by technological developments, which makes possible what he calls "social labor without social relations" (Harvey, A Brief History...). The amalgamation of these two terms will allow one to discuss machine vision systems in terms of their effect on humanity's cognitive surplus.

To understand how machine vision systems affect human cognition, it is important to first understand the production of these systems. It is suggested that these systems are produced through the continual production of aesthetic objects, which in turn produce the conditions for their own continued existence. This is accomplished through machines that recognize pre-defined icons and are designed to recognize and reproduce them.

Mass production is an important aspect of this argument. Machines, as a part of everyday life and as aesthetic objects, contribute to the creation and intensification of social labor relations. The cultural demand for these types of digital image capture techniques provides a means by which these systems can be mass produced at affordable rates, putting the technology into widespread use in the commercial sector. Because of this, it is also important to point out that the commercial sector begins to produce products geared towards these technologies in order to increase profit margins. One can consider this commercial use a case of "planned obsolescence," where machines and materials (like human bodies) are made to have a short lifespan so that they can be replaced. These technologies are then further employed through the commercial market and in turn produce more biological capital for those who benefit from capitalism: in this case, those who own the means of production, distribution and consumption.

It is argued that because of this, the images captured by machine vision systems are not only inherently part of the aesthetics produced by capitalism, but are also ways to make more effective the processes of accumulation. This is demonstrated through an example in which a house image recognition system was designed to identify and track people who enter a private building. Here it can be understood how these images are constructed as "hidden" within everyday life. However, when one considers them in relation to mass commercial technology that underpins machine vision systems - one has to start thinking about what such technologies could do in another way: they can be used as a means of social control and surveillance. Such technologies are made possible through the commercialization of machine vision systems, which in turn produce an aesthetic regime that has been characterized by social control and surveillance.

It is argued that this understanding can help to further explain how humans are producing a new "aesthetic regime of accumulation" through their use of machine vision systems. This regime is marked by two important features: the production of genetic information, which includes images like those produced by machine vision systems, and the transformation of these images into data – which can be further transformed into economic value. Thus, one has to start thinking about how these images are a product not only of aesthetics made possible by capitalism but also a means to generate more capital, as well as reproductive and productive processes within society itself. One can therefore think of the design and implementation of these systems as part of a larger political economy.

In order to fully understand how these systems are aesthetic objects that produce a new "aesthetic regime of accumulation," it is important to point out that they do not only produce images, but also affect humans in other ways. One way images affect humans is by influencing their behavior. This is exemplified by identification technologies, which are now being built into passports and drivers' licenses in many countries around the world. These images can be processed through facial recognition technology – used for security purposes – or through gait recognition technology – used for tracking purposes. The second way that images affect humans is through the sense of self. Consider the image one keeps on Facebook, or the rise of "selfie" culture. These images are produced by users themselves and are then used to affect their own behaviors. They are not necessarily produced by machine vision systems, but they exemplify how image production can influence human behavior in ways that go beyond its production as an aesthetic object. In fact, one can say that such effects are only possible because of machine vision systems' ability to produce images in a widespread fashion.

The concept of cognitive surplus is also important to understanding the way in which machine vision systems play a role in shaping human behavior. One can understand cognitive surplus as the ability to produce new ideas, new ways of doing things and new ways of seeing the world. Thus, cognitive surplus is a sum total of one's ability to think and act in new ways. A function of cognitive surplus is that it can be used to produce more capital (what Marx terms "labor power" in Capital). In this sense, one can consider how cognition provides a means by which humans can become more productive members of capitalism.

By understanding how these images are produced through processes - both aesthetic and economic - it becomes clear that they are productive as well as reproductive means used by humans to shape their own behavior. This means that one cannot simply separate the use of machine vision systems as aesthetic objects from their effect on humans. The production of image-based cultural products that are used to shape behavior and reproduce capitalism is part of what it means to be human within a capitalist society.

Thus, while the question of the "aestheticization" or "objectification" of humans by machine vision systems in Chapter 5 can be justified by claiming that they are an aesthetic object, this argument has much broader implications for understanding how such devices affect human behavior and social relations. As part of a larger political economy, image-based technologies are ways by which humans are able to produce more capital, through the production and reproduction of images.

‘The rectangle is the basic unit of measurement in most western cultures, and it is also the basis of architecture.’
Wikipedia Rectangle article p. 7

‘The square is the ultimate abstraction in art and can be seen in the works of Cézanne and Mondrian.’ Duchaine, Jean-Paul. Cubism: Cubist Painters, 2nd ed. (Boston: Bedford/St. Martin's, 2002). The Shape of Things to Come, p. 132

When one considers what it means to become an object that can be owned and used by others, this has wider implications on the processes that enable capitalist accumulation. An important concern in this regard is that these technologies will enable those who possess them to control more capital - and thus increase their own wealth - as well as how they are likely to impact on their own audiences and uses (cf. Chapter 5). This is because the production and distribution of such images involves those who benefit from capitalism but also those who work in industries like spy technology. In the end, these images and people are commodified as "things" - not unlike the way that slave labor is treated in societies where slavery exists. To treat humans as objects and to turn them into images is to make them into a commodity that can be owned and sold by others.

This commodity status of humans can also be extended to include their bodies. When one considers how technology has turned humans into objects for production, this concept takes on a new meaning, because everything about being human is now part of how humans are produced for other people's interests – whether for labor production or for the 1%'s capital accumulation. This idea is exemplified in the way that machines can now be used to produce human skin. This is an example of another "machine aesthetic regime" – skinning machines – like the one discussed in Chapter 5.

This seemingly inhuman quality of such images becomes evident when one considers how they are used as a means by which people can present themselves as commodities and in turn, have a larger role within capitalism. One can say this happens in several ways that are exemplified in spy technology, for example: through being able to produce images for identification purposes; through surveillance technologies; and through information technologies that make it easier for some people to survive the capitalist system. These three subsections are discussed at length elsewhere (cf. Role-Playing Games).

Historically, animals have specific ocular qualities which define/are defined by the role they hold in their environment: predators have eyes located on the front of the head, giving the depth perception necessary for hunting, while prey have eyes toward the sides of the head, giving a wider field of vision with which to scan for danger more adeptly. In the contemporary animal-machine system, however, the eyes become very important in defining how humans operate and function within a certain environment. In fact, for many animals, their eyes – as part of their ocular apparatus – have become an important way to identify one's species.

In this sense then, the "wrapping" of a human by an animal skin is an image that defines its work. It also defines its status as a commodity so that it can be used with other commodities to create other images that can then be used in the future. The primary role of animal skins is to make humans productive elements within capitalism and to help define what it means for humans to be part of capitalism at all.

However, when one considers how technologies of image production have been used to develop and encourage new markets for images and image-based products, it becomes clearer that to be human within capitalism is about using images in a productive way for the purposes of capitalism. New ways of creating and distributing images are being created as a means by which humans can not only present themselves as commodities but also use such images as a way to survive within the capitalist system. Recall that this book was originally conceived as a method through which I could present myself as a commodity – in this case, an academic commodity – and make some money on the side.

Alas, what do the unique visual abilities of our machinic observers – with all their algorithms, lenses, pixel planes, neural networks and deep-learning-enabled surveillance platforms – tell us about where they can be located in a sort of nature/culture hybrid order? Where do the visual capabilities of animals and humans intersect with those of machines? It is important to remember that we are focused upon the "where" because of the issue of identification. Who defines images, and who owns them? In order to present something (as a commodity or as an image) it has to be able to be identified: metaphorically (as a thing) or literally (through something like an ocular apparatus).

This is how these new technologies create new markets and how they best serve those who create them. This is also how these technologies enable those who use them in their work to become a more productive element within capitalism – because they help them identify better. The Aesthetics of Surveillance – this phrase is still a good way to picture what is now possible and how some people use such technologies to produce their own "aesthetic" of being. Recall that the word "aesthetics" is derived from the Greek aisthēsis (αἴσθησις), meaning "perception." So, an aesthetics of surveillance means seeing how one can be productive while looking at what others are doing – and vice versa.

The distribution and production of images have always been a part of capitalism. Such images are not just used by humans as a means of product representation, however; they can also act as agents in the process through which capitalism produces new markets. In other words, images can not only be used to present oneself as others desire you to be, but also make money for those selling or producing them (cf. Chapter 7). It cannot be as meaningless as some would say (the public who don’t read software agreement fine print) and as meaningful as AI anti-bias advocates report. Even describing the kind of mixed biological and non-biological ordering to include is a project unto its own – and would require a drastic remapping of hierarchies that we biological beings, limited by thousands of years of ideological programming, are just not equipped to accept.

In the current moment in time, the only ethical response to this technology is some form of reflexive anthropocentrism - which doesn’t really mean anything (because it’s probably as meaningful as non-reflective anthropocentrism) but if it means paying careful attention to the properties of one’s cultural system then it is a valid response.

‘This article explores rectangle evolution and how it came to be such a universal shape that can be seen everywhere.’ Blucher, David A. & Dana Cavallo. Designing for Behaviour Change: Interactive Design Principles for Successful Social Change (New York, NY: New Riders, 2012). The first use of the expression ‘Rectangle vs. the World’ is found on page 4, footnote 9, where David A. Blucher describes the scope of his research: ‘My intent was not to identify a universal design solution for every rectangle but rather to explore the actual and potential uses of rectangles across many domains (e.g., travel, medicine, business, education, etc.). The rectangle vs. the world theme originated from my observations that rectangles are everywhere and can be used in many ways beyond their more traditional applications.’

‘The rectangle is an immense subject… It is a little difficult to take it all on. The rectangle contains more lessons for designers than I can possibly enumerate.’
Eckbo Style p. 45

People are amazed when they see a square in nature


Rectangularity – Refers to the extent to which an image is square or rectangular. A square image, for instance, may be described as rectangular, since a square is simply a rectangle whose sides are equal. How are dimensions such as width, height and diagonal measured, and how are they compared? Is one dimension more important than another? What other objects can be described in terms of squares and rectangles (incl. triangles)?

Machine vision – The ability of machines to sense light and interpret it as computational data, such as arrays of pixels.
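A minimal sketch of how these two glossary entries might connect in practice: a rudimentary ‘machine vision’ step (thresholding grayscale intensities into a set of bright pixels) followed by a crude ‘rectangularity’ measure (the aspect ratio of the bright region’s bounding box, where 1.0 is a perfect square). The threshold value and the squareness measure are illustrative assumptions, not standard definitions.

```python
def bounding_box(pixels, threshold=128):
    """Find the bounding box of 'bright' pixels in a 2D grid of 0-255 values."""
    coords = [(r, c) for r, row in enumerate(pixels)
              for c, v in enumerate(row) if v >= threshold]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

def squareness(pixels, threshold=128):
    """Ratio of the shorter to the longer side of the bright region: 1.0 = square."""
    r0, c0, r1, c1 = bounding_box(pixels, threshold)
    height, width = r1 - r0 + 1, c1 - c0 + 1
    return min(width, height) / max(width, height)
```

A 4x4 patch of bright pixels scores 1.0; a 1x6 strip scores about 0.17 – the machine ‘sees’ only what the threshold and the measure allow it to see.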






Annnd here is a filter that pixelates humans:
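Since the filter itself isn’t reproduced here, below is a minimal sketch in plain Python of what such a pixelation filter might do: replace each fixed-size block of a target region with its average value. In a real pipeline the region (the detected human) would come from a person detector; here the bounding box is supplied by hand, and the function name and parameters are illustrative assumptions.

```python
def pixelate(image, box, block=8):
    """Pixelate region box=(x, y, w, h) of a 2D grayscale image (list of lists)
    by replacing each block x block tile with its average value."""
    out = [row[:] for row in image]          # copy so the input stays intact
    x, y, w, h = box
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            ys = range(by, min(by + block, y + h))
            xs = range(bx, min(bx + block, x + w))
            vals = [out[r][c] for r in ys for c in xs]
            avg = sum(vals) // len(vals)     # integer average keeps 0-255 range
            for r in ys:
                for c in xs:
                    out[r][c] = avg
    return out
```

The effect is the familiar mosaic: within the box, all detail below the block size is averaged away, while everything outside the box is untouched.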

A Toddler at the Piano

Image created by the author with the prompt ‘A toddler at the piano’

‘A toddler at the piano may hit a novel sequence of notes, but they’re not, in any meaningful sense, creative.’

Sean Dorrance Kelly  

The current landscape of AI-based art software has one thing in common: artists have been excluded from the early stages of development. Subsequently, it is hard to find signs of creativity within these tools, and a depressingly narrow and outdated worldview of art is exhibited.

Most, if not all, of the narrative about AI and art is being framed by the ‘technologists’ rather than by artists. Even art-world texts naively use the terminology and worldviews created by technology companies. I propose a more open Arts and Humanities (LOL) based look at Art and Artificial Intelligence.

AI style transfer systems are more of an image archive than a creative tool. The actual artistic realization – the creation of the ‘artistic aura’ – occurred upon the original creation by the artist. The systems create images in the style of major art-historical movements which reside deep in the annals of art history. There is a big difference between making images in the style of images already inscribed into history and making innovative art. If we go back to when the artists from these historical movements were working, they were essentially misusing their tools and working far outside the norms of other artists of their time. The work they were creating was almost abiogenic, arising from an unseeable dimension.

A tool that can only simulate past aesthetic creations is innovation in reverse.

For the AI-powered text-to-image systems, the viewer’s delight lies in the model’s inability to understand the prompt. This is a new aesthetic, one where the aesthetic object is not the image produced, but rather the spectacle of a machine failing. This is, of course, not a new phenomenon.

The history of artificial intelligence is littered with examples of this sort of spectacle. In 1948, the cybernetician Ross Ashby built a machine he called the Homeostat. The Homeostat was, in essence, a feedback loop: it would take in a signal and then try to minimize the difference between that signal and some goal. The Homeostat was not intended to be a model of the brain; it was meant to be a general-purpose learning machine. But it quickly became clear that the Homeostat was not very good at learning: it would often get stuck in a local minimum, unable to find the global minimum.
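The ‘stuck in a local minimum’ failure mode described above can be illustrated with a toy feedback loop. This is only an analogy – Ashby’s actual Homeostat was an electromechanical device, and the two-valley error landscape below is an invented assumption – but it shows the same behaviour: a loop that greedily steps wherever error decreases settles in the nearer valley even though a deeper one exists.

```python
def error(x):
    # Invented two-valley landscape: local minimum near x = 1,
    # deeper global minimum near x = -1.
    return (x * x - 1) ** 2 + 0.3 * x

def settle(x, step=0.1, iters=200):
    """Greedy feedback loop: move only if a neighbouring point has lower error."""
    for _ in range(iters):
        x = min((x - step, x, x + step), key=error)
    return x
```

Started at x = 0.5, the loop drifts into the shallow valley near x = 1 and stays there; the deeper valley near x = -1 is never reached, because greedy feedback never climbs uphill.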

‘Machine Learning and Artificial Intelligence are inflationary terms’

‘Neural Net, Deep Dream and so on are marketing terms for basic Table Lookup systems’

‘Machine learning is based on data that has been presented from before, the best they can do is adapt, and modify material that has already existed’
Sha Xin Wei. Replacing thought by algorithm, gesture by mechanism, organism by golem. 2018, European Graduate School

The AI art tools à la mode are, as Sha Xin Wei has pointed out, nothing more than extremely large ‘Lookup Tables’ for images: systems that can identify patterns in images and classify them accordingly – albeit on a super-massive scale. Most of the applications used to create AI art use the same underlying technology and have the same overall functionality. These so-called ‘Style Transfer’ systems have been trained on a large number of images in a certain style – the painter Claude Monet, for example – and can discern whether or not an image contains the visual characteristics typically found in Monet’s paintings. The software can examine things like brush strokes, color palettes and more. The process by which the software ‘learns’ (compares, identifies and classifies) image styles has some visual by-products, as shown in the psychedelic puppy images created by Google’s Deep Dream project (a neural net system trained to identify images of dogs). These by-products are a sort of ‘not quite realized’ visually averaged version of the sum of the images being fed to the system.
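Sha Xin Wei’s ‘lookup table’ characterization can be made concrete with a deliberately tiny sketch: reduce each image to a couple of summary statistics, then label new images by the nearest stored exemplar. The style names, the two features and the whole setup are illustrative assumptions – real style-transfer systems learn far richer features with neural networks – but the underlying move is the same: compare, identify, classify.

```python
import math

def features(image):
    """Reduce a 2D grayscale image (list of lists) to two numbers:
    mean brightness, and mean absolute horizontal difference
    (a crude 'brushstroke' proxy)."""
    flat = [v for row in image for v in row]
    diffs = [abs(row[i + 1] - row[i])
             for row in image for i in range(len(row) - 1)]
    return (sum(flat) / len(flat), sum(diffs) / len(diffs))

def classify(image, exemplars):
    """Label an image by its nearest exemplar feature vector - a literal lookup."""
    f = features(image)
    return min(exemplars, key=lambda name: math.dist(f, exemplars[name]))
```

Nothing in the lookup can produce a style that is not already in the table – which is precisely the point of the critique.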

The ‘style’ an AI art tool produces is a function of its training data: the more data, and the more varied the data, the more ‘styles’ the tool can produce. In the current state of AI art, the style is predetermined by the training data, and the artist has no agency in the creative process. The tools are, in a sense, ‘black boxes’; the artist has no way of knowing how the tool creates the images it produces. To create art with machines, we first need to understand how art is created. Art is not a product of following rules or imitating existing styles; it is a product of breaking rules and creating new ones – of innovation. Current AI systems are not capable of innovation, only of imitation. The first step towards machines capable of innovation is to understand how innovation happens: it is a process of exploration, of trial and error, of experimentation, of making mistakes and learning from them. Current AI systems are not capable of such exploration; they only follow rules. The second step is to understand how humans create art. The third step is to build machines capable of creativity itself – which, like art, is a product of breaking rules and creating new styles, not of following rules or imitating. None of this is an easy task.

‘You might have noticed that nearly all presentations of art produced with these models include the text prompt. The pleasure, it seems, is not in the image; rather, it’s in the spectacle of the computer’s interpretation.’
Robin Sloan, Notes on a Genre

‘Miller doesn’t fully explain how these images exhibit creativity. The computer makes decisions about colors and brushstrokes, but its transformations are more of a novelty act than the creation of original work.’

The obsession with testing whether AI is ‘intelligent’ or ‘creative’ that started with Turing’s imitation game in 1950 has gotten old.

What exactly is at stake here?

The narrative comes from a deep fear of letting go of the anthropocentric worldview that has been in place since the adoption of agricultural religions.

‘Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.’
Sean Dorrance Kelly, A philosopher argues that an AI can’t be an artist

Deep Dreaming with Nietzsche: Metaphysical Supplements

Bestiary 001. 2022, Davey Whitcraft. Film Still, dimensions variable. 

“Art is not merely an imitation of the reality of nature, but in truth a metaphysical supplement to the reality of nature, placed alongside thereof for its conquest.”
Friedrich Nietzsche, The Birth of Tragedy

Doing a ride-along in Google’s Deep Dream with Nietzsche at the wheel sounds like a good time: what would the enfant terrible of philosophy say about Deep Dream, and about GAN-created art in general?
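(A technical aside: Deep Dream’s core mechanism, stripped of the Inception network Google actually uses, is gradient ascent on the input image itself — the image is nudged, step by step, to amplify whatever already excites a chosen layer. A minimal one-dimensional sketch with a single hand-set filter, offered as a toy stand-in rather than Google’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=16)          # a 1-D stand-in for an input image
w = np.array([1.0, -2.0, 1.0])     # one fixed filter, standing in for a network layer

def objective(x):
    # mean ReLU response of the filter across all valid positions
    acts = [max(0.0, w @ x[i:i + 3]) for i in range(len(x) - 2)]
    return float(np.mean(acts))

def gradient(x):
    # analytic gradient of the objective: each active position
    # back-propagates a copy of the filter onto the image
    g = np.zeros_like(x)
    n = len(x) - 2
    for i in range(n):
        if w @ x[i:i + 3] > 0:
            g[i:i + 3] += w / n
    return g

before = objective(img)
for _ in range(50):
    img += 0.5 * gradient(img)     # gradient *ascent*: amplify what excites the filter
after = objective(img)
```

Run long enough, the ‘image’ fills with exaggerated versions of whatever structures the filter responds to — which is why Deep Dream’s outputs teem with dog faces and eyes: the network hallucinates what it was trained to see.)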

A possible starting point is to look at GAN images through the lens of the quote above. The first part, ‘Art is not merely an imitation of the reality of nature…’, can be taken to refer to the ambitions of realism to which painters of Nietzsche’s time aspired. Painting during this period worked to recreate the aesthetic realities of nature; even as abstraction arrived, painters still worked to convey a sensibility or essence of nature understood by scientists and artists alike.

That leaves the second part of the statement, which contains some heavy material to play with. Let’s look more carefully at ‘...but in truth a metaphysical supplement to the reality of nature’.

If the ‘reality’ of nature is what physics studies — the constraints fixing us to earth, time and other annoyances — then a ‘metaphysical supplement’ might help to, at least temporarily, unbind us and allow us to transcend physical reality a bit. Nietzsche’s ideas about Art included a kind of inverted Platonism, which regards a work of Art as of higher value than the ‘reality’ it depicts; in fact, the further from reality the work gets, the higher its value soars. I can definitely get on board with this idea, and can even see how a lot of Art has afforded these kinds of experiences. This concept seems almost tailor-made for some types of Art made with AI.

“While dreams are the individual man's play with reality, the sculptor's art is – in a broader sense, the play with dreams.”
Friedrich Nietzsche, The Birth of Tragedy

Nietzsche might have had some fun with GAN-based ‘painting’ systems like Nvidia’s GauGAN2, which allows the user to digitally ‘paint’ with brushes trained on specific categories of natural formations, like River, Flower or Tree. The image above was created using GauGAN2; my process involved painting the same simple line (an S-curve constrained exactly by the canvas boundaries) over and over, swapping the ‘nature skin’ paintbrush and rotating the ‘S’ by 90 degrees each time. The image contains some aesthetics from nature, but the forms and shapes are decidedly in the realm of impossibility.

“We have art so that we may not perish by the truth.”
Friedrich Nietzsche, The Will to Power