域間群聚 / The Interzone Party


The Undreamable

In summer 2015, when Google released the image generator “Deep Dream”, I was studying the older visual stimulator “Dreamachine”, invented by Brion Gysin. Although the ‘subjects’ who ‘dream’ differ in each case (the AI program in Deep Dream, the human using the Dreamachine), both gadgets are similar in that they point to an underlying structure that limits what, and shapes how, things are “seen”.

The Dreamachine is a device which flickers at a frequency that induces alpha brain-wave activity and brings the user into a sleep-transitional state of consciousness. According to Brion Gysin, the user is supposed to experience “an overwhelming flood of intensely bright colours exploded behind one’s eyelids: a multidimensional kaleidoscope whirling out through space” (Gysin, 1958). However, judging from conversations and my own Dreamachine practice, not everybody experiences the same thing: people notice things they can identify. The images induced by the Dreamachine are those already contained in their set of prior experiences. I therefore suppose that, rather than taking users out of their consciousness into a Freudian subconscious, the Dreamachine reveals the infrastructure of consciousness itself, which is pre-structured by socialization in modern society.
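To make the flicker principle concrete, consider the device’s geometry: the historical Dreamachine is a slotted cylinder spinning on a record turntable, so the flicker frequency a viewer sees follows directly from the rotation speed and the number of slits. The short Python sketch below illustrates this arithmetic; the 78 rpm speed and the range of slit counts are assumptions for illustration rather than Gysin’s exact specification.

```python
# Flicker arithmetic for a Dreamachine-style device: a slotted cylinder on a
# turntable produces one flash per slit per rotation, so the flicker
# frequency is slits * rotations-per-second. The 78 rpm speed and the slit
# counts below are illustrative assumptions, not Gysin's exact design.

ALPHA_BAND_HZ = (8.0, 13.0)  # the EEG alpha range the flicker aims to match

def flicker_hz(rpm: float, slits: int) -> float:
    """Flashes per second seen by a stationary eye."""
    return slits * rpm / 60.0

for slits in range(4, 14):
    f = flicker_hz(78, slits)
    in_band = ALPHA_BAND_HZ[0] <= f <= ALPHA_BAND_HZ[1]
    print(f"{slits:2d} slits at 78 rpm -> {f:5.2f} Hz"
          f"{'  (alpha range)' if in_band else ''}")
```

Only a narrow band of slit counts lands in the alpha range, which is why the device has to be built to a fairly precise geometry.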

A similar mechanism can be found in the context of Google’s Deep Dream. Deep Dream (http://deepdreamgenerator.com) is a technique developed by Google engineers in order to understand the way artificial intelligence classifies and defines images: how machines “see”. The engineers “feed the [artificial intelligent] network [with] an arbitrary image or photo and let the network analyze the picture. [...] then [...] ask the network to enhance whatever it detected” (Mordvintsev, Olah & Tyka, 2015). The images processed by Deep Dream consistently turned out full of eyes and animal faces. The dominant appearance of eyes was not random: the database Google used to train the program is dominated by dog images (emptv, 2015). The algorithm may be neutral, but the input data shapes the outcome.

This applies not only to how machines “see” but also to how they “identify”. Google engineers teach the program to recognize images “by simply showing [the program] many examples of what [the engineers] want [the program] to learn, hoping [the program] extracts the essence of the matter at hand, and learns to ignore what doesn’t matter” (Mordvintsev et al., 2015). For example, the engineers defined a fork for the machine (“a fork needs a handle and 2-4 tines”) by showing it many images of forks that fit this definition (Mordvintsev et al., 2015). But who defines the images in the first place? The dataset used to train the Deep Dream program is ImageNet, a database of 14 million human-labelled images: “Each year, ImageNet employs 20,000 to 30,000 people [...] who are automatically presented with images to label, receiving a tiny payment for each one” (Connor, 2015). Having investigated how the image classification program “sees” and “identifies”, it is not difficult to tell that the system is built on an infrastructure created through human labelling.

As socialization in society shapes what can be “seen” in the Dreamachine experience, Google’s Deep Dream becomes ‘socialized’ by the humans labelling its pictures. In both cases, dominant or majority ways of “seeing” and “identifying” things suppress alternative viewpoints, and may even make alternatives impossible through their simple unavailability. Even in a more liberal image classification system such as Google Images, the dominant definition still suppresses other possible interpretations, which is why projects like the World White Web (http://www.worldwhiteweb.net) appear. The World White Web, initiated by Johanna Burai, asks people to upload images of hands of people of colour to rebalance the image database of the Google search engine, in which the definition of “hand” has been normed to white hands.

To zoom out further: what is the infrastructure that determines who can participate in this definition process at all? The network of undersea cables mapped out in Nicole Starosielski’s Undersea Network project demonstrates the solidly material side of the internet, and reminds one that the information online represents the world only partially. In 2014, “nearly 75% (2.1 billion) of all internet users in the world (2.8 billion) live in the top 20 countries” (Internet Live Stats, 2015). Only those who live in places whose socio-economic situation allows it are connected by the deep-sea cables in the first place. As Vilém Flusser put it when writing about the apparatus: “Every program functions as a function of a metaprogram and the programmers of a program are functionaries of this metaprogram.”

“Socialization” takes many forms. As seen in the (in)ability of Dreamachine users to identify the unknown, the internet too, despite its often emphasized accessibility, neutrality and richness, is subject to its own “censorship” and encourages seeing things in particular ways; Google’s Deep Dream merely served as an example. Since information found online has become a primary knowledge resource for many people (Latzer, 2015), it is important to tackle or counterbalance biased information, as Johanna Burai does. Of course, this becomes even more difficult as long as the myth of internet neutrality remains convincing, and as long as the cables simply do not run to everyone.