Artistic collaboration involves a certain loss of self, because it arises out of the merging of participants. In this sense collaboration questions the notion of the creative individual and the myth of the isolated artistic genius. As such, artistic collaborations can be subversive interventions into the concept of authorship and the ideologies that surround it (Smith 189-194).
Collaborations also often simultaneously superimpose many different approaches to the collaborative process. Collaboration is therefore a multiplicitous activity in which different kinds of interactivity are interlinked; this process may also be facilitated by improvisation, which allows for continuous modification of the interactions (Smith and Dean, Improvisation). Even when we write individually, we are always collaborating with prior texts and drawing on others’ ideas, advice, and editing suggestions. This eclectic aspect of creative work has led some to argue that collaboration is the dominant mode, and individual creativity an illusion (Stillinger; Bennett 94-107).
One of the reasons why collaboration tends to be multiplicitous is that contemporary creative endeavour sometimes involves collaboration across different media and with computers. Artworks are created by an ‘assemblage’ of different expertises, media, and machines in which the computer may be a ‘participant’. In this respect contemporary collaboration is what Katherine Hayles calls posthuman: for Hayles ‘the posthuman subject is an amalgam, a collection of heterogeneous components, a material-informational entity whose boundaries undergo continuous construction and reconstruction’ (Hayles 3). Particularly important here is her argument about the conceptual shifts that information systems are creating. She suggests that in cultural and literary thought the binary of presence and absence is being progressively replaced by the binary of pattern and randomness created by information systems and computer mediation (Hayles 25-49). In other words, we used to be primarily concerned with human interactions, even if sometimes with their absence, as in Roland Barthes’s concept of ‘the death of the author’; this concern has now shifted towards computer systems as methods of organisation. Nevertheless, Hayles argues, computers can never totally replace embodied human subjects; rather, we need to continually negotiate between presence and pattern, absence and randomness (Hayles 25-49). This very negotiation is central to many computer-mediated collaborations.
Our own collaborative practice—Roger is primarily a musician and Hazel primarily a writer but we both have interdisciplinary performance and technological expertise—spans 15 years and has resulted in approximately 18 collaborative works. They are all cross-media: initially these brought together word and sound; now they sometimes also include image. They all involve multiple forms of collaboration, improvised and unfixed elements, and computer interfaces. Here we want to outline some of the stages in the making of our recent collaboration, Time, the Magician, and its ‘posthuman’ engagement with computerised processes.
Time, the Magician is a collaborative performance and sound-video piece. It combines words, sound, and image, and involves composed and improvised elements as well as computer mediation. It was conceived largely by us, but the first performance, at the Sydney Conservatorium of Music in 2005, also involved collaboration with Greg White (sound processing) and Sandy Evans (saxophone). The piece begins with a poem by Hazel, initially performed solo and then juxtaposed with live and improvised sound. This sound involves some real-time and pre-recorded sampling and processing of the voice: this—together with other sonic materials—creates a ‘voicescape’ in which the rhythm, pitch, and timbre of the voice are manipulated and the voice is also spatialised in the performance space (Smith and Dean, “Voicescapes”). The performance of the poem is followed (with a slight overlap) by screened text created in the real-time image-processing program Jitter, and this too is juxtaposed with sound and voice samples. One of the important aspects of the piece is its variability: the video-manipulated text and images change in both order and appearance each time, and the sampling and manipulation of the voice also differ from performance to performance. The example here shows short extracts from the longer performance of the work at the Sydney 2005 event.
(This is a Quicktime 7 compressed video of excerpts from the first performance of Time, the Magician by Hazel Smith and Roger Dean. The performance was given by austraLYSIS (Roger Dean, computer sound and image; Sandy Evans, saxophone; Hazel Smith, speaker; Greg White, computer sound and sound projection) at the Sydney Conservatorium of Music, October 2005. The piece in its entirety lasts about 11 minutes, while these excerpts last about four minutes, and are not cross-faded, but simply juxtaposed. The piece itself will later be released elsewhere as a Web video/sound piece, made directly from the sound and the Jitter-processed images which accompany it. This Quicktime 7 performance video uses AAC audio compression (44kHz stereo), H.264 video compression (320x230), and runs at c. 15fps and 200kbits/sec; it is prepared for HTTP fast-start streaming. It requires the Quicktime 7 plugin, and on Macintosh works best with Safari or Firefox; Explorer is no longer supported for Macintosh. The total file size is c. 6MB. You can also access the file directly through this link.)
All of our collaborations have involved different working processes. Sometimes we start with a particular topic or process in mind, but the process is always explorative and the eventual outcome unpredictable. Usually periods of working individually—often successively rather than simultaneously—alternate with discussion. We will now each describe our different roles in this particular collaboration, and the points of intersection between them.
In creating Time, the Magician we made an initial decision that Roger—who would be responsible for the programming and sound component of the piece—would work with Jitter, which we had successfully used for a previous collaboration. I would write the words, and I decided early on that I would like our collaboration to circle around ideas—which interested both Roger and me—about evolution, movement, emergence, and time. We decided that I would first write some text that would then be used as the basis of the piece, but I had no idea at this stage what form the text would take, and whether I would produce one continuous text or a number of textual fragments. In the early stages I read and ‘collaborated with’ a number of different texts, particularly Elizabeth Grosz’s book The Nick of Time. I was interested in the way Grosz sees Darwin’s work as a treatise on difference—she argues that for Darwin there are no clear-cut distinctions between different species and no absolute origin of the species. I was also stimulated by her idea that political resistance is always potential, if latent, in the repressive regimes or social structures of the past.
As I was reading and absorbing the material, I opened a file on my computer and—using a ‘bottom-up’ approach—started to write fragments, sometimes working with the Grosz text as direct trigger. A poem evolved which was a continuous whole but also discontinuous in essence: it consisted of many small fragments that, when glued together and transformed in relation to each other, reverberated through association. This was appropriate, because as the writing process developed I had decided that I would write a poem, but then also disassemble it for the screened version. This way Roger could turn each segment into a module in Jitter, and program the sequence so that the texts would appear in a different order each time.
After I had written the poem we decided on a putative structure for the work: the poem would be performed first, the musical element would start about halfway through, and the screened version—with the fragmented texts—would follow. Roger said that he would video some background material to go behind the texts, but he also suggested that I design the texts as visual objects with coloured letters, different fonts, and free spatial arrangements, as I had in some previous multimedia pieces. So I turned the texts into visual designs: this often resulted in my pulling apart sentences, phrases, and words and rearranging them. I then converted the text files into jpg files and gave them to Roger to work on.
When Hazel gave me her 32 text images, I turned these into a QuickTime video with 10 seconds per image/frame. I also shot a five-minute ‘background’ video of vegetation and ground, often moving the camera quickly over blurred objects or zooming in very close to them. The video was then edited as a continually moving sequence which oscillates between clearly defined and abstracted objects, and between artificial and natural ones. The Jitter interface is constructed largely as a sequence of three processing modules. The first continuously changes the way in which two layers (in this case text and background) are mixed; the second rotates and feeds back segments from one or both layers; and the third produces a kind of dripping across the image, with feedback, of segments from one or both layers. The interface is performable, in that the timing and sequence can be altered as the piece progresses, and within any one module most of the parameters are available for performer control—this is the essence of what we call ‘hyperimprovisation’ (Dean). Both text and image layers are ‘granulated’: after a randomly variable length of time—between 2 and 20 seconds or so—there is a jump to a randomly chosen new position in the video, and these jumps occur independently for the two layers.
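The granulation scheme described here—each layer holds its current position for a random 2-20 seconds, then cuts to a random point in its video, independently of the other layer—was realised in Jitter. Purely as an illustration of the logic, here is a minimal Python sketch; the durations, layer names, and function are hypothetical stand-ins, not the actual patch:

```python
import random

def granulate(total_secs, video_secs, seed=None):
    """Schedule jump events for one layer: hold the current position
    for a random 2-20 s, then cut to a random point in the video."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < total_secs:
        t += rng.uniform(2, 20)                         # hold length, 2-20 s
        events.append((t, rng.uniform(0, video_secs)))  # (when, where to cut)
    return events

# The two layers are granulated independently, so their cuts
# almost never coincide (durations here are hypothetical, in seconds).
text_jumps = granulate(total_secs=660, video_secs=320, seed=1)
ground_jumps = granulate(total_secs=660, video_secs=300, seed=2)
```

Because each layer draws its own hold lengths and targets, the superimposition of the two layers is different on every run, which is one source of the piece’s variability.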
Having established this approach to the image generation, and the overall shape of the piece (as defined above), the remaining aspects were left to the creative choices of the performers. In the Sydney performance both Greg White and I exploited real-time processing of the spoken text by means of the live feed and pre-recorded material. In addition we used long buffers (which contained the present performance of the text) to access the spoken text after Hazel had finished her performed opening segment. Throughout, I worked on the sound and speech components with some granulation and feedback techniques, while Greg used a range of other techniques, as well as focusing on the spatial movement of the sound around four loudspeakers surrounding the performance and listening space. Sandy Evans (saxophone)—who was familiar with the overall timeline—improvised freely while viewing the video and listening to our soundscape.
In this first performance, while I drove the sound, the computer ‘posthumanly’ (that is, without intervention) drove the image. I worked largely with MSP (Max Signal Processing), a part of the MAX/MSP/Jitter suite of platforms for midi, sound, and image, to complement sonically the Jitter-mediated video. So processes of granulation, feedback, spatial rotation (of image) or redistribution (of sound)—as well as re-emergence of objects which had been retained in the memory of the computer—were common to both the sound and image manipulation. There was therefore a degree of algorithmic synaesthesia—that is, algorithms shared between image and sound (Dean, Whitelaw, Smith, and Worrall).
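The principle of algorithmic synaesthesia—one algorithm whose output shapes both media at once—can be sketched in a few lines. In this hypothetical Python sketch (not the MAX/MSP/Jitter patch itself), a single bounded random walk stands in for a shared control process, and its values are mapped onto a feedback parameter for each medium:

```python
import random

def shared_control(n_steps, seed=0):
    """One slowly varying control stream (a bounded random walk in [0, 1]);
    the same values will drive both image and sound processing."""
    rng = random.Random(seed)
    value, stream = 0.5, []
    for _ in range(n_steps):
        value = min(1.0, max(0.0, value + rng.uniform(-0.05, 0.05)))
        stream.append(value)
    return stream

control = shared_control(100)
# The identical algorithm feeds both media, each with its own mapping:
# e.g. feedback depth for the video layer and for the audio granulator.
image_feedback = [0.2 + 0.6 * c for c in control]  # mapped to a video range
sound_feedback = [0.1 + 0.8 * c for c in control]  # mapped to an audio range
```

Because both parameter streams are monotone mappings of the same control values, the image and sound processes rise and fall together, which is the perceptual point of sharing the algorithm.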
The collaborative process involved a range of stimuli: not only processual ones, as discussed, but also conceptual ones arising from the ideas in the text Hazel provided. The concepts of evolution, movement, and emergence which were important to her writing also informed and influenced the choice of biological and artificial objects in the background video, and the nature and juxtaposition of the processing modules for both sound and image.
If we return to the issues raised at the beginning of this article, we can see how our collaboration does involve the merging of participants and the destabilising of the concept of authorship. The poem was not complete after Hazel had written it—or even after she had dislocated it—but is continually reassembled by the Jitter interface that Roger has constructed. The visual images were also produced first by Hazel, then fused with Roger’s video in continuously changing formations through the Jitter interface. The performance may involve collaboration by several people who were not involved in the original conception of the work, indicating how collaboration can become an extended and accumulative process.
The collaboration also simultaneously superimposes several different kinds of collaborative process, including the intertextual encounter with the Grosz text; the intermedia fusion of text, image and sound; the participation of a number of different people with differentiated roles and varying degrees of input; and collaboration with the computer. It is an assemblage in the terms mentioned earlier: a continuously modulating conjunction of different expertises, media, and machines.
Finally, the collaboration is simultaneously both human and posthuman. It negotiates—in the way Hayles suggests—between pattern, presence, randomness, and absence. On the one hand, it involves human intervention (the writing of the poem, the live music-making, the shooting of the video, the discussion between participants) though sometimes those interventions are hidden, merged, or subsumed. On the other hand, the Jitter interface allows for both tight programming and elements of variability and unpredictability. In this way the collaboration displaces the autonomous subject with what Hayles calls a ‘distributed system’ (Hayles 290). The consequence is that the collaborative process never reaches an endpoint: the computer interface will construct the piece differently each time, we may choose to interact with it in performance, and the sound performance will always contain many improvised and unpredictable elements. The collaborative process, like the work it produces, is ongoing, emergent, and mutating.