The Convergence Effect: Real and Virtual Encounters in Augmented Reality Art

Horea Avram


Augmented Reality—The Liminal Zone

Within the larger context of post-desktop technological philosophy and practice, an increasing number of efforts are directed towards integrating virtual information as closely as possible into specific real environments; a short list of such endeavors includes Wi-Fi connectivity, GPS-driven navigation, mobile phones, GIS (Geographic Information Systems), and the various technological systems associated with what is loosely called locative, ubiquitous and pervasive computing. Augmented Reality (AR) is directly related to these technologies, although its visualization capabilities and the experience it provides secure it a particular place within this general trend. Indeed, AR stands out for its unique capacity (or ambition) to offer a seamless combination—or what I call here an effect of convergence—of the real scene perceived by the user with virtual information overlaid on that scene interactively and in real time. The augmented scene is perceived by the viewer through different displays, the most common being AR glasses (head-mounted displays), video projections or monitors, and hand-held mobile devices such as smartphones or tablets, which are increasingly popular nowadays. One typical example of an AR application is Layar, a browser that layers information of public interest—delivered through an open-source content management system—over the actual image of a real space, streamed live on the mobile phone display. An increasing number of artists employ this type of mobile AR app to create artworks that perceptually combine material reality and virtual data: as the user points the smartphone or tablet at a specific place, virtual 3D-modelled graphics or videos appear in real time, seamlessly inserted into the image of that location according to the user’s position and orientation.

In the engineering and IT design fields, one of the first researchers to articulate a coherent conceptualization of AR and to delineate its specific capabilities is Ronald Azuma. He writes that, unlike Virtual Reality (VR), which completely immerses the user inside a synthetic environment, AR supplements reality, thereby enhancing “a user’s perception of and interaction with the real world” (355-385). Another important contributor to the foundation of AR as a concept and as a research field is industrial engineer Paul Milgram. He proposes a comprehensive and frequently cited definition of “Mixed Reality” (MR) via a schema that covers the entire spectrum of situations spanning the “continuum” between actual reality and virtual reality, with “augmented reality” and “augmented virtuality” lying between the two poles (283).

A remark regarding terminology (MR or AR): especially in the non-scientific literature, authors do not always explain their preference for either MR or AR. This suggests that the two terms are understood as synonymous, but it also provides evidence for my argument that, outside of the technical literature, AR is considered a concept rather than a technology. Here, I use the term AR rather than MR because the phrase AR (and the idea of augmentation it carries) is better suited to capturing the convergence effect. As I will demonstrate in the following lines, the process of augmentation (i.e. the convergence effect) is the result of an enhancement of the possibilities of perceiving and understanding the world—through adding data that augment the perception of reality—and not simply the product of a mix. Nevertheless, there is surely something “mixed” about this experience, if only because it combines reality and virtuality.

The experiential result of combining reality and virtuality in the AR process is what media theorist Lev Manovich calls an “augmented space,” a perceptual liminal zone which he defines as “the physical space overlaid with dynamically changing information, multimedia in form and localized for each user” (219). The author derives the term “augmented space” from the term AR (already established in the scientific literature), but he sees AR, and implicitly augmented space, not as a strictly defined technology, but as a model of visuality concerned with the intertwining of the real and virtual: “it is crucial to see this as a conceptual rather than just a technological issue – and therefore as something that in part has already been an element of other architectural and artistic paradigms” (225-6). Indeed, it is hard to believe that AR appeared in a void or that its emergence is strictly tied to certain advances in technological research. AR—as an artistic manifestation—is informed by other attempts (not necessarily digital) to merge the real and the fictional into a unitary perceptual entity, particularly by installation art and Virtual Reality (VR) environments.

With installation art, AR shares the same spatial strategy and scenographic approach—both construct “fictional” areas within material reality, that is, a sort of mise-en-scène that is aesthetically and socially produced and centered on the active viewer. From the media installationist practice of the previous decades, AR inherited a way of establishing a closer spatio-temporal interaction between the setting, the body and the electronic image (see for example Bruce Nauman’s Live-Taped Video Corridor [1970], Peter Campus’s Interface [1972], Dan Graham’s Present Continuous Past(s) [1974], Jeffrey Shaw’s Viewpoint [1975], or Jim Campbell’s Hallucination [1988]).

On the other hand, VR plays an important role in the genealogy of AR for sharing the same preoccupation with illusionist imagery and—at least in some AR projects—for providing immersive interactions in “expanded image spaces experienced polysensorily and interactively” (Grau 9). VR artworks such as Paul Sermon’s Telematic Dreaming (1992), Char Davies’ Osmose (1995), Michael Naimark’s Be Now Here (1995-97), Maurice Benayoun’s World Skin: A Photo Safari in the Land of War (1997), and Luc Courchesne’s Where Are You? (2007-10) are significant examples of the way in which the viewer can be immersed in “expanded image-spaces.” Offering no view of the exterior world, such works try instead to reduce as much as possible the critical distance the viewer might have to the image he/she experiences.

Indeed, AR emerged in great part from the artistic and scientific research efforts dedicated to VR, but also from the technological and artistic investigations of the possibilities of blending reality and virtuality conducted in the previous decades. For example, in the 1960s, computer scientist Ivan Sutherland played a crucial role in the history of AR, contributing to the development of display solutions and tracking systems that permit a better immersion within the digital image. Another important figure in the history of AR is computer artist Myron Krueger, whose experiments with “responsive environments” are fundamental, as they proposed a closer interaction between the participant’s body and the digital object. More recently, architect and theorist Marcos Novak contributed to the development of the idea of AR by introducing the concept of “eversion,” “the counter-vector of the virtual leaking out into the actual.”

Today, AR technological research and the applications made available by various developers and artists focus more and more on mobility and ubiquitous access to information rather than on immersivity and illusionist effects. A few examples of mobile AR include Layar and Wikitude—“world browsers” that overlay site-specific information in real time on a live view (video stream) of a place; Streetmuseum (launched in 2010) and Historypin (launched in 2011)—applications that insert archive images into the street view of the specific location where the old images were taken; and Google Glass (launched in 2012)—a device that provides the wearer access to Google’s key Cloud features, in situ and in real time.

Recognizing the importance of these technological developments and of artistic manifestations such as installation art and VR as predecessors of AR, we should emphasize that AR moves beyond these artistic and technological models. AR extends the installationist precedent by proposing a consistent and seamless integration of informational elements into the very physical space of the spectator, and at the same time rejects the idea of secluding the viewer in a completely artificial environment, as VR systems do, by opening the perceptual field to the surrounding environment. Instead of leaving the viewer in a sort of epistemological “lust” within the closed limits of immersive virtual systems, AR sees virtuality rather as a “component of experiencing the real” (Farman 22). Thus, the questions that arise—and which this essay aims to answer—are: Is there a specific spatial dimension in AR? If yes, can we distinguish it as a different—if not new—spatial and aesthetic paradigm? Can AR’s intricate topology be the place not only of convergence, but also of possible tensions between its real and virtual components, between the ideal of obtaining perceptual continuity and the inherent (technical) limitations that undermine that ideal?

Converging Spaces in the Artistic Mode: Between Continuum and Discontinuum

As key examples of the way in which AR creates a specific spatial experience—in which convergence appears as a fluctuation between continuity and discontinuity—I discuss three of the most accomplished works in the field, works that, significantly, also expose the essential role played by the interface in providing this experience: Living-Room 2 (2007) by Jan Torpus, Under Scan (2005-2008) by Rafael Lozano-Hemmer, and Hans RichtAR (2013) by John Craig Freeman and Will Pappenheimer. The works illustrate the three main categories of interface used for the AR experience: head-attached displays, spatial displays, and hand-held displays (Bimber and Raskar 71-92). These types of interface—together with the whole array of adjacent devices, software and tracking systems—play a central role in determining the forms and outcomes of the user’s experience, and consequently inform, to a certain measure, the aesthetic and socio-cultural interpretative discourse surrounding AR. Indeed, it is not the same to have an immersive but solitary experience as to have a mobile and public experience of an AR artwork or application.

The first example is Living-Room 2, an immersive AR installation realized by a collective coordinated by Jan Torpus in 2007 at the University of Applied Sciences and Arts FHNW, Basel, Switzerland. The work consists of a built “living-room” with pieces of furniture and domestic objects that are perceptually augmented by means of a “see-through” Head Mounted Display. The viewer perceives at the same time the real room and a series of virtual graphics superimposed on it, such as illusionist natural vistas that “erase” the walls, or strange creatures that “invade” the living-room. The user can select different augmenting “scenarios” by interacting with both the physical interfaces (the real furniture and objects) and the graphical interfaces (provided as virtual images in the visual field of the viewer and activated via a handheld device). For example, in one of the proposed scenarios, the user is prompted to design his/her own extended living room by augmenting the content and the context of the given real space with different “spatial dramaturgies” or “AR décors.” Another scenario offers the possibility of creating an “Ecosystem”—a real-digital world perceived through the HMD in which strange creatures virtually occupy the living-room, intertwining with the physical configuration of the set design and with the user’s viewing direction, body movement, and gestures. Particular attention is paid to the participant’s position in the room: a tracking device measures the coordinates of the participant’s location and direction of view, and the system computes occlusions of real space and congruent superimpositions of 3D images upon it.


Figure 1: Jan Torpus, Living-Room 2 (Ecosystems), Augmented Reality installation (2007). Courtesy of the artist.


Figure 2: Jan Torpus, Living-Room 2 (AR decors), Augmented Reality installation (2007). Courtesy of the artist.

In this sense, the title of the work acquires a double meaning: “living” is both descriptive and metaphoric. As Torpus explains, Living-Room is an ambiguous phrase: it can be both a living-room and a room that actually lives, an observation that suggests the idea of a continuum and of immersion in an environment where there are no apparent ruptures between reality and virtuality. Of course, immersion in these circumstances is not about the creation of a purely artificial, secluded space of experience like that of VR environments, but rather about a dialogical exercise that unifies two different phenomenal levels, real and virtual, within a (dis)continuous environment (with the prefix “dis” as a necessary provision). Media theorist Ron Burnett’s observations about the instability of the dividing line between different levels of experience—more exactly, of the real-virtual continuum—in what he calls immersive “image-worlds” have a particular relevance in this context:

Viewing or being immersed in images extend the control humans have over mediated spaces and is part of a perceptual and psychological continuum of struggle for meaning within image-worlds. Thinking in terms of continuums lessens the distinctions between subjects and objects and makes it possible to examine modes of influence among a variety of connected experiences. (113)

It is precisely this preoccupation with lessening any (or most) distinctions between subjects and objects, and between real and virtual spaces, that lies at the core of every artistic experiment under the AR rubric. The fact that this distinction is never entirely erased—as Living-Room 2 proves—is part of the very condition of AR. The ambition to create a continuum is, after all, not about producing perfectly homogenous spaces but, as Ron Burnett points out (113), “about modalities of interaction and dialogue” between real worlds and virtual images.

Another way of framing the same problematic of creating a provisional spatial continuum between reality and virtuality, but this time in a non-immersive fashion (i.e. with projective interface means), occurs in Rafael Lozano-Hemmer’s Under Scan (2005-2008). The work, part of the larger series Relational Architecture, is an interactive video installation conceived for outdoor and indoor environments and presented in various public spaces. It is a complex system composed of a powerful light source, video projectors, computers, and a tracking device. The powerful light casts shadows of passers-by within the dark environment of the work’s setting. The tracking device indicates where viewers are positioned and permits the system to project different video sequences onto their shadows. Shot in advance by local videographers and producers, the filmed sequences show full images of ordinary people moving freely, but also watching the camera. As they appear within pedestrians’ shadows, the figurants interact with the viewers, moving and establishing eye contact.


Figure 3: Rafael Lozano-Hemmer, Under Scan (Relational Architecture 11), 2005. Shown here: Trafalgar Square, London, United Kingdom, 2008. Photo by: Antimodular Research. Courtesy of the artist.


Figure 4: Rafael Lozano-Hemmer, Under Scan (Relational Architecture 11), 2005. Shown here: Trafalgar Square, London, United Kingdom, 2008. Photo by: Antimodular Research. Courtesy of the artist.

One of the most interesting attributes of this work with respect to the question of AR’s (im)possible perceptual spatial continuity is its ability to create an experientially stimulating and conceptually sophisticated play between illusion and the subversion of illusion. In Under Scan, the integration of video projections into the real environment via the active body of the viewer is aimed at tempering as much as possible any disparities or dialectical tensions—that is, any successive or alternative reading—between real and virtual. Although non-immersive, the work fuses the two levels by provoking an intimate but mute dialogue between the real, present body of the viewer and the virtual, absent body of the figurant via the ambiguous entity of the shadow. The latter is an illusion (it marks the presence of a body) that is transcended by another illusion (the video projection). Moreover, being “under scan,” the viewer inhabits both the “here” of the immediate space and the “there” of virtual information: “the body” is equally a presence in flesh and bones and an occurrence in bits and bytes. But, however convincing this reality-virtuality pseudo-continuum might be, spatial and temporal fragmentations inevitably persist: there is always a certain break at the phenomenological level between the experience of real space, the bodily absence/presence in the shadow, and the displacements and delays of the video image projection.


Figure 5: John Craig Freeman and Will Pappenheimer, Hans RichtAR, augmented reality installation included in the exhibition “Hans Richter: Encounters”, Los Angeles County Museum of Art, 2013. Courtesy of the artists.


Figure 6: John Craig Freeman and Will Pappenheimer, Hans RichtAR, augmented reality installation included in the exhibition “Hans Richter: Encounters”, Los Angeles County Museum of Art, 2013. Courtesy of the artists.

The third example of an AR artwork that engages the problem of real-virtual spatial convergence as a play between perceptual continuity and discontinuity, this time through a hand-held mobile interface, is Hans RichtAR by John Craig Freeman and Will Pappenheimer. The work is an AR installation included in the exhibition “Hans Richter: Encounters” at the Los Angeles County Museum of Art in 2013. The project recreates the spirit of the 1929 exhibition held in Stuttgart entitled Film und Foto (“FiFo”), for which avant-garde artist Hans Richter served as film curator. Featured in the augmented reality is a re-imaging of the FiFo Russian Room designed by El Lissitzky, where a selection of Russian photographs, film stills and actual film footage was presented. Users access the work through tablets made available at the exhibition entrance. Pointing the tablet at the exhibition and moving around the room, the viewer discovers that a new, complex installation is superimposed on the screen over the existing installation and gallery space at LACMA. The work effectively recreates and interprets the original design of the Russian Room, with its scaffoldings and surfaces at various heights, while virtually juxtaposing photography and moving images, to which the authors have added creative elements of their own. Manipulating and converging real space and virtual forms in an illusionist way, AR is able—as one of the artists maintains—to destabilize the way we construct representation. Indeed, the work makes a statement about visuality that complicates the relationship between the visible object and its representation and interpretation in the virtual realm, one that actually shows the fragility of establishing an illusionist continuum, of a perfect convergence between reality and represented virtuality, whatever the means employed.

AR: A Different Spatial Practice

Regardless of the degree of “perfection” the convergence process entails, what we can safely assume—following the examples above—is that the complex nature of AR operations permits a closer integration of virtual images within real space, one that, I argue, constitutes a new spatial paradigm. This is the perceptual outcome of the convergence effect, that is, the process and the product of consolidating different—and differently situated—elements of the real and virtual worlds into a single space-image. Of course, illusion plays a crucial role, as it makes permeable the perceptual limit between the represented objects and the material spaces we inhabit. By making the interface transparent—in both the proper and figurative senses—and integrating it into the surrounding space, AR “erases” the medium, with the effect of suspending—at least for a limited time—the perceptual (but not ontological!) differences between what is real and what is represented.

These aspects are what distinguish AR from other technological and artistic endeavors that aim at creating more inclusive spaces of interaction. Unlike the CAVE experience (a display solution frequently used in VR applications), which isolates the viewer within the image-space, in AR virtual information is coextensive with reality. As the example of Living-Room 2 shows, regardless of the degree of immersivity, in AR there is no dismissing of the real in favor of an ideal view of a perfect and completely controllable artificial environment, as in VR. The “redemptive” vision of a total virtual environment is replaced in AR with the open solution of sharing physical and digital realities in the same sensorial and spatial configuration. In AR the real is not denounced but reflected; it is not excluded, but integrated.

Yet, AR also distinguishes itself from other projects that presuppose a real-world environment overlaid with data, such as urban surfaces covered with screens, Wi-Fi enabled areas, or video installations that are not site-specific and viewer-inclusive. Although closely related to these types of projects, AR remains different: its spatiality is not simply a “space of interaction” that connects, but one that integrates real and virtual elements. Unlike other non-AR media installations, AR does not merely place the real and virtual spaces in adjacent positions (or replace one with the other), but makes them perceptually convergent in an—ideally—seamless way (and here Hans RichtAR is a relevant example).

Moreover, as Lev Manovich notes, “electronically augmented space is unique – since the information is personalized for every user, it can change dynamically over time, and it is delivered through an interactive multimedia interface” (225-6). Nevertheless, as our examples show, any AR experience is negotiated in the user-machine encounter with various degrees of success and sustainability. Indeed, the realization of the convergence effect is sometimes problematic, since AR is never perfectly continuous, spatially or temporally. The convergence effect is the momentary appearance of a continuity that will never take full effect for the viewer, given the internal (perhaps inherent?) tensions between the ideal of seamlessness and the mostly technical inconsistencies in the visual construction of the pieces (such as real-time inadequacy or real-virtual registration errors). We should note that many criticisms of AR visualization systems (be they practical applications or artworks) are directed at this particular aspect: the imperfect alignment between reality and digital information in the augmented space-image. However, not only can AR applications function with an estimated (and acceptable) registration error, but, I would argue, such visual imperfections testify to a distinctive aesthetic aspect of AR. The alleged flaws can be assumed—especially in artistic AR projects—as the “trace,” the “tool’s stroke,” that reflects the unique play between illusion and its subversion, between the transparency of the medium and its reflexive strategy. In fact, this is what defines AR as a different perceptual paradigm: the creation of a convergent space—which will remain inevitably imperfect—between material reality and virtual information.


Azuma, Ronald T. “A Survey of Augmented Reality.” Presence: Teleoperators and Virtual Environments 6.4 (Aug. 1997): 355-385.

Benayoun, Maurice. World Skin: A Photo Safari in the Land of War. 1997. Immersive installation: CAVE, computer, video projectors, 1 to 5 real photo cameras, 2 to 6 magnetic or infrared trackers, shutter glasses, audio system, Internet connection, color printer. Maurice Benayoun, Works.

Bimber, Oliver, and Ramesh Raskar. Spatial Augmented Reality: Merging Real and Virtual Worlds. Wellesley, Massachusetts: AK Peters, 2005. 71-92.

Burnett, Ron. How Images Think. Cambridge, Mass.: MIT Press, 2004.

Campbell, Jim. Hallucination. 1988-1990. Black and white video camera, 50 inch rear projection video monitor, laser disc players, custom electronics. Collection of Don Fisher, San Francisco.

Campus, Peter. Interface. 1972. Closed-circuit video installation, black and white camera, video projector, light projector, glass sheet, empty, dark room. Centre Georges Pompidou Collection, Paris, France.

Courchesne, Luc. Where Are You? 2005. Immersive installation: Panoscope 360°, a single-channel immersive display; a large inverted dome, a hemispheric lens and projector, a computer and a surround sound system. Collection of the artist.

Davies, Char. Osmose. 1995. Computer, sound synthesizers and processors, stereoscopic head-mounted display with 3D localized sound, breathing/balance interface vest, motion capture devices, video projectors, and silhouette screen. Char Davies, Immersence, Osmose.

Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media. New York: Routledge, 2012.

Graham, Dan. Present Continuous Past(s). 1974. Closed-circuit video installation, black and white camera, one black and white monitor, two mirrors, microprocessor. Centre Georges Pompidou Collection, Paris, France.

Grau, Oliver. Virtual Art: From Illusion to Immersion. Translated by Gloria Custance. Cambridge, Massachusetts, London: MIT Press, 2003.

Hansen, Mark B.N. New Philosophy for New Media. Cambridge, Mass.: MIT Press, 2004.

Harper, Douglas. Online Etymology Dictionary, 2001-2012.

Manovich, Lev. “The Poetics of Augmented Space.” Visual Communication 5.2 (2006): 219-240.

Milgram, Paul, Haruo Takemura, Akira Utsumi, Fumio Kishino. “Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum.” SPIE [The International Society for Optical Engineering] Proceedings 2351: Telemanipulator and Telepresence Technologies (1994): 282-292.

Naimark, Michael. Be Now Here. 1995-97. Stereoscopic interactive panorama: 3-D glasses, two 35mm motion-picture cameras, rotating tripod, input pedestal, stereoscopic projection screen, four-channel audio, 16-foot (4.87 m) rotating floor. Originally produced at Interval Research Corporation with additional support from the UNESCO World Heritage Centre, Paris, France.

Nauman, Bruce. Live-Taped Video Corridor. 1970. Wallboard, video camera, two video monitors, videotape player, and videotape, dimensions variable. Solomon R. Guggenheim Museum, New York.

Novak, Marcos. Interview with Leo Gullbring. Calimero journalistic och fotografi, 2001.

Sermon, Paul. Telematic Dreaming. 1992. ISDN telematic installation: two video projectors, two video cameras, two beds. The National Museum of Photography, Film & Television, Bradford, England.

Shaw, Jeffrey, and Theo Botschuijver. Viewpoint. 1975. Photo installation. Shown at 9th Biennale de Paris, Musée d'Art Moderne, Paris, France.


Augmented Reality, space, representation, visuality, spectatorship

Copyright (c) 2013 Horea Avram

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

  • M/C - Media and Culture
  • Supported by QUT - Creative Industries
  • Copyright © M/C, 1998-2016
  • ISSN 1441-2616