Kathy works entirely online. She is an indefatigable worker, never so engrossed in her own pursuits that she denies another’s request for assistance. Her expertise is focused, and her suggestions are generally valuable. She constantly reviews her communications, searching for ways to increase her effectiveness. An analysis of her interactions, however, raises concerns. Approximately 7% of the communications Kathy receives are insulting and nearly 20% are sexual in nature (Brahnam). She is frequently called a bitch and told her ideas are stupid. Although Kathy refuses to talk about sex, her comments are often twisted and given unintended sexual significance.
Why is Kathy bombarded with so many verbal assaults? Could part of the reason be that her communications are electronically mediated, and that this encourages what Suler calls toxic disinhibition, i.e., behaviour characterised by the acting out of forbidden desires and the unrestrained expression of anger and hatred? Is her job performance to blame for some of the insults? An examination of her interactions reveals that Kathy occasionally has difficulty understanding requests and often uses incorrect and sub-standard grammar. Is the prevalence of foul language due to Kathy being young and female? If she were older and male, or androgynous, would her colleagues respect her more? Or is this barrage of electronic nastiness simply a natural consequence, the way people will behave when asked to work with human-like computing machines?
Embodied Collaborative Agents
Amer. Dr Poole, what’s it like living for the better part of a year in such close proximity with HAL?
Poole. Well, it’s pretty close to what you said about him earlier. He is just like a sixth member of the crew—very quickly get adjusted to the idea that he talks, and you think of him—uh—really just as another person.
Kubrick and Clarke, 2001: A Space Odyssey
For over a century, science fiction has painted vivid pictures of what it would be like to work alongside computers. Although many a tale ends with computers taking over the world, depictions of collegial relationships between human beings and their artificial helpmates are equally familiar.
This amiable vision of human-computer interaction is what motivates much current research into embodied collaborative agents. These are programs, like Kathy, that run independently of user control, that look and behave like people, and that are designed to assist users in solving complex problems and in performing complicated tasks. For these agents to succeed, they must be socially intelligent, capable of building and sustaining friendly working relationships, and competent in what they do.
Researchers are aware that building long-term human-computer relationships is difficult (Bickmore and Picard) and that users are often hostile towards interactive agents (Angeli et al.). These problems are often blamed on technological limitations that irritate the user and disrupt the user’s suspension of disbelief. Users seem to demand a higher degree of fidelity from anthropomorphic interfaces than they do from conventional ones. It is assumed that once these technological issues are resolved, the social cues exhibited by the agents will automatically call forth socially appropriate responses.
The assumption that people will behave nicely when given a believable interface is largely based on the media equation, the idea that people treat media the same way they treat people (Reeves and Nass). The media equation claims that the same rules governing interpersonal relationships apply to human-computer relationships. If it is impolite to criticise a person too harshly face-to-face, for instance, then it follows that people will soften their evaluations of a computer’s performance when delivering them in that computer’s presence. Research demonstrates that people do, in fact, apply this rule, as well as many other social rules, in their dealings with computers.
There are situations, however, where the media equation fails. This is particularly evident in cases of abusive behaviour. Bartneck et al., for example, in their replication of the Milgram obedience experiment, found that subjects had no qualms about administering shocks to a rather cute humanoid robot placed in an electric chair. No matter how loudly the robot yelped and pleaded for mercy when zapped, subjects remained uniformly marble-hearted in obeying the experimenter’s directive to administer yet more electricity. Clearly the subjects in this experiment were fully aware that the robot was not a person.
Rather than attempting to understand human-computer interaction through the filter of the media equation, or social theory generally, it might be more profitable to investigate theories that explain how human beings relate to things, such as animism, anthropomorphism, personification, and semiotics. In the next section, I argue that an anthropomorphic tension is at odds with the suspension of disbelief, at least when dealing with animated agents, and that this tension provides a motivating ground for abusing agents. If this proves correct, users may deride and abuse collaborative agents no matter how veridical the interface.
Anthropomorphic Tension
People in the modern world are pulled in two directions when confronted with things. On the one hand, there is the tendency to anthropomorphise, i.e., to attribute humanlike qualities to non-human entities. Possibly because of its evolutionary value (failing to perceive a human being hidden in the trees could prove deadly), anthropomorphism is a constant perceptual bias, a sort of cognitive default (Guthrie; Caporael and Heyes). On the other hand, there is strong societal pressure, especially in the West, to banish the anthropomorphic for the sake of objectivity (Davis; Spada). Anthropomorphic thinking is considered archaic and primitive (Fisher; Caporael). Children are allowed to indulge in it, but adults, in general, are expected to maintain a clear demarcation between the self and the world. As Guthrie notes, “Once we decide a perception is anthropomorphic, reason dictates that we correct it” (76).
It is interesting to note how children learn to discard anthropomorphic thinking. One way apparently involves torturing cherished playthings. A recent study conducted at the University of Bath discovered that young girls like mutilating and torturing Barbie. According to the researchers, “the girls we spoke to see Barbie torture as a legitimate play activity … The types of mutilation are varied and creative, and range from removing the hair to decapitation, burning, breaking, and even microwaving” (Radford). Why is Barbie tortured? The researchers observed that many of these girls see Barbie as a childish plaything. They go on to explain that “On a deeper level, Barbie has become inanimate. She has lost any individual warmth that she might possess if she were perceived as a singular person” (Radford). In other words, by dehumanising the very things they once animated, the little girls were simply learning to become objective grown-ups.
Although anthropomorphic thinking begins in early childhood, it is never completely outgrown; rather, it pervades adult thought, with much of it remaining unconscious, even in science (Searle). It is not clear what strategies people employ to keep the anthropomorphic tendency in check, and anthropomorphism itself attracts little scholarly attention. As Guthrie notes, “that such an important and oft-noted tendency should bring so little close scrutiny is a curiosity with several apparent causes. One is simply that it appears as an embarrassment, an irrational aberration of thought of dubious parentage, that is better chastened and closeted than publicly scrutinized” (53-54).
The tension between the tendency to anthropomorphise and the societal pressure to remain objective has implications for human-computer interaction. First, the anthropomorphic tension jeopardises the credibility and trustworthiness of the interactive agent. If the user’s relationship with the collaborative agent is based on a dubious, even embarrassing, mode of cognition, as Guthrie puts it, then that relationship will remain suspect in many workplace contexts.
Second, the anthropomorphic tension motivates abuse and exposes the agent to attack. The agent, as illustrated in the figure below, is situated between the tendency to anthropomorphise and the pressure to objectify. Anthropomorphism animates the agent, producing the desired suspension of disbelief. Developers of human-like interfaces rely on this impulse and work to strengthen it by making the technology transparent. Although improved technology will certainly enhance believability, the pressure to objectify will most likely succeed in periodically disrupting the suspension of disbelief.
[Figure: Anthropomorphic tension and the collaborative agent]
What happens to the agent when believability is disrupted? Examination of user/agent interaction logs shows that the agent becomes transparent or displaced to some degree. What stands behind the agent (the lowly machine, the programmer/creator, the organisation/owner, the social stereotypes evoked by the agent’s embodiment, and so on) is then often subjected to a barrage of verbal abuse. The agent provides users with an opportunity to express opinions and indulge in behaviours normally prohibited in the workplace. This abuse occurs in a socially and psychologically safe space, since in truth the agent is an insensate object and the user is talking to no one real.
Thus, when it comes to collaborating with Kathy, users may find it far more gratifying to treat her not as a valuable co-worker or “just another member of the crew,” but rather as a fun thing to bash. And although the organisation may disapprove of the waste of time, society at large will find it hard, without reverting to anthropomorphic thinking, to knock it.