Dr Aycock's Bad Idea

Is the Good Use of Computer Viruses Still a Bad Idea?

Following the deep-seated analogy between biological and computer parasites, it seems inconceivable that anyone would want to deliberately infect a computer. It’s a bad idea, right? Well, not necessarily. The University of Calgary (UoC) wants to challenge the received wisdom of security experts—the judgment that there is no such thing as a good virus—by encouraging its students to write and test malevolent viruses. Still following the biological analogy, Dr John Aycock, the academic who runs the program at UoC, likens the approach to ‘what medical researchers do to combat the latest biological viruses such as Sars’. He argues that ‘before you can develop a cure, you have to understand what the virus is and how it spreads and what motivates those who write malicious software’ (Fried). The reaction from security experts is, not surprisingly, one of dismay—for them, all viruses are bad.

Nonetheless, it is Dr Aycock’s provocation that may provide a much-needed alternative solution to one of the biggest problems facing the network society. As many affiliates of this composite society are increasingly discovering, the network is a present-day communication paradox. It is a vast, fast, and efficient logic machine, but it simultaneously provides the perfect medium for viral contagion. Moreover, despite the efforts of a billion-dollar anti-virus industry, current reactive solutions are clearly not working. A report in the UK (DTI) concludes that despite the considerable uptake of anti-virus software—93% of UK companies have it—70% of all security breaches come from viral-like programs. (The DTI report claims that ‘two-thirds of organisations that had any security incident said that a virus infection was their worst one’. In comparison, a 1991 Gallup survey [in Louw and Duffy] showed that of 500 of the UK’s largest businesses, 24% had experienced a viral attack.) Viruses, it seems, are progressively more capable of ‘bypassing traditional anti-virus software and targeting vulnerabilities’. However, Dr Aycock argues that academics should not bury their heads in the sand. They should openly recognise that ‘reacting to the virus is simply not working’ and instead support pro-active research into the creation of computer viruses. Within the bad idea itself there may be a good solution. Naturally, the experts are outraged by what they perceive as an incursion beyond the ethical norms of the computer world.

These recent events are part of an ongoing good virus/bad virus debate. Network controllers have long argued for the ethical containment of viral code. Unlocking the secrets of the virus writer is, according to the anti-virus community, a bad idea. In the early 1980s, when Fred Cohen began experimenting with self-replicating code as part of his PhD, he experienced the moral indignation of the computer community. Cohen’s viral research at the University of Southern California (USC) referenced von Neumann’s seminal work on cellular automata (1948) and the Darwinian computer games played out at Bell Labs in the 1950s and 1960s (Dewdney). Cohen was working on a similar, but simplified, idea: a program that could insert itself into other programs and assume control of them. In doing so, he quickly realised the potential problem of the computer virus.
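The core insight here—code whose output is a copy of itself—can be demonstrated harmlessly. A quine reproduces its own source on standard output instead of inserting itself into other programs; the following minimal Python sketch is an illustration of self-replication in this benign sense, not Cohen’s actual experimental code:

```python
# A quine: a program whose output is exactly its own source code.
# Self-replication here is confined to stdout; nothing is infected.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is that the string holds a template of the whole program, and `%r` re-emits that template in quoted form, closing the self-referential loop.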

I’d been working on computer security for a long time – I knew how systems worked, and how different attacks worked… But it came over me. Anyone who writes one of these things would have something that could replicate everywhere (Spiller 172).

After seeing the results of his experiments, network controllers at USC banned him from repeating any similar exercises. Moreover, after completing his thesis in 1985, he could not get it published in a journal until 1987 (Spiller 176; it finally appeared in Computers and Security), and he suffered a ‘virtual lockout’ in the funding of further research. Cohen later referred to this ‘apparent fear reaction’ as the result of trying to solve technical problems with policy solutions. At the time, Cohen used the same biological analogy as Dr Aycock to defend his research into computer viruses.

The benefits of biological research on the quality of life is indisputable, and the benefits of computer virus research may some day pay off in the quality of our information systems, and by extension, our well being. (Cohen in Trends in Computer Virus Research)

In the early 1990s, the network seemed to be a more open-minded society. Cohen was able to consider computer viruses in terms of the legitimacy of friendly contagion. The so-called benevolent virus appeared in his book A Short Course on Computer Viruses (Cohen 15), conceived as a viral alternative to Turing logic. At the same time, physicist Mark Ludwig, driven by his desire to make technical information about computer viruses freely available, published The Little Black Book of Computer Viruses. Wired Magazine championed Ludwig’s ‘gruellingly meticulous analyses of viral performance and technique’ (Dibbell). In 1995, Tom Ray, a biologist turned computer programmer, created the viral-like Tierra program, an evolutionary race between digital hosts and parasites. Ray proposed that Tierra should exist in ‘a very large, complex and inter-connected region of cyberspace… inoculated with digital organisms, which will be allowed to evolve freely through natural selection’ (Ray)—ironically, something similar to what we are currently experiencing.
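The selection dynamic Ray describes can be sketched in miniature. Tierra’s organisms were self-replicating machine-code programs; the toy Python loop below is only an illustration of the underlying principle—imperfect copying plus selection pressure—with an arbitrary target string standing in for fitness (the names and parameters are illustrative, not Tierra’s):

```python
import random

random.seed(1)
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = "replicator"  # arbitrary stand-in for 'fitness' in this toy model

def mutate(genome: str) -> str:
    """Copy a genome, with a small chance of a one-character copying error."""
    if random.random() < 0.3:
        i = random.randrange(len(genome))
        return genome[:i] + random.choice(ALPHABET) + genome[i + 1:]
    return genome

def fitness(genome: str) -> int:
    """Here, organisms 'survive' by resembling the target string."""
    return sum(a == b for a, b in zip(genome, TARGET))

# Seed a population of random genomes, then let selection act on
# imperfect copies, generation after generation.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]
best = population[0]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:
        break
    # The fittest half replicates (with mutation) to refill the population.
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(25)]
```

Because the fittest genomes are carried over unchanged, the best fitness never decreases; improvement comes entirely from occasional lucky copying errors—evolution by natural selection in a few dozen lines.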

In the early days, viral researchers were prompted to defend their work. In an interview in the virus webzine Alive in 1994, Cohen argued that ‘symbol sequences without any known malicious side effects’ could not be considered a bad idea. Ludwig contended that people were ‘brainwashed into believing that virus = bad…’ (Dibbell). However, more foreboding voices soon joined the debate. Spafford warned that while there is legitimate ‘scientific interest’ in viruses as a ‘means of modelling life’ and developing epidemiological defences, fellow researchers should heed the dangers of further experimentation. True viruses are inherently unethical. For Spafford, the idea of a ‘good virus’ is an oxymoron.

Following the exponential growth in malicious attacks in the mid-1990s, the idea of a good virus drifted considerably from the centre to the margins of the network society. In 1996, the IBM anti-virus researcher Sarah Gordon criticised Ludwig for elevating the status of the computer virus above the digital equivalent of a can of spray paint. With estimated costs to the worldwide Information Technology industry of $13 billion in 2001 (Pipkin 41) and the destructive force of a single worm costing tens of millions of dollars, not surprisingly the word ‘virus’ has developed a negative connotation. Even Cohen has realised that any acceptance of the benevolent virus would require considerable linguistic embellishment.

Try ‘intelligent agents’, ‘artificial life’, ‘adaptive distributed networks’, and similar names and you will be far more successful. (Fred Cohen’s response to email questions posed by the author in June 2002)

Within this heated climate, it was highly probable that Dr Aycock would stand accused of peddling a bad idea. Graham Cluley, a consultant for Sophos, rhetorically questions UoC’s ethics by asking, ‘should we teach kids how to break into cars if they’re interested in becoming a policeman one day?’ (Kelly). The anti-virus experts argue that by teaching how to ‘attack and destroy’ rather than ‘prevention, protection, and cure’, UoC will simply encourage the widespread contagion of the bad idea. However, UoC questions the naivety of this expert opinion, arguing that any ‘reasonably intelligent individual’ can access this information without attending university for four years, and that it is ‘dangerous to think that virus writers can be stopped without a better understanding of how they operate.’

Maybe UoC is doing what academia does best: considering the virus in a new and unfamiliar light, clearing away ethical baggage, and crossing the moral boundaries of the network society. Deep-seated as it is, the biological analogy only goes so far. The network and the virus writer have developed their own biology, one that is both technologically and culturally shaped. The search for a viral cure has to move away from the reactive dissection of existing viral anatomies; researchers need to look towards a pro-active engineering model that incorporates the complex human-computer assemblage. As one maverick expert suggests:

Tomorrow’s experts need to learn to think beyond and develop better applications and operating systems that proactively block potential attack vectors rather than waiting to be attacked and then responding (a ‘security expert’ discussing the UoC programme in http://www.tla.ch/TLA/NEWS/2004sec/20040914Writting-Viruses.htm, 14 Sep. 2004)

While many other types of furtive program, like ‘bots’, ‘crawlers’, and ‘spiders’, legitimately creep behind our screens, the virus is seen as a digital pariah. Whether the viral algorithm is benevolent or malevolent no longer seems to matter: the vast majority of the network society regards it as a bad idea. Nevertheless, Dr Aycock’s experiment with both the cultural and technological elements of the virus could produce a pro-active immunisation program. Whatever the conclusion, he should be applauded for attempting to carry out this experiment while beleaguered by so many experts who judge innovation in terms of rigid moral outcomes.