The year is 1989.
You sit inside a small uptown courthouse.
You’re a journalist for the New York Times.
Outside, the city moves as it always does.
You feel the hum of traffic through the wooden bench.
Footsteps echo off wet pavement.
The sound reverberates through single-glazed windows.
A faint breeze cuts the air after a week of relentless rain.
To everyone else, March 7th feels ordinary.
But inside this courtroom, something unprecedented is unfolding.
Five years earlier, William Gibson publishes Neuromancer.
He predicts a future of digital prophets and cyber cowboys.
A future where crime escapes the physical world.
A future where theft and blackmail live in the web.
Today, that future is on trial.
In this courtroom, Robert Morris becomes the world’s first indicted hacker.
Morris has just shaped the internet forever.
Roll the clock back to 1971. In a quiet office at BBN Technologies, a 30-year-old programmer named Ray Tomlinson is tinkering with a piece of experimental software written by a colleague. Tomlinson had already shaped the internet by this point in his life. He'd spent the previous year combining the network file-transfer software he'd developed, titled CPYNET, with SNDMSG, a messaging program designed to send messages to other users on the same shared computer. By taking the network protocol of CPYNET and the functionality of SNDMSG, he has just sent a message from one computer to another through the network for the first time in history. This is the birth of email, still the most common vehicle for computer virus transmission. Email, however, was not Tomlinson's most consequential contribution to cybercrime. The piece of software he is playing around with is codenamed Creeper.
BBN Technologies operated on ARPANET, an experimental network connecting government and university research computers across the United States using packet switching; this was essentially the earliest form of the internet. An academic's playground. ARPANET connected a meagre 28 devices, several of which sat inside BBN Technologies. Another researcher at BBN, Bob Thomas, had woken up one day with one question pressed into his brain: what if a computer program could walk? The idea of a piece of code moving itself from one place to another, completely unprompted, was at the time an outlandish concept. Thomas built the first organic, moving program metres from Ray Tomlinson's desk. Creeper, as Thomas called it, had no malicious intent behind it. Its output was not that of a cryptominer, blackmailer, or keylogger. Creeper had one simple job: printing a line of text to the mainframe's teletype. "I'm the Creeper. Catch me if you can." Thomas had been contracted to develop a resource-sharing program, called RSEXEC, so that users could build applications that would move to, and run on, any other computer with RSEXEC installed. The goal was efficiency. Computers of the era had extremely limited processing power, so workloads could be shifted across the network. A program, for example, might be moved from the East Coast to the West Coast to avoid peak daytime usage. The concept was innocent.
Creeper didn't think the way modern movement-heavy programs do. It was built to work on a single small-scale network of PDP-10 mainframe computers running the TENEX operating system, using RSEXEC as its only carrier. In this sense, it already knew exactly where it could go next. To run ARPANET, a database listing each device had to be maintained manually. To run Creeper, RSEXEC had to be installed and expose an API to the software, so that it could package itself and its data and ship itself to another RSEXEC instance on another computer, which would unpack and fire up the application. Creeper selected a suitable target from this shared list, sent an API request to RSEXEC, and let the system write its executable directly onto the remote computer. It then issued a command to launch the new instance. Finally, in an act of binary brutality, it deleted itself from the host device. The cycle repeated. Creeper hopped endlessly from system to system, becoming a strange but familiar presence, a household pet of the ARPANET architecture.
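The hop cycle above can be sketched as a toy simulation. Nothing here is Thomas's actual code: the host names, the round-robin target choice, and the `creeper_hop` function are all invented for illustration; the real Creeper ran on TENEX and moved via RSEXEC.

```python
# A toy simulation of Creeper's hop cycle: pick a target from the
# shared device list, "ship" there, announce itself, delete locally.
# Hosts and the selection rule are illustrative, not historical.

hosts = ["bbn-a", "bbn-b", "mit", "ucla"]   # the manually maintained device list

def creeper_hop(current, hosts):
    """Select the next host, move there, and vanish from the old one."""
    idx = hosts.index(current)
    target = hosts[(idx + 1) % len(hosts)]  # pick a suitable target from the list
    print("I'm the Creeper. Catch me if you can.")  # its one job on arrival
    # (stand-in for: ship executable via RSEXEC, launch the remote
    #  instance, then delete this copy from the current host)
    return target

location = "bbn-a"
for _ in range(3):           # three hops; only one live copy at any time
    location = creeper_hop(location, hosts)
print(location)              # -> "ucla"
```

The point of the sketch is the last step: each hop ends with the old copy gone, so Creeper is always exactly one place at a time.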
Watching this experiment unfold gave Ray Tomlinson an idea. Unlike Thomas's choice to have the program terminate itself after it moved, he envisioned applications where one thing led to another, and these moving programs automated their own spread as their task required. Consider an application designed to analyse data. If it detected relevant information on another machine, it could split off an alter ego, a second instance sent to process that data remotely, while the original continued working where it stood. In theory, each instance would terminate naturally once its analysis was complete. No capture mechanism would be required, assuming everything worked perfectly. But software rarely does. The application might, for example, fail to realise it had already visited a data set and run forever, endlessly replicating and jumping from site to site. By staying behind and leaving a marker telling other copies of the software that a previous visit has occurred, it provides a safety net against such errors. In layman's terms: software designed to operate across multiple computers leaves copies of itself behind to tell its future clones that it has already run its process on a given device, preventing an endless loop. A flag atop a castle. Without this failsafe, you end up, over time, with a very sad computer. Tomlinson adapted Creeper to follow this pattern of replication and automation. Tomlinson had just created the world's first computer worm.
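The "flag atop a castle" failsafe reduces to a few lines. The `visited_markers` set and `process_host` function here are hypothetical stand-ins for the marker a real distributed program might leave on each machine.

```python
# A minimal sketch of the marker pattern described above: before
# working on a machine, check for a flag left by an earlier copy;
# raise one before working so that future clones skip this host.
# All names are illustrative, not from Tomlinson's actual code.

visited_markers = set()   # stands in for marker files left on each host

def process_host(host):
    """Process a host once; skip it if a marker is already present."""
    if host in visited_markers:
        return False              # a previous clone already ran here
    visited_markers.add(host)     # raise the flag for future copies
    # ... analyse the data on this host ...
    return True

print(process_host("mit"))   # True: first visit, the work is done
print(process_host("mit"))   # False: the flag stops the endless loop
```

The second call is the whole safety net: without the marker, a confused instance would re-process the same host forever.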
A virus requires a host: air particles, human fluids, liquid. A computer virus requires a host too: an email attachment, a download, or the contents of a storage drive. Unlike the flu, however, a computer virus requires human input to execute. Modern operating systems do not allow arbitrary code execution; code only runs if the OS loader or an interpreter explicitly starts it. Viruses therefore rely on deception, often requiring some form of trickery or social engineering to convince an unsuspecting victim to open the file containing the hidden payload. This is a deliberate design choice behind modern operating systems. You can see the philosophy in the long effort to patch USB autorun exploits, where attackers copied the autorun.inf convention from CD installers onto USB devices, causing a program to run automatically the moment the drive was attached. The removal of autorun and similar exploits has bred a feeling of security among netizens. Many surf the web feeling untouchable so long as they never run an untrusted file.

Worms are not so kind. They're organic, volatile, slimy. By exploiting weaknesses in network services, worms spread without human involvement. They trigger remote code execution, moving directly from machine to machine. Once a target is identified, the worm uses a vulnerability, or a misconfigured service, to install a copy of itself. The key difference between a worm and other malware is that a worm doesn't need a person to move it. It doesn't rely on someone opening a file or clicking a link. The worm does the work itself. Every infected machine becomes a new source. From each foothold, the worm repeats the process: scanning for additional systems, exploiting weaknesses, and copying itself again. The spread is exponential. One infection becomes many, rapidly. Even if a worm isn't designed to cause damage, the act of spreading can still be destructive. It consumes bandwidth and processing power. Networks slow down. Systems lag.
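The worm lifecycle just described (scan, exploit, copy, repeat) reduces to a short, harmless simulation. The network map and the notion of a "vulnerable" host are invented for illustration; no real scanning or exploitation happens here.

```python
# A harmless simulation of the worm lifecycle described above: each
# infected machine scans for reachable peers, "exploits" any
# vulnerable one, installs a copy, and the new copy repeats the
# process. No human input appears anywhere in the loop.

network = {                          # host -> peers it can reach
    "a": ["b", "c"], "b": ["c", "d"], "c": ["d"], "d": [],
}
vulnerable = {"a", "b", "c", "d"}    # every host runs the flawed service

def spread(start):
    infected = {start}
    frontier = [start]
    while frontier:
        host = frontier.pop()
        for peer in network[host]:           # scan for additional systems
            if peer in vulnerable and peer not in infected:
                infected.add(peer)           # exploit + install a copy
                frontier.append(peer)        # each foothold repeats the cycle
    return infected

print(sorted(spread("a")))   # every reachable vulnerable host falls
```

Note that the loop never waits on a user: each new foothold immediately becomes a new source, which is what makes the spread exponential on a large network.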
The concept of a worm was unknown to Tomlinson; the phrase wasn't even coined until four years later, in John Brunner's science fiction novel The Shockwave Rider. Despite this, Tomlinson had discovered a future problem in his present time. Creeper wouldn't stop. By the time they deleted it on one system, it had moved to another. Tomlinson decided that another interesting challenge would be to create a program to remove Creeper permanently. After all, any future software he designed using the technique would need an uninstaller. Tomlinson was unknowingly creating the world's first antivirus. Unlike modern antivirus software, built to destroy thousands of unique threats, this was built to remove just one. Tomlinson theorised that the best way to stop a self-replicating program was with an identical self-replicating program. It had one key difference: instead of printing a line of text, it deleted Creeper, moved to another device, checked for Creeper, and deleted itself if none was found. Reaper was born. The light to Creeper's dark. And as these two armies chased each other and battled endlessly across cyberspace, Tomlinson, within the confines of ARPANET, had unknowingly laid the foundations for the most notorious cyberpunks in human history. Tomlinson had just birthed a kingmaker.
1988. You sit down at a desk at Cornell University. The desk is familiar to you, and yet this corner of ARPANET is not. You've used this machine every day for the past year. It responds to your touch; you make it dance like no other person could. This is where you feel most alive. Today, though, you don't follow your usual routine. Rather than entering through the Cornell portal to access your university's resources, you slot a floppy disk into the drive and enter through an MIT portal. You try your best to spoof the point of access. Only a fool would willingly break the rules from their own machine, and yet there's nothing stopping you from pretending to use someone else's. As you transfer your work onto the MIT architecture, a thought crosses your mind. Will this work? You've built failsafes, examined the work of those before you, including Creeper, and you're confident in your code. You feel a pang of sadness: all being well, nobody will know what you've done. You're aiming for subtlety, but credit would have been nice. As you unleash the program you've been working on, a feeling of excitement hits your chest, and then nothing. A cold emptiness. It's done, just like that; the downfall of digital humanity has begun. You gather your things, log out of your university-assigned machine for the final time, and head home to get some rest.
But as you dream in the endless void of sleep, a flurry of light streaks across the web. Connections flicker. A stampede of code assaults the unprepared walls of ARPANET. You've unleashed no ordinary software.
.-- . / .- .-. . / -.-. ..- .-. .-. . -. - .-.. -.-- / ..- -. -.. . .-. / .- - - .- -.-. -.-
“We are currently under attack”.
MIT is the first to fall. Hours later, Berkeley sends out an SOS. Within 24 hours, Harvard, Princeton, Stanford, and Johns Hopkins all collapse into a state of digital anarchy. NASA rips out every network cable it can find. Military research centres crawl to a halt. Each infected device slows down. Then slower. And slower. Until (analogue off sound) they stop. This is no ordinary piece of software. It's a worm. The first malicious worm. Morris.
Morris used two exploits to spread to other systems running the extremely common Unix operating system. The first targeted the fingerd daemon. A daemon is a constantly running background process, invisible to the average user yet a cornerstone of network operations; important enough that control of one could grant endless opportunity, quiet enough that no one would notice until it was too late. The finger daemon was designed to answer a simple question: "Who is logged in on this machine?" Users could query a remote system for a list of logged-in users, along with extra information such as each user's real name and login time. Because this service was useful in networked academic environments, it was widely enabled and accessible across ARPANET. Fingerd was written in the C programming language. At the time, C programs commonly used fixed-size buffers to hold input data. A buffer is essentially short-term data storage. In the case of fingerd, the program accepted network input and copied it into a buffer without checking how much data it had received; it read requests using the C library's gets() function, which performs no bounds checking at all. Too much data, and the program implodes. This is the classic pattern that leads to a stack buffer overflow: if an attacker supplies more data than the buffer can hold, the extra data overwrites adjacent memory on the stack. Morris exploited this by sending fingerd an oversized request, a carefully crafted 536-byte string, from an already infected machine on the network. The worm, however, was not trying to crash the program; it aimed for remote code execution, the ability to make the remote machine run worm code without any human interaction. One thing the overflow granted access to was the return address, a value stored on the call stack that tells the program where to continue execution after a function finishes. If the attacker overwrites the return address with a value they control, they can redirect execution to a location they choose.
Instead of returning to the legitimate code, the program jumps to code the attacker has chosen. By exploiting this weakness, the worm effectively “injected” a small fragment of code into the fingerd process. Once executed, this injected code carried out a series of actions designed to establish a foothold on the compromised machine: it created a temporary file, wrote the worm’s bootstrap loader into that file, executed the loader file, and then removed evidence of its presence. In this way, the worm gained control of the system without ever needing to log in.
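The overflow mechanism can be modelled in miniature. This is a conceptual simulation, not the actual VAX stack layout or Morris's payload: a Python list stands in for the stack, with a fixed-size buffer followed by the return-address slot, and a bounded copy shows why replacing the unchecked gets() with a length-limited read (as the later fgets() patch did) closes the hole.

```python
# A toy model of the stack overflow described above. The "stack" is
# just a list: STACK_SIZE buffer slots, then the slot holding the
# return address. All names and values are illustrative.

STACK_SIZE = 8

def fresh_stack():
    return ["?"] * STACK_SIZE + ["legit_return_addr"]

def unchecked_copy(stack, data):
    """Copy input with no bounds check, like C's gets()."""
    for i, item in enumerate(data):
        stack[i] = item                  # happily writes past the buffer
    return stack

def bounded_copy(stack, data):
    """Copy at most STACK_SIZE items, like a length-limited fgets()."""
    for i, item in enumerate(data[:STACK_SIZE]):
        stack[i] = item                  # the return slot is never reached
    return stack

payload = list("WORMCODE") + ["attacker_addr"]   # one item too many

overflowed = unchecked_copy(fresh_stack(), payload)
print(overflowed[STACK_SIZE])   # "attacker_addr": execution is redirected

patched = bounded_copy(fresh_stack(), payload)
print(patched[STACK_SIZE])      # "legit_return_addr": the overflow is stopped
```

The overwritten slot is the whole attack: when the function "returns", it jumps wherever that slot now points, which is code the attacker supplied.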
If this failed, it had a backup plan: sendmail, the spiritual successor to Tomlinson's work on email. ARPANET ran on trust. Nobody at this stage had broken that trust on a large scale, and administrators got complacent. They left things switched on for convenience rather than security. One of those conveniences was sendmail's debug mode. Sendmail ran as a network daemon, listening for incoming mail connections. When a remote machine connected to sendmail, it could issue commands as part of the mail delivery protocol. On many systems, debug functionality was reachable through those same network interactions. From an already infected machine, the worm connected to the sendmail service on a remote host, just as a legitimate mail server would. Instead of delivering a normal message, it invoked sendmail's debug mode. Once in this mode, sendmail accepted commands that caused it to perform actions beyond simple mail routing, among them the ability to execute programs or shell commands as part of its diagnostic behaviour. By issuing carefully constructed commands through the debug interface, the worm caused sendmail to write a small bootstrap program to disk and execute it. This bootstrap then retrieved the full worm code and launched it on the target machine. Because sendmail ran with high privileges, the worm gained immediate and broad access to the system.
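The convenience the worm abused can be illustrated with a toy daemon. The command names (`DEBUG`, `MAIL`, `RUN`) and the `mail_daemon` function are invented for this sketch; sendmail's real protocol and debug interface were more involved.

```python
# A toy model of the convenience-over-security pattern described
# above: a mail service whose debug mode lets a remote session make
# it do far more than route mail. Purely illustrative, not sendmail.

def mail_daemon(commands):
    """Process a session; return a log of what the daemon actually did."""
    log, debug = [], False
    for cmd in commands:
        if cmd == "DEBUG":
            debug = True                        # the convenience left switched on
            log.append("debug enabled")
        elif cmd.startswith("MAIL "):
            log.append("queued " + cmd[5:])     # normal mail routing
        elif cmd.startswith("RUN ") and debug:
            log.append("executed " + cmd[4:])   # far beyond mail routing
        else:
            log.append("rejected " + cmd)
    return log

# A legitimate session delivers mail; the worm's session enables
# debug first, then has the daemon launch its bootstrap for it.
print(mail_daemon(["MAIL hello"]))
print(mail_daemon(["DEBUG", "RUN bootstrap"]))
```

The same `RUN` request is rejected in a normal session and executed in a debug session; that gap between intended and reachable behaviour is the whole backdoor.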
By the time you wake up, 6,000 computers are infected, around 10% of the world's internet at the time. You hide in your room. You shouldn't have been confident in your code. You've made a mistake.
Morris is spreading as intended. The entire program is operating as intended. Bar one line. Inspired by Michael Rabin's work on randomized algorithms, you added an element of randomisation to your own. Morris is designed to raise a flag atop the fortress of each machine, telling other legions of the Morris army that this castle has already been taken in the name of King Morris. The problem with this approach is that once an admin knows what the flag looks like, they can raise it themselves despite no siege taking place. You come up with what feels like a genius idea: one out of every seven times, Morris ignores the flag and sieges the castle anyway. In your head, this means Morris will brute-force its way past any fake flags. You didn't think the implications through. If one legion of the Morris army is already inside, you suddenly find yourself with two legions in the same castle. Then three. Then four. Until eventually the castle is full. Then another legion arrives, and ten more. Suddenly the entire building grinds to a halt. Nobody can move. The castle, under the weight of its soldiers, implodes.
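The 1-in-7 rule can be simulated directly. The attempt count and random seed here are arbitrary choices for the sketch; only the one-in-seven roll reflects the worm's actual logic.

```python
# A sketch of the fatal line described above: Morris checked each
# machine for an existing copy, but one time in seven it infected
# the machine anyway, so copies pile up instead of staying at one.

import random

def maybe_infect(copies_on_host, rng):
    """Return the copy count on one host after one infection attempt."""
    already_infected = copies_on_host > 0
    if already_infected and rng.randrange(7) != 0:
        return copies_on_host        # the flag is respected six times in seven
    return copies_on_host + 1        # one in seven: siege the castle anyway

rng = random.Random(0)               # fixed seed so the run is repeatable
copies = 0
for _ in range(700):                 # hundreds of attempts over one night
    copies = maybe_infect(copies, rng)
print(copies)   # dozens of legions in one castle; the machine grinds to a halt
```

With no roll, `copies` would stop at 1 forever; with the 1-in-7 roll, roughly one attempt in seven gets through, and the host ends up running far more copies than it has capacity for.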
Hour 1. Nobody knows anything. Hour 12. Students and staff at Berkeley have cut the head off the worm. Staff member Keith Bostic releases Unix patches 1 and 2 into the world. These block the debug command within sendmail. Hour 36. Patch 3 rebuilds finger so that input is read with fgets, which enforces a size limit, rather than the unbounded gets. The worm does not understand this. It has not been told to. It does not evolve. Berkeley is back online. The word doesn't get out. Nobody else can hear it.
You start to panic. You didn’t plan for any of this. You phone a friend. He doesn’t answer, but another friend does. You explain what’s happened. He explains it to the original friend. They post an anonymous bulletin to the Usenet newsgroup forum on your behalf. They offer an apology, and direct everyone towards the kill switch. It’s too late. They can’t hear you. It takes weeks for things to return to normal. Maybe it never does.
As you sit at your desk in the New York Times office, you can't help but laugh. You're trying to report on the worm, but everything in the office has ground to a standstill. The IT department is just as powerless as you are. You all sit and wait, trading stories of previous mishaps, quietly enjoying the break from monotony. The phone rings. You sigh, turn back to face your work, and pick it up. "I know who did it", the voice on the phone proclaims. "R T M". They tell you no more. You turn on your machine against office orders and have a thought that will shape the rest of your career. You search RTM within finger, and a name appears: Robert Tappan Morris. You chase the lead and find it belongs to a first-year PhD student at Cornell. A computing major. You chase further and discover a father who works for the NSA: Robert Morris Sr. You call the NSA and ask for Morris Sr. He calls back. "I know a few dozen people in the country who could have done it; I could have done it, but I'm a darned good programmer". Did you? you ask. "No". Did your son? "Yes". You leak the story. It's on the front page. Morris Jr. denies it. For a while. And then he steps out into the limelight. He becomes a digital prophet. The arsonist of Babylon.
1989. You sit in that New York Court House. Outside, the city moves as it always does.
You feel the hum of traffic through the wooden bench.
Footsteps echo off dry pavement.
The sound reverberates through single-glazed windows.
A faint breeze cuts the air, like it always does.
To everyone else, May 9th doesn’t feel ordinary.
This case was the first major test of the Computer Fraud and Abuse Act of 1986 (CFAA), a new statute intended to criminalize unauthorized access to and misuse of computer systems. As the journalist who broke the case open, you've been given front-row seats to watch the last year of your life's work come to an end. At the trial's opening, Assistant U.S. Attorney Mark Rasch characterises the worm as more than an academic misadventure. He tells the jury that Robert Morris launched "a full-scale assault" on computer networks, deliberately designing the worm to be hard to stop once it began spreading. He argues that Morris intended the code to break into computers he was not authorized to access, and that the network disruptions amounted to damage under the CFAA. Everyone you spoke to had one worry: the common belief was that the prosecution would have to prove he did it deliberately and knowingly. Morris's defence attorney acknowledges that his client wrote the worm, but frames it as a well-intentioned experiment, not a crime. He tells the jury that Morris created the program to explore network security, believing it would propagate quietly without significant disruption. When the worm grew out of control, Morris sought help to stop it, contacting colleagues at Harvard to distribute a "kill" instruction. This was another mistake.
Under the Computer Fraud and Abuse Act, the government did not need to prove that Morris intended to cause damage. What it needed to prove was that he intentionally accessed computers without authorization. Morris had deliberately designed the worm to penetrate machines he did not have permission to use. From the court's perspective, once that intentional unauthorized access was established, the fact that the resulting damage was unintended did not excuse the act. Robert Morris was not punished for the damage he caused. The crime lay in the boundaries the worm had broken, the holes it had slithered into. As you watch Morris's defence break down into tiny fragments of code, the headline dawns on you. Robert Morris, the first man convicted of cybercrime.
1990. You watch Morris be sentenced. Three years of probation. 400 hours of community service. A $10,050 fine. People forget his first name. Morris lives on as an idea, a prototype for unspeakable acts of cyber warfare. Culturally, the case marked the end of the internet's "innocent era". The early ARPANET was built on trust: machines trusted other machines, administrators trusted users, and programmers trusted the network. The Morris Worm shattered that assumption. ARPANET has just given way to the World Wide Web. Worms have taken over. I am the Creeper. Catch me if you can.