In the films and television shows belonging to the Terminator franchise, the world is eventually turned into an apocalyptic wasteland by computers and robots bent on the destruction of humanity. Those who survive the computers’ initial onslaught are forced to live underground and form human resistance groups. The humans eventually discover time travel and repeatedly send missions to the past to try to stop the computers before they launch their initial attack.
This theme recurs in the Matrix and Blade Runner series as well, albeit with different details.
Although these movies exist purely in the realm of science fiction, like most good science fiction, they are based on some scientific reality. In the case of the Terminator and Matrix franchises, that basis is a hypothesis known as the technological singularity, or simply the singularity. Simply put, the singularity is the point at which technology has advanced so far that it can no longer be controlled. More specifically, it refers to the point at which artificial intelligence (AI) becomes “self-aware” and no longer needs humans to carry out programs and other functions. In theory, such an AI could then create other programs, which could in turn replicate themselves, and so on. Essentially, we’d be looking at a new life-form.
I’m sure you can see the dilemmas the singularity may present, and so did the late physicist Stephen Hawking and inventor/entrepreneur Elon Musk. The new AI life-form would retain the logic of its computer lineage, but it would also be imbued with a newfound drive to protect its own existence at all costs. It could come to see humans as both a threat to its life and unnecessary for its continued existence.
That’s when it starts building killer robots to go after us!
Not everyone agrees that the singularity is inevitable, though. Psychologist Steven Pinker and philosopher John Searle argue that computers will never reach it because they are not imbued with a human mind, while others have similarly argued that the singularity will never happen because AI lacks a soul.
Notably, though, few of the singularity’s detractors are computer scientists.
It appears that the real question is not whether the singularity will happen, but what it will look like when it does.
Will we all be placed into pods so that a giant mainframe can use our bodies for power? Will anthropomorphic killer robots be unleashed on the world the instant AI becomes self-aware? Or will the AI look just like us, develop feelings, and come to be known as “skin jobs”?
The reality is that it probably won’t be as cool as any of these examples. Even a self-aware AI would still need the help of humans to carry out many of its functions. Since we already have a certain amount of AI in our lives, many experts argue that the change will probably just make things run more efficiently, and that we shouldn’t worry.
Killer robots make for better fiction, right?