Cover Story: Man Vs Machine

 

SUNDAY POST Dec 21-27

The war between artificial intelligence and human intellect is getting nastier, with Stephen Hawking, one of the smartest people on earth, sounding the death knell for the human race. But Sunday POST comes closer to a greater realisation: what it means to be human.

The recent statement by famed scientist Stephen Hawking that “The development of full artificial intelligence (AI) could spell the end of the human race” has given a jolt to the tech-dependent world, as the cosmologist warns that “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” It may read like the script of a dystopian movie in which cunning automatons overpower the human race and eventually slaughter them (not to forget, we have movies like The Matrix, Wall-E, the Terminator series and 2001: A Space Odyssey to underpin our fear), but Hawking’s apocalyptic vision is far from fiction.
The scientist, who as a result of his motor neuron disease is almost totally paralysed, uses a voice synthesiser to communicate. He has recently begun using a new system that employs AI to learn how he writes and accordingly suggest words he might want to use next.
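(For illustration only: predictive-text systems of this broad kind typically learn which words tend to follow which in a user’s past writing, then rank candidates accordingly. The short Python sketch below is a toy bigram model of that idea; the function names and the tiny corpus are our own invention, and the software Hawking actually uses is far more sophisticated.)

    from collections import Counter, defaultdict

    def train_bigrams(text):
        # Count, for each word, the words that have followed it.
        words = text.lower().split()
        following = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
        return following

    def suggest_next(model, word, k=3):
        # Return up to k words most often seen after `word`.
        return [w for w, _ in model[word.lower()].most_common(k)]

    # A toy corpus standing in for a user's past writing.
    corpus = ("the universe is expanding and the universe is vast "
              "and black holes are not so black")
    model = train_bigrams(corpus)
    print(suggest_next(model, "universe"))  # -> ['is']
    print(suggest_next(model, "the"))       # -> ['universe']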
In spite of receiving a “life-changing upgrade” to the computer software that he is currently using, he didn’t mince his words when it came to charting out the existential threats from superintelligence.
This is not the first time that he has been outspoken about AI. He has said earlier that “It would be the biggest event in human history. Unfortunately, it might also be the last.”
Is Hawking alone in his fear? Hardly. While Stanford University has recently invited leading thinkers from across the world to begin a 100-year comprehensive study of the long-term implications of AI in all aspects of life, a clutch of intellectuals from various fields joins Hawking in his anti-AI campaign.

Singularity looms large
Singularity, the point at which machine intelligence surpasses human intelligence, is the most feared subject of the Information Age. Scientists apprehend that imbuing robots with the ability both to replicate themselves and to increase the rate at which they get smarter would mean they one day outsmart humans. Here are some noted opponents of AI voicing their concerns:

• Oxford philosophy professor Nick Bostrom lays out his concerns in his book Superintelligence: Paths, Dangers, Strategies. He cites an example:
“Horses were initially complemented by carriages and ploughs, which greatly increased the horse’s productivity. Later, horses were substituted for by automobiles and tractors. When horses became obsolete as a source of labour, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained. The same dark outcome could happen to humans once AI makes our labour and intelligence obsolete.”
The biggest question posed here is whether humans can escape the harrowing fate of the horses.

Six decades ago, mathematician John von Neumann first spoke of an AI singularity, an idea later popularised by inventor and futurologist Ray Kurzweil. Kurzweil has, in fact, prophesied that the singularity would strike the planet as early as 2045.
• Tech entrepreneur and co-founder of SpaceX (space transport) and Tesla (electric cars) Elon Musk is of the view that artificial intelligence is “potentially more dangerous than nukes”. He tweeted in August this year: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”
• Superintelligence has been fodder for many writers. Back in 1997, Professor Kevin Warwick of Reading University, while promoting his book March of the Machines, talked about the possible threats from AI.
• James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, has something really interesting to share: “Can a submarine swim? Yes, but it doesn’t swim like a fish. Does an airplane fly? Yes, but not like a bird. Artificial intelligence won’t be like us, but it will be the ultimate intellectual version of us. We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest. So when there is something smarter than us on the planet, it will rule over us on the planet.”
• Bonnie Docherty, a lecturer on law at Harvard University, draws a parallel between autonomous weapons and nuclear weapons. The senior researcher at Human Rights Watch says the race to build autonomous weapons with artificial intelligence, which is already underway, is reminiscent of the early days of the race to build nuclear weapons, and that treaties should be put in place now, before we reach a point where machines are killing people on the battlefield.
You can well imagine where it could lead if such technology is not stopped now. Her reports on “killer” robots send shivers down one’s spine: “If one state develops it, then another state will develop it. And machines that lack morality and mortality should not be given power to kill.”

 

Cataclysmic!
The scariest prospect of all is such technologies being used by the military for the wrong reasons. So, to keep potential chaos at bay, there should be a board on AI safety and ethics. Earlier this year, search-engine giant Google acquired DeepMind, a neuroscience-inspired artificial intelligence company based in London, and the two have joined hands to ensure AI technologies develop safely. AI developers should certainly be mindful of the ethical consequences of what they create. All for a peaceful world!

Are we hyping it?
But at the same time, sceptics of this doomsday view rule out any possibility of a technology-driven apocalypse in the near future.
Mark Bishop, professor of cognitive computing at Goldsmiths, University of London, points to key human abilities, such as understanding and consciousness, to dispel the singularity trepidation, arguing that these traits are fundamentally lacking in so-called “intelligent” computers. “This lack means that there will always be a ‘humanity gap’ between any artificial intelligence and a real human mind. Because of this gap a human working in conjunction with any given AI machine will always be more powerful than that AI working on its own,” avers the professor.
We conclude that AI could function in one of two ways. Either it could greatly improve our lives by solving the world’s perennial problems, like disease and hunger. Or it could outsmart us and possibly seize our power. Today these machines may seem innocuous, but as they are given more power, the automatons may not take long to fly off the handle. Initially the glitches may be small, but eventually they may spiral out of control, triggering huge losses. Imagine a driverless car pulling out onto a highway at midnight because of a bug in its software, or a rogue computer roiling the stock market and causing billions in damage, or, worse still, a medical robot originally programmed to treat cancer concluding that the best way to obliterate the malignant cells is to eliminate the host himself, as he is genetically prone to the disease!

Automatons taking over humanity

The intelligence exhibited by machines or software is called artificial intelligence; it is also an academic field of study. Nearly a century ago, Czech writer Karel Capek introduced the word “robot”, from the Czech “robota”, meaning forced labour, in his play R.U.R. Today, robots have taken over sectors like pharmaceuticals and cosmetics, food processing, rubber and plastics, metals, electronics manufacturing and auto manufacturing. In recent years, countries like Canada, the Netherlands, Austria, Belgium, France, Taiwan, Finland, Spain, the United States, Denmark, Sweden, Italy, Germany, Japan and South Korea have been witnessing robots displacing human workers in several fields at an increasing rate.

Did you know?

For the past two decades, the artificial-intelligence community has organised an annual competition, the Loebner Prize, built around the Turing Test. The test is named after British mathematician Alan Turing, one of the founders of computer science and the same person who quite famously predicted that by the year 2000 computers would be able to fool 30 per cent of human judges after five minutes of chit-chat, though this has not yet come to pass. In 1950, he attempted to answer one of the most vexing questions of our times: “Can machines think?” While we are yet to conclude whether it would ever be possible to construct a computer so superior that it could actually have a mind of its own, the participants and judges of the test try their hand at it each year.
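(The protocol behind the prize is simple to sketch: a judge exchanges messages with two hidden interlocutors, one human and one program, and must say which is which. The Python sketch below is purely illustrative; the canned replies and function names are our own invention, not any actual contest entry.)

    import random

    def bot_reply(question):
        # A trivial stand-in for a chatbot contestant.
        canned = {
            "hello": "Hi there! How are you today?",
            "are you a machine?": "Ha! People ask me that all the time.",
        }
        return canned.get(question.lower(), "Interesting. Tell me more.")

    def run_turing_test(questions, human_reply):
        # The judge questions two hidden players, A and B. One is the
        # program, the other a human, assigned at random as in the real
        # contest; the machine "passes" if judges cannot reliably tell.
        players = {"A": bot_reply, "B": human_reply}
        if random.random() < 0.5:
            players["A"], players["B"] = players["B"], players["A"]
        for q in questions:
            print(f"Judge: {q}")
            for label in ("A", "B"):
                print(f"   {label}: {players[label](q)}")

    run_turing_test(
        ["Hello", "Are you a machine?"],
        human_reply=lambda q: "Fine, thanks. Just finished my coffee.",
    )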
