From open letters signed by Elon Musk and Stephen Hawking to telly dramas like Channel 4’s Humans, 2015 has been all about the possible future of artificial intelligence (AI).
“I think we should be very careful about artificial intelligence,” said Musk. “If I had to guess at what our biggest existential threat is, it’s probably that… I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
We may still be decades away from achieving artificial general intelligence (AGI), as opposed to today’s narrow, goal-based AI and AI-faking digital assistants. (Hey, Cortana! Hey, Cortana! I said: hey Cortana!!). Most AI projects are benign, like teaching a robot to play jazz or helping scientists to map the ocean floor.
But there are still enough advances to give people pause for thought. Here are some of the bigger ones:
Mo Brains, Less Problems
Neural networks — essentially vast computer systems modelled after the human brain — have been responsible for many of the recent breakthroughs in artificial intelligence.
We’re still a long way off building a computer with anywhere near the complexity of the human brain. But this year the cognitive computing company Digital Reasoning announced that it had trained the world’s largest neural network — with a massive 160 billion parameters.
The previous record holder? Google, with 11.2 billion parameters. Researchers at Digital Reasoning are currently using the neural network for what they call “word math” for semantic analysis, but the possibilities are limitless.
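“Word math” refers to arithmetic on word embeddings — vectors learned so that relationships between meanings show up as offsets between vectors (the classic example being king − man + woman ≈ queen). Here’s a toy sketch of the idea using tiny hand-made vectors; it illustrates the arithmetic only, and isn’t a description of Digital Reasoning’s actual system, which learns vectors at vastly larger scale from real text:

```python
import math

# Hand-made 3-dimensional "embeddings" for illustration only --
# real systems learn vectors with hundreds of dimensions from huge corpora.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "Word math": king - man + woman should land nearest to queen.
target = add(sub(vectors["king"], vectors["man"]), vectors["woman"])
best = max((w for w in vectors if w != "king"),
           key=lambda w: cosine(vectors[w], target))
print(best)  # queen
```

The same arithmetic, run over billions of learned parameters instead of four toy vectors, is what lets a system answer analogy-style questions it was never explicitly taught.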
Secrets of the DeepMind
Industry watchers were amazed when Google snapped up the British AI startup DeepMind Technologies in 2014 for an amount estimated to be upwards of $400 million.
Setting out its mission to “solve intelligence,” earlier this year DeepMind published research showing how it has developed an AI capable of learning to play a variety of classic video games, without any external teaching. The development hints at the possibility of multi-purpose AI that could learn to solve a variety of goals. Watch a DeepMind algorithm learn to play Breakout below.
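DeepMind’s agents pair deep neural networks with reinforcement learning: the program sees only the screen and the score, and learns by trial and error which action pays off in each situation. The heavily simplified sketch below shows that learning loop in tabular form on a made-up one-dimensional “game” (walk right to score a point) — the real work replaces the table with a deep network reading raw pixels:

```python
import random

random.seed(0)

# A made-up toy "game": positions 0..4, reaching position 4 scores a
# point and ends the episode. Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: value of each action in each state

alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != GOAL:
        # Mostly take the best-known action, occasionally explore at random.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state = nxt

# After training, the greedy policy should always move right, toward the goal.
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

No one tells the agent the rules; the only feedback is the score — which is exactly why the same loop, scaled up, could be pointed at dozens of different Atari games without modification.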
Trial and error, robot style
But solving problems isn’t just limited to computers playing video games. Robots, such as U.C. Berkeley’s BRETT (Berkeley Robot for the Elimination of Tedious Tasks), have been shown capable of acquiring new skills through a variety of teaching methods — including watching instructional videos.
The breakthrough opens the door for general purpose robots which could have greater autonomy and move more easily from task to task, rather than having to be built for specific jobs.
Autonomously driving Miss Daisy
We’ve grown so used to reports about Google’s self-driving car project that it’s easy to forget autonomous vehicles are among the biggest AI breakthroughs of recent years.
Even a decade ago, experts were claiming no vehicle would ever be driven by an algorithm. Today, such vehicles are starting to roll out around the world — including recent news of a self-driving truck. With this has come research on how to make self-driving vehicles ethically responsible: a recurring theme we’re hearing more and more about in modern AI.
The road to a Terminator future
A recent open letter voicing concern about the future of AI dealt with the use of autonomous weapons in warfare. The letter’s authors described such weapons as “the third revolution in warfare, after gunpowder and nuclear arms.”
The technology they are fretting over includes everything from smart drones and anti-drone systems to the use of AI in planes. The ultra-expensive F-35 jet reportedly includes more artificial intelligence than any previous warplane. The idea is that autonomous weapons could help save the lives of thousands of soldiers who would otherwise be put in harm’s way.
Should we be worried? As AI expert Sir Nigel Shadbolt pointed out in an interview with Techworld, “It’s not artificial intelligence that worries me. It’s human stupidity.”
He added: “If we’re stupid and decide to put lots of AI technology into seek and destroy robots, and we don’t have a way of reinserting ourselves in the loop or deciding when they should or shouldn’t hit the kill button, then we have been really stupid.” — Luke Dormehl (@lukedormehl)
Main image credit: maxuser/Shutterstock