AI Has Arrived, and That Really Worries the World’s Brightest Minds
On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race. That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front and center in our lives. Google, Facebook, Microsoft, and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate and putting hundreds of millions of dollars into the race for better algorithms and smarter computers. AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”
Google Gets on Board
Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011. That’s when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert.

The two men started talking about AI, and Tallinn soon invested in DeepMind. Last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world.

Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings. That worries Tallinn, somewhat.

In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.
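DeepMind has said little publicly about the exact system Hassabis demonstrated, but its published Atari work learned games like Breakout through reinforcement learning: trial and error guided only by the score. As a rough illustration of that core idea (and emphatically not DeepMind’s implementation, which trained a deep convolutional network on raw screen pixels), here is a minimal sketch of tabular Q-learning on an invented “catch the falling ball” toy game. The environment, rewards, and hyperparameters are all made up for the example.

```python
# A toy stand-in for learning a game from reward alone: tabular Q-learning
# on a tiny "catch the falling ball" grid. This illustrates the general
# reinforcement-learning idea, not DeepMind's actual Breakout system.
import random
from collections import defaultdict

W, H = 5, 5                      # grid width and height (invented toy sizes)
ACTIONS = (-1, 0, 1)             # move paddle left, stay, move right

def reset():
    """New episode: ball at a random column on the top row, paddle centered."""
    return (random.randrange(W), 0, W // 2)

def step(state, action):
    """Advance one tick; reward arrives only when the ball reaches the bottom."""
    ball_x, ball_y, paddle_x = state
    paddle_x = min(W - 1, max(0, paddle_x + action))
    ball_y += 1
    if ball_y == H - 1:          # ball reached the paddle row: episode ends
        return (ball_x, ball_y, paddle_x), (1.0 if ball_x == paddle_x else -1.0), True
    return (ball_x, ball_y, paddle_x), 0.0, False

Q = defaultdict(float)           # Q[(state, action)] -> estimated return
alpha, gamma, eps = 0.1, 0.9, 0.1

def policy(state, greedy=False):
    """Epsilon-greedy during training, purely greedy for evaluation."""
    if not greedy and random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(5000):
    state, done = reset(), False
    while not done:
        action = policy(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge Q toward reward + discounted best next value.
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# Evaluate the learned greedy policy.
wins = 0
for _ in range(1000):
    state, done = reset(), False
    while not done:
        state, reward, done = step(state, policy(state, greedy=True))
    wins += reward > 0
print(f"caught {wins}/1000 balls after training")
```

In DeepMind’s real system, the lookup table is replaced by a deep neural network that estimates the same action values directly from screen pixels, which is what lets the approach scale from toys like this to actual Atari games.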
Deciding the dos and don’ts of scientific research is the kind of baseline ethical work that molecular biologists did during the 1975 Asilomar Conference on Recombinant DNA, where they agreed on safety standards designed to prevent manmade genetically modified organisms from posing a threat to the public. The Asilomar conference had a much more concrete result than the Puerto Rico AI confab.

At the Puerto Rico conference, attendees signed a letter outlining the research priorities for AI—study of AI’s economic and legal effects, for example, and the security of AI systems. And yesterday, Elon Musk kicked in $10 million to help pay for this research. These are significant first steps toward keeping robots from ruining the economy or generally running amok. But some companies are already going further. Last year, Canadian robot maker Clearpath Robotics promised not to build autonomous robots for military use. “To the people against killer robots: we support you,” Clearpath Robotics CTO Ryan Gariepy wrote on the company’s website.

Pledging not to build the Terminator is but one step. AI companies such as Google must think about the safety and legal liability of their self-driving cars, whether robots will put humans out of a job, and the unintended consequences of algorithms whose decisions strike humans as unfair. Is it, for example, ethical for Amazon to sell products at one price to one community while charging a different price to a second community? What safeguards are in place to prevent a trading algorithm from crashing the commodities markets? What will happen to the people who work as bus drivers in the age of self-driving vehicles?
Itamar Arel is the founder of Binatix, a deep learning company that makes trades on the stock market. He wasn’t at the Puerto Rico conference, but he signed the letter soon after reading it. To him, the coming revolution in smart algorithms and cheap, intelligent robots needs to be better understood. “It is time to allocate more resources to understanding the societal impact of AI systems taking over more blue-collar jobs,” he says. “That is a certainty, in my mind, which will take off at a rate that won’t necessarily allow society to catch up fast enough. It is definitely a concern.”

Predictions of a destructive AI super-mind may get the headlines, but it’s these more prosaic AI worries that need to be addressed within the next few years, says Murray Shanahan, a professor of cognitive robotics at Imperial College London. “It’s hard to predict exactly what’s going on, but we can be pretty sure that they are going to affect society.”
The following is a comment from a Facebook friend, Shane C, which he left under a topic I posted on Facebook yesterday: Victoria Rollison’s “Abbott is hiding from the future”. It deserves a wider readership than the Facebook page could offer, and it is my pleasure to post it here. You will agree that Shane raises some thought-provoking points. Here is what Shane said:
The Cabinet of the new Australian government has just one woman, no science ministry, only one member who takes scientists’ findings on global warming to be true (but who believes a future-proof broadband network is not needed), and a particularly dense education minister who thinks advanced tertiary education is a privilege that only the rich should have.
This is a cause of great concern.
A number of breakthroughs in science and technology will start to emerge over the next five to twenty years. Some of the main ones will be in medical science, materials technology, artificial intelligence, quantum computing, and nanotechnology.
A number of consequences will follow, including but not limited to: the cure for all diseases, not just cancer, resulting in very long life, the catch being that only the rich and privileged can afford it; and the automation of labour in all mining and heavy industry, resulting in the sudden unemployment of all non-skilled labour.
Because of the exponential nature of technological development, the nations that are first to develop the three key technologies, AI (artificial intelligence), QC (quantum computing), and NT (nanotechnology), will rule the world. Their medical and materials technologies will stand in the same relation to the rest of the world’s as current technological societies do to stone-age ones.
Not only will the nations who get to the key twenty-first-century technologies first have a gigantic economic advantage, but access to advanced technologies will bootstrap their advantage even further, because they will go on to develop advanced means of accessing space with AI-designed, NT-grown, single-stage-to-orbit (SSTO) spacecraft that will be relatively cheap to produce and operate.
We could be looking at a twenty-first century dominated by a few AI/QC/NT-enhanced societies taking humanity’s first truly permanent steps out into space. The successful nations will be those that take scientific research seriously, provide the world’s best telecommunications infrastructure, and give their populations the best access to medical care and education.
And the new Australian government is not interested in any of this.
If this is the future of warfare and intelligence gathering, rest assured it won’t only be Washington doing it.
Last month, philosopher Patrick Lin delivered this briefing on the ethics of drones at an event hosted by In-Q-Tel, the CIA’s venture-capital arm (via The Atlantic):
Let’s look at some current and future scenarios. These go beyond the obvious intelligence, surveillance, and reconnaissance (ISR), strike, and sentry applications that most robots are being used for today. I’ll limit these scenarios to a time horizon of about 10-15 years from now.
Military surveillance applications are well known, but there are also important civilian applications, such as robots that patrol playgrounds for pedophiles (for instance, in South Korea) and major sporting events for suspicious activity (such as the 2006 World Cup in Germany and the 2008 Beijing Olympics). Current and future biometric capabilities may enable robots to detect faces, drugs, and weapons at a distance and underneath clothing. In the future, robot swarms and “smart dust” (sometimes called nanosensors) may be used in this role.
Robots can be used for alerting purposes: a humanoid police robot in China gives out information, and a Russian police robot recites laws and issues warnings. So there’s potential for educational or communication roles and on-the-spot community reporting, as related to intelligence gathering.
In delivery applications, SWAT police teams already use robots to interact with hostage-takers and in other dangerous situations. So robots could be used to deliver other items or plant surveillance devices in inaccessible places. Likewise, they can be used for extractions. As mentioned earlier, the BEAR robot can retrieve wounded soldiers from the battlefield, as well as handle hazardous or heavy materials. In the future, an autonomous car or helicopter might be deployed to extract or transport suspects and assets, limiting the need for US personnel inside hostile or foreign borders.
In detention applications, robots could be used not just to guard buildings but also people. Some advantages here would be the elimination of prison abuses like those we saw at Guantanamo Bay Naval Base in Cuba and at Abu Ghraib prison in Iraq. This speaks to the dispassionate way robots can operate. Relatedly (and I’m not advocating any of these scenarios, just speculating on possible uses), robots could solve the dilemma of using physicians in interrogations and torture, activities that conflict with their duty to care and the Hippocratic oath to do no harm. Robots can monitor the vital signs of interrogated suspects as well as a human doctor can. They could also administer injections and even inflict pain in a more controlled way, free from the malice and prejudice that might take things too far (or much further than they already do).