AI Has Arrived, and That Really Worries the World’s Brightest Minds

On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.

That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—have put AI-driven products front and center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

Google Gets on Board

Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011, when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert. The two men started talking about AI, and Tallinn soon invested in DeepMind. Last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world.

Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings. That worries Tallinn, somewhat. In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.
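(For readers curious what “a machine learning system that could play Breakout” looks like in practice, the sketch below is a minimal, hypothetical illustration of the deep Q-learning idea behind such Atari-playing agents, not DeepMind’s actual code. It assumes the gymnasium Atari environment and PyTorch; the network architecture, hyperparameters, and the preprocess helper are illustrative choices, and the system DeepMind later published also relied on frame stacking, experience replay, and a target network, omitted here for brevity. The agent looks at raw screen pixels, estimates a value for each joystick action, and nudges those estimates toward reward-based targets as it plays.)

```python
# Minimal, hypothetical sketch of deep Q-learning on Breakout, NOT DeepMind's system.
# Assumes gymnasium with Atari support (pip install "gymnasium[atari,accept-rom-license]")
# and PyTorch. Network shape, hyperparameters, and preprocess() are illustrative.
import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn

env = gym.make("ALE/Breakout-v5", obs_type="grayscale")
n_actions = int(env.action_space.n)

# Convolutional feature extractor over a single grayscale frame.
conv = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(),
)
# Work out the flattened feature size by pushing a dummy frame through the conv stack.
with torch.no_grad():
    n_features = conv(torch.zeros(1, 1, *env.observation_space.shape)).shape[1]

# Q-network: one frame in, one estimated action value per joystick action out.
q_net = nn.Sequential(conv, nn.Linear(n_features, 256), nn.ReLU(), nn.Linear(256, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)
gamma, epsilon = 0.99, 0.1  # discount factor and exploration rate

def preprocess(frame):
    # Scale pixels to [0, 1] and add batch and channel dimensions: (1, 1, H, W).
    return torch.tensor(frame, dtype=torch.float32).unsqueeze(0).unsqueeze(0) / 255.0

for episode in range(10):
    obs, _ = env.reset()
    state, done = preprocess(obs), False
    while not done:
        # Epsilon-greedy: mostly exploit the current Q estimates, sometimes explore.
        if np.random.rand() < epsilon:
            action = int(env.action_space.sample())
        else:
            with torch.no_grad():
                action = int(q_net(state).argmax(dim=1).item())

        obs, reward, terminated, truncated, _ = env.step(action)
        next_state, done = preprocess(obs), terminated or truncated

        # One-step temporal-difference target: r + gamma * max_a' Q(s', a').
        with torch.no_grad():
            target = float(reward) + (0.0 if done else gamma * q_net(next_state).max().item())
        prediction = q_net(state)[0, action]
        loss = (prediction - target) ** 2

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        state = next_state
```

With the omitted pieces restored and far more training, this is the style of system that can end up playing with the ruthless efficiency Tallinn describes.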

Deciding the dos and don’ts of scientific research is the kind of baseline ethical work that molecular biologists did during the 1975 Asilomar Conference on Recombinant DNA, where they agreed on safety standards designed to prevent manmade genetically modified organisms from posing a threat to the public. The Asilomar conference had a much more concrete result than the Puerto Rico AI confab.

At the Puerto Rico conference, attendees signed a letter outlining the research priorities for AI—study of AI’s economic and legal effects, for example, and the security of AI systems. And yesterday, Elon Musk kicked in $10 million to help pay for this research. These are significant first steps toward keeping robots from ruining the economy or generally running amok. But some companies are already going further. Last year, Canadian robotics company Clearpath Robotics promised not to build autonomous robots for military use. “To the people against killer robots: we support you,” Clearpath Robotics CTO Ryan Gariepy wrote on the company’s website.

Pledging not to build the Terminator is but one step. AI companies such as Google must think about the safety and legal liability of their self-driving cars, whether robots will put humans out of a job, and the unintended consequences of algorithms that seem unfair to humans. Is it, for example, ethical for Amazon to sell products at one price to one community, while charging a different price to a second community? What safeguards are in place to prevent a trading algorithm from crashing the commodities markets? What will happen to the people who work as bus drivers in the age of self-driving vehicles?


Itamar Arel is the founder of Binatix, a deep learning company that makes trades on the stock market. He wasn’t at the Puerto Rico conference, but he signed the letter soon after reading it. To him, the coming revolution in smart algorithms and cheap, intelligent robots needs to be better understood. “It is time to allocate more resources to understanding the societal impact of AI systems taking over more blue-collar jobs,” he says. “That is a certainty, in my mind, which will take off at a rate that won’t necessarily allow society to catch up fast enough. It is definitely a concern.”

Predictions of a destructive AI super-mind may get the headlines, but it’s these more prosaic AI worries that need to be addressed within the next few years, says Murray Shanahan, a professor of cognitive robotics at Imperial College London. “It’s hard to predict exactly what’s going on, but we can be pretty sure that they are going to affect society.”

Australia | New Right Wing Government Leaps Forward into The Dark Ages


New Australian Government Cancels the Future

The following is a comment from a Facebook friend, Shane C, which he left under a topic I posted on Facebook yesterday: Victoria Rollison’s “Abbott is hiding from the future”. It deserves a wider readership than the Facebook page could offer, and it is my pleasure to post it here. I think you will agree that Shane raises some thought-provoking points. Here is what Shane said:

The Cabinet of the new Australian government has just one woman, no science ministry, only one member who takes scientists’ findings on global warming to be true (but who believes a future-proof broadband network is not needed), and a particularly dense education minister who thinks advanced tertiary education is a privilege that only the rich should have.

This is a cause of great concern.

A number of breakthroughs in science and technology will start to emerge over the next five to twenty years. Some of the main ones will be in medical science, materials technology, artificial intelligence, quantum computing, and nanotechnology.

A number of consequences will follow, including but not limited to: the cure for all diseases, not just cancer, resulting in very long lives, the catch being that only the rich and privileged will be able to afford them; and the automation of labour in all mining and heavy industry, resulting in the sudden unemployment of all non-skilled workers.

Because of the exponential nature of technological development, the nations that develop the three key technologies first, AI (artificial intelligence), QC (quantum computing), and NT (nanotechnology), will rule the world. Their medical and materials technologies will be to the rest of the world as those of current technological societies are to stone-age ones.

Not only will the nations that get to the key twenty-first-century technologies first have a gigantic economic advantage, but access to advanced technologies will bootstrap that advantage even further, because they will go on to develop advanced means of accessing space, with AI-designed, NT-grown, single-stage-to-orbit (SSTO) spacecraft that will be relatively cheap to produce and operate.

We could be looking at a twenty-first century dominated by a few AI/QC/NT-enhanced societies taking humanity’s first truly permanent steps out into space. The nations that succeed will be those that take scientific research seriously, provide the world’s best telecommunications infrastructure, and give their populations the best access to medical care and education.

And the new Australian government is not interested in any of this.


Scary! Robots Will Control Us All!


Perhaps the scariest article you’ll read all year (robots will soon control us all)


If this is the future of warfare and intelligence gathering, rest assured it won’t only be Washington doing it.

Last month philosopher Patrick Lin delivered this briefing about the ethics of drones at an event hosted by In-Q-Tel, the CIA’s venture-capital arm (via the Atlantic):

Let’s look at some current and future scenarios. These go beyond obvious intelligence, surveillance, and reconnaissance (ISR), strike, and sentry applications, as most robots are being used for today. I’ll limit these scenarios to a time horizon of about 10-15 years from now.

Military surveillance applications are well known, but there are also important civilian applications, such as robots that patrol playgrounds for pedophiles (for instance, in South Korea) and major sporting events for suspicious activity (such as the 2006 World Cup in Seoul and 2008 Beijing Olympics). Current and future biometric capabilities may enable robots to detect faces, drugs, and weapons at a distance and underneath clothing. In the future, robot swarms and “smart dust” (sometimes called nanosensors) may be used in this role.

Robots can be used for alerting purposes, such as a humanoid police robot in China that gives out information, and a Russian police robot that recites laws and issues warnings. So there’s potential for educational or communication roles and on-the-spot community reporting, as related to intelligence gathering.

In delivery applications, SWAT police teams already use robots to interact with hostage-takers and in other dangerous situations. So robots could be used to deliver other items or plant surveillance devices in inaccessible places. Likewise, they can be used for extractions too. As mentioned earlier, the BEAR robot can retrieve wounded soldiers from the battlefield, as well as handle hazardous or heavy materials. In the future, an autonomous car or helicopter might be deployed to extract or transport suspects and assets, to limit US personnel inside hostile or foreign borders.

In detention applications, robots could also be used to not just guard buildings but also people. Some advantages here would be the elimination of prison abuses like we saw at Guantanamo Bay Naval Base in Cuba and Abu Ghraib prison in Iraq. This speaks to the dispassionate way robots can operate. Relatedly–and I’m not advocating any of these scenarios, just speculating on possible uses–robots can solve the dilemma of using physicians in interrogations and torture. These activities conflict with their duty to care and the Hippocratic oath to do no harm. Robots can monitor vital signs of interrogated suspects, as well as a human doctor can. They could also administer injections and even inflict pain in a more controlled way, free from malice and prejudices that might take things too far (or much further than already).