The Future of Humanity Institute recently reported the results of a survey conducted at their 2011 Winter Intelligence conference. The survey asked participants, who came from fields like philosophy, computer science and engineering, and AI and robotics, several questions about the future of machine intelligence, and one of the results is somewhat worrying. Participants were asked the following question:

How positive or negative are the ultimate consequences of the creation of a human-level (and beyond human-level) machine intelligence likely to be?

They were asked to assign probabilities to five outcomes: extremely good, good, neutral, bad, and extremely bad. Here is a box-and-whisker plot of the results.

The single most likely outcome is extremely bad. Eyeballing the plot, a good outcome of any degree (extremely good + good) also looks less likely than a bad outcome of any degree (extremely bad + bad). Given that these experts think the result is most likely very bad, why do we hear so little discussion about how to stop intelligent machines from being invented? In response to a question about what kind of organization was most likely to develop machine intelligence, the most common answer was the military. That gives us something of a lever with which to try to slow them down. Should DARPA be shut down?
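To make that back-of-the-envelope comparison concrete, here is a minimal Python sketch of the aggregation; the probability values are illustrative placeholders, not the survey's actual medians.

    # Minimal sketch: comparing aggregate good vs. bad probabilities
    # from one respondent's assignment. The numbers below are
    # placeholders for illustration, NOT values from the survey.
    assignment = {
        "extremely good": 0.10,
        "good": 0.15,
        "neutral": 0.20,
        "bad": 0.25,
        "extremely bad": 0.30,
    }

    p_good = assignment["extremely good"] + assignment["good"]
    p_bad = assignment["extremely bad"] + assignment["bad"]

    print(f"P(good outcome of any degree) = {p_good:.2f}")
    print(f"P(bad outcome of any degree)  = {p_bad:.2f}")
    print("Bad outweighs good:", p_bad > p_good)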

Participants were also asked when human-level machine intelligence would likely be developed. The cumulative distribution below shows their responses:
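As an aside, a curve like that is easy to build yourself: sort the respondents' "50% chance" years and plot the fraction of respondents at or below each year. A minimal Python sketch, with made-up years standing in for the real responses:

    # Minimal sketch of an empirical CDF of "50% chance" forecast years.
    # The years below are made-up placeholders, not the survey data.
    years = [2030, 2040, 2045, 2050, 2050, 2060, 2075, 2100]

    years.sort()
    n = len(years)
    for i, year in enumerate(years, start=1):
        # Fraction of respondents whose estimate is <= this year.
        print(f"{year}: {i / n:.2f}")

    # The median forecast is where the cumulative fraction crosses 0.5.
    median = years[n // 2] if n % 2 else (years[n // 2 - 1] + years[n // 2]) / 2
    print("Median forecast year:", median)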

The median estimate of when there will be a 50% chance of human-level machine intelligence is 2050. That suggests we have around 40 years to enjoy before the extremely bad outcome of human-level robot intelligence arrives. The report also presents a list of milestones which participants said would signal that human-level intelligence is within five years. I suppose this will be a useful guide for when we should start panicking. A sample of these includes:

  • Winning an Oxford Union-style debate
  • World's best chess-playing AI was written by an AI
  • Emulation/development of mouse level machine intelligence
  • Full dog emulation…
  • Whole brain emulation, semantic web
  • Turing test or whole brain emulation of a primate
  • Toddler AGI
  • An AI that is a human level AI researcher
  • Gradual identification of objects: from an undifferentiated set of unknown size (parking spaces, dining chairs, students in a class), recognition of particular objects amongst them with no re-conceptualization
  • Large-scale (1024-bit) quantum computing (assuming cost-effective for researchers), exaflop-per-dollar conventional computers, toddler-level intelligence
  • Already passed, otherwise such discussion among ourselves would not have been funded, let alone be tangible, observable and accordable on this scale: as soon as such a thought is considered a 'reasonable' thought to have

There you have it. These are the things to look out for that may foretell a robot disaster on the horizon. Of course, if that last respondent is right, it's probably too late already.
