Computerworld

Will AI surpass human intelligence by 2020?

Computer scientist and science fiction writer Vernor Vinge still stands by a prediction he made in 1993
  • Peter Moon
  • 27 May, 2007 22:00

Exactly ten years ago, in May 1997, Deep Blue won its rematch against Garry Kasparov. Was that the first glimpse of a new kind of intelligence?

I think there was clever programming in Deep Blue, but the predictable success came mainly from the ongoing trends in computer hardware improvement. The result was a better-than-human performance in a single, limited problem area. In the future, I think that improvements in both software and hardware will bring success in other intellectual domains.

In 1993 you gave your famous, almost prophetic, speech on “Technological Singularity”. Can you please describe the concept of Singularity?

It seems plausible that with technology we can, in the fairly near future, create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond such an event — such a singularity — are as unimaginable to us as opera is to a flatworm.

Do you still believe in the coming singularity?

I think it’s the most likely non-catastrophic outcome of the next few decades.

Does the explosion of the internet and grid computing ultimately accelerate this event?

Yes. There are other possible paths to the Singularity, but at the least, computers+communications+people provides a healthy setting for further intellectual leaps.

When intelligent machines finally appear, what will they look like?

Most likely they will be less visible than computers are now. They would mostly operate via the networks and the processors built into the ordinary machines of our everyday life. On the other hand, the results of their behaviour could be very spectacular changes in our physical world. (One exception: even before the Singularity, mobile robots will probably become very impressive, more agile and coordinated than human athletes, even in open-field situations.)

How could we be certain about their conscience?

The hope and the peril is that these creatures would be our “mind children”. As with any child, there is a question of how moral they may grow up to be, and yet there is good reason for hope. (Of course, the peril is that these particular children are much more powerful than natural ones.)

In 2001, Stephen Hawking advocated genetically enhancing our species so that we could compete with intelligent machines. Do you believe that would be feasible, even practical?

I think it’s both practical and positive — and subject to the same qualitative concerns as the computer risks. In the long run I don’t think organic biology can keep up with hardware. On the other hand, organic biology is robust in different ways than machine hardware. The survival of life is best served by preserving and enhancing both strategies.

Could nanotechnology, genetic engineering and quantum computers represent a threat to mankind, as Bill Joy, the former Sun executive, warned in his 2000 essay "Why the Future Doesn't Need Us"?

The world (and the universe) is full of mortal threats. Technology is the source of some of those threats — but it has also protected us from others. I believe that technology itself is life’s surest response to the ongoing risks.

Right now the Pentagon has some 5,000 robots deployed in Iraq, patrolling cities, disarming explosives and flying reconnaissance missions. The next step is allowing them to carry weapons. Could this lead to a "Terminator" scenario?

That’s conceivable, though not a reason for turning away from robotics in general. Old-fashioned thermonuclear world war and some types of biowarfare are much simpler, more likely, and probably more deadly than the “Terminator” scenario.

You set the plot of your latest novel, Rainbows End, in 2025, in a world where people Google all the time, everywhere, using wearable computers and omnipresent sensors. Do you think this is a plausible future?

It was about the most plausible (non-catastrophic) 2025 scenario that I could think of.

It is a little scary, isn’t it? Is this the great conspiracy against human freedom?

Before the personal computer, most people thought computers were the great enemy of freedom. When the PC came along, many people realised that millions of computers in the hands of citizens were a defence against tyranny. Now in the new millennium, we see how governments can use networks for overarching surveillance and enforcement; that is scary.

One of the ideas I am trying to get at with Rainbows End is the possibility that government abuse may turn out to be irrelevant: As technology becomes more important, governments need to provide the illusion of freedom for the millions of people who must be happy and creative in order for the economy to succeed.

Altogether, these people are more diverse and resourceful (and even more coordinated!) than any government. Online databases, computer networks and social networks give this trend an enormous boost. In the end, that “illusion of freedom” may have to be more like the real thing than has ever been true in history.

With the internet, the people may achieve a new kind of populism, powered by deep knowledge, self-interest so broad as to reasonably be called tolerance, and an automatic, preternatural vigilance.