Monday, November 21, 2011

The Singularity is Nearer Than You Think


I’m reading The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil. While I’ll post a full book review later, I wanted to share a few thoughts about his basic calculations concerning when artificial intelligence will surpass human intelligence.

On the one hand, I agree with the basic calculations. Ray Kurzweil uses a base value of 10^16 calculations per second (cps) needed to mimic human intelligence based on functional equivalence; I’m more inclined to believe the 10^19 cps figure based on simulating the brain’s physical structure. That’s because I have a hard time believing that we’re going to design something more elegant than nature. (Look at Janine Benyus’s Biomimicry, which provides scores of examples demonstrating that nature is still way ahead of us when it comes to effective solutions to real-world problems.)
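To put those two thresholds in perspective, here’s a back-of-the-envelope sketch in Python. The ~1.5-year price-performance doubling time is my own assumption, not a figure from the book:

```python
import math

functional_cps = 1e16   # Kurzweil's functional-equivalence estimate
structural_cps = 1e19   # estimate for simulating the brain's physical structure

extra_doublings = math.log2(structural_cps / functional_cps)  # ~10 doublings
years_per_doubling = 1.5  # assumed price-performance doubling time (my assumption)

print(f"extra doublings needed: {extra_doublings:.1f}")
print(f"extra calendar time: {extra_doublings * years_per_doubling:.0f} years")
```

In other words, under that assumed growth rate the more conservative 10^19 figure pushes the hardware date out by only about a decade and a half.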

While the author seems to believe there has been real progress on AI, I have to ask whether that feels true. Computers don’t feel smarter to me. They improved incrementally between 1986 and 2006, but are they smarter? About the smartest thing my computer does is remember which menu items I click on frequently and hide the ones I don’t; that’s not smart, that’s a simple usage-tracking algorithm. The smartest thing the web has done is recommend products I might like based on my purchasing habits.

That is pretty smart, but is it surprising? (In later chapters the author does lament the trend of downplaying the contribution of AI by classifying previously hard problems as simple algorithms.)

We’ve replaced the 640K DOS memory limit with the 100 MB Outlook Exchange server limit at my company. Is email smarter? No, it has just grown in capability.

The technology innovations that amaze me are people-driven, not computer-driven. Wikipedia harnesses the power of people, not computers. Ruby on Rails is the brainchild of a few outstanding hackers. But each is still a pretty simple application of rules.

I do think we’ll hit that singularity, but it will be through brute force, not because we’re clever engineers who can implement functional equivalence better than nature can.

But even though it will be through brute force, I still think we’ll hit it earlier than Ray believes. That’s because Ray posits that the singularity will occur only when the technology is a billion times more powerful than a single human brain. The mistake is thinking that we must exceed the brain capacity of humans so radically. I think innovation comes not from the sheer mass of brain power, but from the concentration of brain power. The really big ideas come from a few brilliant people. Are those brilliant people a million times smarter than you or I? Or are they just a dozen times smarter and, by virtue of that concentration of brain power in one mind, capable of reaching the critical mass needed to achieve breakthroughs?

In other words, I think the first few AIs brought online will exceed our intelligence, rapidly design the next generation of themselves, and proceed at a pace far exceeding our own capability to follow.

Ray believes that the hardware technology necessary to simulate human-comparable AIs will exist somewhere around 2020, the software by 2030, and that the singularity (an exponentially spiraling increase in machine intelligence) will occur by 2040.

My timeline is somewhat different. I think the hardware will get there between 2020 and 2025, and the software will immediately follow. (If you have followed my technology projections spreadsheet, you’ll know that you can predict when technology uses such as MP3 sharing, internet phone calling, and video streaming emerge from hardware capability, not software. The trend appears to indicate that if the hardware exists, then out of the great mass of humanity the software will arise more or less spontaneously.)
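As a minimal sketch of that style of projection, here’s the same kind of extrapolation aimed at the 10^19 cps target. The 2011 baseline of roughly 10^16 cps for a top-end supercomputer and the doubling period are my assumptions, not figures from the spreadsheet:

```python
import math

def year_hardware_reaches(target_cps, baseline_cps=1e16, baseline_year=2011,
                          years_per_doubling=1.2):
    """Extrapolate the year exponentially growing hardware reaches target_cps."""
    doublings = math.log2(target_cps / baseline_cps)
    return baseline_year + doublings * years_per_doubling

# Structural-simulation threshold under these assumptions:
print(round(year_hardware_reaches(1e19)))  # ~2023
```

With a 1.2-year doubling period the 10^19 cps threshold lands around 2023, inside that 2020–2025 window; stretch the doubling period to 1.5 years and it slips to roughly 2026.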

The singularity will then occur within just a few years (i.e., by the late 2020s) as machine intelligence greatly increases the exponential acceleration of technology and solves the truly difficult problems (nanotechnology, effective gene manipulation) with which we’re making only slow progress. My estimate is that we reach the singularity at least ten years before Ray's best estimate, even though my estimate starts with the assumption that we need a simulation at the level of neuron interactions and behavior.
