timtyler comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong

Post author: Kaj_Sotala 26 December 2010 11:21AM




Comment author: timtyler 28 December 2010 07:12:49PM 0 points

I looked at http://lesswrong.com/lw/wf/hard_takeoff/

I was left pretty puzzled about what "AI go FOOM" was actually intended to mean. The page shies away from making any kind of quantifiable statement.

You seem to be assigning probabilities to this - as though it is a well defined idea - but what is it supposed to mean?

Comment author: XiXiDu 28 December 2010 07:45:02PM 2 points

You seem to be assigning probabilities to this - as though it is a well defined idea - but what is it supposed to mean?

I don't know either, but since I asked Rain to assign probabilities to it, I felt I had to state my own as well. I asked him to do so because I have read people arguing in favor of making explicit probability estimates, of actually naming a number. Since I haven't come across much analysis that does state numbers, I thought I'd ask a donor who contributed the current balance of his bank account.

Comment author: Vaniver 28 December 2010 07:30:43PM 1 point

what is it supposed to mean?

My understanding is that it means "the AI reaches a point where software improvements let it outpace us and trick us into doing anything it wants, and where it understands nanotechnology well enough that it soon has effectively unlimited material power."

Instead of 1e-4 I'd probably put that at 1e-6 to 1e-9, but I have little experience accurately estimating very low probabilities.

(The sticking point of my interpretation is something that seems glossed over in what I've read about it: the AI has complete control only over software improvements. If it runs on silicon chips, all it can do is tell us about better chip designs (unless it has hacked a factory and can somehow assemble itself). Even if it's as intelligent as EY imagines it can be, I don't see how it could derive GR from a webcam-quality picture; massive intelligence is no replacement for scant evidence. Those problems can be worked around, since with access to the internet it has a lot of evidence and a lot of power, but they suggest that in some limited cases FOOM is very improbable.)

Comment author: timtyler 29 December 2010 08:28:11PM 0 points

I am pretty sure that the "FOOM" term is an attempt to say something about the timescale of the growth of machine intelligence, so I am sceptical of definitions that involve the concept of trickery. Rapid growth need not involve trickery, and my FOOM sources don't seem to mention it. Do you have any references relating to the point?

Comment author: ata 29 December 2010 08:31:34PM 2 points

The bit about "trickery" was probably just a reference to the weaknesses of AI boxing. You are correct that it's not essential to the idea of hard takeoff.