timtyler comments on Open Thread, August 2010 - Less Wrong

4 Post author: NancyLebovitz 01 August 2010 01:27PM




Comment author: kmeme 03 August 2010 11:54:55PM 0 points [-]

Wow, good stuff. I especially liked this one of yours, not linked above:

http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

I called the bluff on the exponential itself, but I was willing to believe that crossing the brain-equivalent threshold and the rise of machine intelligence could produce some kind of sudden acceleration or event. I felt The Singularity wasn't going to happen because of exponential growth itself, but might still happen because of where exponential growth takes us.

But you make a very good case that the whole thing is bunk. I especially like the "different levels of intelligence" point; I hadn't heard that before with regard to AI.

But I still find it tempting to say there is just something special about machines that can design other machines. Like pointing a camcorder at a TV screen, it seems to lead to some kind of instant recursion. But maybe it is similar: a neat trick, but not something which changes everything all of a sudden.

I wonder if someone 50 years ago said "some day computers will display high-quality video and everyone will watch computers instead of TV or film." Sure, it is happening, but it's a rather long, slow transition which in fact might never be 100% complete. Maybe AI is more like that.

Comment author: timtyler 04 August 2010 08:57:00AM *  1 point [-]

I am not sure what you mean about the "different levels of intelligence" point. Maybe this:

"A machine intelligence that is of "roughly human-level" is actually likely to be either vastly superior in some domains or vastly inferior in others - simply because machine intelligence so far has proven to be so vastly different from our own in terms of its strengths and weaknesses [...]"

Comment author: kmeme 04 August 2010 11:16:59AM -1 points [-]

Actually, by "different levels of intelligence" I meant your point that humans themselves have very different levels of intelligence, one from the other - so "human-level AI" is a very broad target, not a narrow one.

I've never seen it discussed: does an AI require more computation to think about quantum physics than to think about what order to pick up items at the grocery store? How about training time? Is it a little more, or orders of magnitude more? I don't think it is known.

Comment author: timtyler 04 August 2010 11:40:36AM *  2 points [-]

Human intelligence can go down pretty low at either end of life - and in sickness. There is a bit of a lump of well people in the middle, though, where intelligence is not so widely distributed.

The intelligence required to do jobs is currently even more spread out. As automation progresses, the low end of that range will be gradually swallowed up.

Comment author: Baughn 04 August 2010 11:33:10AM 0 points [-]

More? If anything, I suspect thinking about quantum physics takes less intelligence; it's just not what we've evolved to do. An abstraction inversion, of sorts.

Hm. I also have this pet theory that some past event (that one near-extinction?) has caused humans to have less variation in intelligence than most other species, thus producing a relatively egalitarian society. Admittedly, this is something I have close to zero evidence for - I'm mostly using it for fiction - but it would be interesting to see if you've got evidence for or (I guess more likely) against it.