asciilifeform comments on Why safety is not safe - Less Wrong

Post author: rwallace 14 June 2009 05:20AM




Comment author: asciilifeform 15 June 2009 02:01:09PM, 1 point

How are truly fundamental breakthroughs made?

Usually by accident, by one or a few people. This is a fine example.

ought to be more difficult than building an operating system

I personally suspect that the creation of the first artificial mind will be more akin to a mathematician's "aha!" moment than to a vast pyramid-building campaign. This is simply my educated guess, however, and my sole justification for it is that a number of pyramid-style AGI projects of heroic proportions have been attempted and all failed miserably. I disagree with Lenat's dictum that "intelligence is ten million rules." I suspect that the legendary missing "key" to AGI is something which could ultimately fit on a t-shirt.

Comment author: Z_M_Davis 15 June 2009 02:21:07PM, 5 points

I personally suspect that the creation of the first artificial mind will be more akin to a mathematician's "aha!" moment than to a vast pyramid-building campaign. [...] my sole justification [...] is that a number of pyramid-style AGI projects of heroic proportions have been attempted and failed miserably.

"Reversed Stupidity is Not Intelligence." If AGI takes both deep insight and a pyramid, then we would still expect those pyramid-only projects to fail.

Comment author: asciilifeform 15 June 2009 02:34:23PM, 0 points

Fair enough. It may very well take both.