
Phil_Goetz6 comments on Engelbart: Insufficiently Recursive - Less Wrong

11 points · Post author: Eliezer_Yudkowsky, 26 November 2008 08:31AM





Comment author: Phil_Goetz6, 27 November 2008 12:23:39AM, 0 points

  Eliezer: all these posts seem to take an awful lot of your time as well as your readers', and they seem to be providing diminishing utility. It seems to me that talking at great length about what the AI might look like, instead of working on the AI, just postpones the eventual arrival of the AI. I think you already understand what design criteria are important, and a part of your audience understands as well. It is not at all apparent that spending your time to change the minds of others (about friendliness etc.) is a good investment, or that it has any impact on when and whether they will change their minds.

As you may have guessed, I think just the opposite. The idea that Eliezer, on his own, can figure out

  1. how to build an AI
  2. how to make an AI stay within a specified range of behavior, and
  3. what an AI ought to do

suggests that somebody has read Ender's Game too many times. These are three gigantic research projects. I think he should work on #2 or #3.

Not doing #1 would mean that it actually matters that he convince other people of his ideas.

I think that #3 is really, really tricky, far beyond the ability of any one person. This blog may be the best chance he'll have to lay out his ideas and get enough intelligent criticism to move from the beginnings he's made to something that might be more useful than dangerous. Instead, he seems to think (and I could be wrong) that the collective intelligence of everyone else here on Overcoming Bias is negligible compared to his own. And that's why I get angry and sometimes rude.

Generalizing from observations of points at the extremes of distributions: when we find a point many standard deviations away from the mean, its position is almost ALWAYS due more to random chance than to the underlying properties of that point. So when we observe a Newton or an Einstein, the largest contributor to their accomplishments was not their intellect but random chance, and if you think you're relying on someone's great intellect, you're really relying on chance.
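The regression-to-the-mean intuition behind this can be checked with a toy simulation. The model below is an illustrative assumption, not something from the comment itself: observed achievement is the sum of an underlying "skill" term and a one-off "luck" term, both standard normal and equally weighted. Selecting the most extreme observed scores then systematically overstates the underlying skill of the people selected.

```python
import random

random.seed(0)
N = 100_000

# Toy model (an assumption for illustration): observed score =
# underlying skill + one-off luck, both standard normal.
people = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
scores = [skill + luck for skill, luck in people]

# Select the extreme outliers: the 10 highest observed scores.
top = sorted(range(N), key=lambda i: scores[i], reverse=True)[:10]

avg_score = sum(scores[i] for i in top) / len(top)
avg_skill = sum(people[i][0] for i in top) / len(top)

# Conditioning on an extreme observed score inflates the luck
# component: the average underlying skill of the top scorers is
# well below their average observed score (here, roughly half,
# since skill and luck have equal variance).
print(f"avg observed score of top 10: {avg_score:.2f}")
print(f"avg underlying skill of top 10: {avg_skill:.2f}")
```

Note that with equal variances, luck contributes about as much as skill to the extreme scores; the stronger claim that chance "almost always" contributes *more* than the underlying property depends on the luck term having the larger variance.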