JoshuaZ comments on An inflection point for probability estimates of the AI takeoff? - Less Wrong

11 Post author: Prismattic 29 April 2011 11:37PM




Comment author: JoshuaZ 30 April 2011 05:20:01AM 1 point

Do you mean to say that only something that approximates human intelligence can initiate an "AI takeoff"? If so, can you summarize your reasons for believing that?

That's a valid point, and it exposes a possibly unjustified leap in logic on my part. My thought process (though honestly I haven't thought about it much) is roughly this: any optimizer powerful enough to self-optimize its way into a substantial takeoff will need to predict and interact with its environment well enough that it will effectively have to solve the natural language problem and talk to humans (we are, after all, a major part of its environment until/unless it decides we are redundant). But the justification for this is mostly weak intuition, and the known sample of mind-space is very small, so intuitions informed by that experience should be suspect.

Comment author: TheOtherDave 30 April 2011 11:48:26AM 1 point

(nods) Yeah, agreed.

I would take it further, though. Given that radically different kinds of minds are possible, the odds seem pretty low that the optimal architecture for supporting self-optimization at a given degree of intelligence happens to be something approximately human.

Comment author: NancyLebovitz 30 April 2011 02:16:34PM 1 point

On the other hand, is there any way to think about the odds of humans inventing a program capable of self-optimization which doesn't resemble a human mind?

Comment author: TheOtherDave 30 April 2011 05:18:31PM 0 points

I'm not sure.

I think if I had a better grasp of whether and why humans are (or aren't) capable of building self-optimizing systems at all, I would have a better grasp of the odds of such systems being of any particular type.