XiXiDu comments on So You Want to Save the World - Less Wrong

41 Post author: lukeprog 01 January 2012 07:39AM


Comment author: FAWS 28 December 2011 03:22:17PM  0 points

I read Luke as making three claims there, two explicit and one implicit:

  1. If science continues, recursively self-improving AI is inevitable.
  2. Recursively self-improving AI will eventually outstrip human intelligence.
  3. This will happen relatively soon after the AI starts recursively self-improving.

1) Is true as long as there is no infallible outside intervention and recursively self-improving AI is possible in principle. Unless we are talking about things like "there's no such thing as intelligence" or "intelligence is boolean", I don't sufficiently understand what it would even mean for recursive self-improvement to be impossible in principle, so I can't assign probability mass to worlds like that.
It makes sense to assign lower probability to the other two claims, but the "inevitable" part referred to the first claim (which was also the one you quoted when you asked), and I answered for that. Even if I disagreed that it is inevitable, that still seems to be what Luke meant.

Comment author: XiXiDu 28 December 2011 03:51:23PM  2 points

1) Is true as long as there is no infallible outside intervention and recursively self-improving AI is possible in principle...

Stripped of all connotations this seems reasonable. I was pretty sure that he meant to include claims 2 and 3 in what he wrote, and even if he didn't, I thought it would be clear that I meant to ask about the SI definition rather than the most agreeable definition of self-improvement possible.