simpleton comments on Short versions of the basic premise about FAI - Less Wrong

Post author: NancyLebovitz 31 October 2010 11:14PM




Comment author: nhamann 31 October 2010 11:45:28PM *  2 points

Potentially yes, but I think the problem can be profitably restated without any reference to the Singularity or FOOMing AI. (I've often wondered whether the Friendliness problem would be better recognized and accepted if it were presented without reference to the Singularity.)

Edit: See also Vladimir Nesov's summary, which is quite good, but not quite as short as you're looking for here.

Comment author: simpleton 31 October 2010 11:53:24PM 0 points
Comment author: nhamann 01 November 2010 12:41:14AM *  0 points

I do not understand your point. Would you care to explain?

Comment author: simpleton 01 November 2010 03:13:27AM 1 point

Sorry, I thought that post was a pretty good statement of the Friendliness problem, sans reference to the Singularity (or even any kind of self-optimization), but perhaps I misunderstood what you were looking for.

Comment author: nhamann 01 November 2010 04:22:47AM 0 points

Oh, I misunderstood your link. I agree, that's a good summary of the idea behind the "complexity of value" hypothesis.