nhamann comments on Short versions of the basic premise about FAI - Less Wrong

3 Post author: NancyLebovitz 31 October 2010 11:14PM




Comment author: nhamann 31 October 2010 11:45:28PM 2 points

Potentially yes, but I think the problem can be profitably restated without any reference to the Singularity or FOOMing AI. (I've often wondered whether the Friendliness problem would be better recognized and accepted if it was presented without reference to the Singularity).

Edit: See also Vladimir Nesov's summary, which is quite good, but not quite as short as you're looking for here.

Comment author: NancyLebovitz 01 November 2010 12:14:01AM 1 point

Friendliness would certainly be worth pursuing: it applies to a lot of human issues in addition to what we want from computer programs.

Still, concerns about FOOM are the source of urgency here.

Comment author: NihilCredo 01 November 2010 03:04:52AM 7 points

Concerns about FOOM are also what makes SIAI look like (and some posters talk like) a loony doom cult.

Skip the "instant godlike superintelligence with nanotech arms" shenanigans, and AI ethics still remains an interesting and important problem, as you observed.

But it's much easier to get people to look at an interesting problem so you can then persuade them that it's serious, than it is to convince them that they are about to die in order to make them look at your problem. Especially since modern society has so inured people to apocalyptic warnings that the wiser half of the population takes them with a few kilograms of salt to begin with.

Comment author: simpleton 31 October 2010 11:53:24PM 0 points

Comment author: nhamann 01 November 2010 12:41:14AM 0 points

I do not understand your point. Would you care to explain?

Comment author: simpleton 01 November 2010 03:13:27AM 1 point

Sorry, I thought that post was a pretty good statement of the Friendliness problem, sans reference to the Singularity (or even any kind of self-optimization), but perhaps I misunderstood what you were looking for.

Comment author: nhamann 01 November 2010 04:22:47AM 0 points

Oh, I misunderstood your link. I agree, that's a good summary of the idea behind the "complexity of value" hypothesis.