dripgrind comments on My true rejection - Less Wrong

-16 Post author: dripgrind 14 July 2011 10:04PM




Comment author: Normal_Anomaly 15 July 2011 12:25:59AM 1 point

If you have a rigorous, detailed theory of Friendliness, you presumably also know that creating an Unfriendly AI is suicide and won't do it. If one competitor in the race doesn't have the Friendliness theory or the understanding of why it's important, that's a serious problem, but I don't see any programmer who understands Friendliness deliberately leaving it out.

Also, what little I know about browser design suggests that, say, supporting the blink tag is an extra chunk of code that gets added on later, possibly with a few deeper changes to existing code. Friendliness, on the other hand, is something built into every part of the system--you can't just leave it out and plan to patch it in later, even if you're clueless enough to think that's a good idea.

Comment author: dripgrind 15 July 2011 12:41:37AM 2 points

OK, what about the case where there's a CEV theory which can extrapolate the volition of all humans, or a subset of them? It's not suicide for you to tell the AI "coherently extrapolate my volition/the shareholders' volition". But it might be hell for the people whose interests aren't taken into account.

Comment author: falenas108 15 July 2011 09:10:51AM *  2 points

At that point, that particular company couldn't build the AI any faster than its competitors, so it becomes a matter of getting an FAI out there first and having it optimize rapidly enough that it could destroy any UFAI that comes along afterward.