Stuart_Armstrong comments on Arguments against the Orthogonality Thesis - Less Wrong

-7 Post author: JonatasMueller 10 March 2013 02:13AM




Comment author: Stuart_Armstrong 03 January 2014 12:36:34PM 0 points

If OT is false, then only some (or no) combinations of goals and intelligence are possible. Oracle AI could still fall within that limited set of combinations.

The argument is that given an Oracle and an entity of limited intelligence that has goal G, we can construct a superintelligent being with goal G by having the limited intelligence ask the Oracle how to achieve G.

You are possibly the first person in the world to think that morality has something to do with your copies.

Negotiating with your copies is the much easier version of negotiating with other people.

Comment author: TheAncientGeek 03 January 2014 01:44:03PM -1 points

The argument is that given an Oracle and an entity of limited intelligence that has goal G, we can construct a superintelligent being with goal G by having the limited intelligence ask the Oracle how to achieve G.

But it still might not be possible, in which case the Oracle will not be of help. That scenario only removes difficulties due to limited intelligence on the builder's part.

Negotiating with your copies is the much easier version of negotiating with other people.

I don't have any copies I can interact with, so how can it be easy?

I still don't see the big problem with MR. In other conversations, people have put it to me that MR is impossible because it is impossible to completely satisfy everyone's preferences. It is indeed impossible to completely satisfy everyone's preferences, but that is not something MR requires. It is fairly obvious that morality in general requires compromises and sacrifices, since we see that happening all the time in the real world.