Multipartite comments on Free to Optimize - Less Wrong

Post author: Eliezer_Yudkowsky 02 January 2009 01:41AM


Comment author: Multipartite 12 November 2011 04:10:59AM

'The second AI helped you more, but it constrained your destiny less.': A very interesting sentence. <nods>

On other points, I note that a commitment to a range of possible actions can be seen as larger in scale than a commitment to a single action, even before the particular action is chosen.

A particular situation that comes to mind, though:

Person X does not know of person Y, but person Y knows of person X. Y has an emotional (or other) stake in a tiebreaking vote that X will cast; Y cannot be present on the day to observe the vote, but sets up a simple machine to detect which vote is made and fire a projectile through X's head if X votes one way rather than the other (nothing happening otherwise).

Let it be given that in every universe in which X votes that certain way, X is immediately killed as a result. It can also safely be assumed that in those universes Y is arrested for murder.

In a certain universe, X votes the other way, but the machine is later discovered. No direct interference with X has taken place, but Y, who set up the machine (pointed at X's head, X's continued life unknowingly dependent on X's vote), is presumably guilty of a felony of some sort (which one, though, I wonder?).

Regardless of motivation, having committed to potentially carrying out a certain act against X is treated as similarly serious to having in fact carried it out (or attempted to carry it out).

(This, granted, may focus on a concept within the above article without addressing the entire issue of planning another entity's life.)