Erik3 comments on Nonsentient Optimizers - Less Wrong

Post author: Eliezer_Yudkowsky | 27 December 2008 02:32AM | 16 points

Comment author: Erik3 | 27 December 2008 12:34:53PM | 2 points

The following may or may not be relevant to the point Unknown is trying to make.

I know I could go out and kill a whole lot of people if I really wanted to. I also know, with an assigned probability higher than that of many things I consider certain, that I will never do this.

There is no contradiction between considering certain actions to be within the class of things you could do (your domain of free will, if you wish) and at the same time assigning a practically zero probability to choosing to take them.

I envision an FAI reacting to a proof of its own friendliness with something along the lines of "tell me something I didn't know".

(Do keep in mind that there is no qualitative difference between the cases above; not even a mathematical proof can push a probability to 1. There is always room for mistakes.)
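
One way to make that last point concrete (an illustrative decomposition in my own notation, not the commenter's): write T for "the claim is true" and S for "the proof is sound". Then

    P(T) = P(T | S)·P(S) + P(T | ¬S)·(1 − P(S))

Even granting P(T | S) = 1, this quantity is strictly less than 1 whenever P(S) < 1 and P(T | ¬S) < 1, so residual doubt about the proof itself keeps the overall probability short of certainty.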