
Gunnar_Zarncke comments on Truth vs Utility - Less Wrong Discussion

Post author: Qwake 13 August 2014 05:45AM


Comments (29)


Comment author: Gunnar_Zarncke 13 August 2014 08:33:43AM 2 points

This cries out for a poll. To make this a more balanced question, I changed the "simulation" variant into something more 'real':

Suppose Omega, a supercomputer, comes down to Earth to offer you a choice:


Comment author: Creutzer 13 August 2014 10:45:45AM 3 points

Granting the question's premise that we have a utility function, you have just defined option 1 as the rational choice.

Comment author: Slider 13 August 2014 12:03:25PM 1 point

Indeed, in my mind it collapses into a question of:

Would you rather


Comment author: Slider 20 August 2014 07:08:11PM 0 points

I expected the cross options to receive no votes. But they did. That means it was not an equivalent question and some of the details did matter. I would appreciate a clarification of what those details were.

Comment author: Nornagest 20 August 2014 07:37:41PM *  0 points

Yeah, granted that premise and given that maximizing utility may very well involve telling you stuff, option 2 seems to imply one of the following:

  • you don't trust Omega
  • you don't trust your utility function
  • you have objections (other than trust) to accepting direct help from an alien supercomputer

The second of these possibilities seems the most compelling; we aren't Friendly in a strong sense. Depending on Omega's idea of your utility function, you can argue that maximizing it would be a disaster from a more general perspective: either because you think your utility function is hopelessly parochial and likely to need modification once we better understand metaethics and fun theory, or because you don't think you're really all that ethical at whatever level Omega is going to be looking at. The latter is almost certainly true, and the former at least seems plausible.

Comment author: Gunnar_Zarncke 13 August 2014 01:24:15PM 0 points

Judging from the votes, that doesn't seem to be the case. I guess the options are still not phrased precisely enough. Probably "utility" needs to be made clearer.