Giles comments on What does the world look like, the day before FAI efforts succeed?

23 points | Post author: michaelcurzi 16 November 2012 08:56PM

Comment author: Giles 17 November 2012 10:23:35PM 5 points

Another dimension: value discovery.

  • Fantastic: There is a utility function representing human values (or a procedure for determining such a function) that most people (including people with a broad range of expertise) are happy with.
  • Pretty good: Everyone's values are different (and often contradict each other), but there is broad agreement as to how to aggregate preferences (one toy aggregation rule is sketched after this list). Most people accept that an FAI needs to respect the values of humanity as a whole, not just their own.
  • Sufficiently good: Many important human values contradict each other, with no "best" solution to those conflicts. Most people agree on the need for a compromise but quibble over how that compromise should be reached.
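
To make "broad agreement as to how to aggregate preferences" concrete, here is a minimal sketch of one toy rule, assuming range-normalized utilities averaged across people. The people, outcomes, and numbers are hypothetical, and this is only one of many possible rules; it sidesteps hard problems such as interpersonal utility comparison and strategic misreporting.

    # Toy aggregation rule (a sketch, not a proposal): range-normalize each
    # person's utilities over the candidate outcomes, then average.

    def normalize(utilities):
        """Rescale one person's utilities over all outcomes to [0, 1]."""
        lo, hi = min(utilities.values()), max(utilities.values())
        if hi == lo:  # indifferent between all outcomes
            return {o: 0.0 for o in utilities}
        return {o: (u - lo) / (hi - lo) for o, u in utilities.items()}

    def aggregate(profiles):
        """Average each outcome's normalized utility across all people."""
        normed = [normalize(p) for p in profiles]
        return {o: sum(n[o] for n in normed) / len(normed)
                for o in normed[0]}

    # Hypothetical data: three people scoring three candidate outcomes.
    profiles = [
        {"A": 10, "B": 0, "C": 5},
        {"A": 0, "B": 8, "C": 8},
        {"A": 3, "B": 3, "C": 9},
    ]
    scores = aggregate(profiles)
    print(max(scores, key=scores.get))  # "C" wins the compromise here
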
Comment author: loup-vaillant 18 November 2012 01:35:41PM 2 points

I'm tempted to add:

  • Not so good: the FAI team, or one team member, takes over the world. (Imagine an Infinite Doom spell done right.)

Comment author: ciphergoth 24 November 2012 10:03:55AM 1 point

I would much rather see any single human being's values take over the future light cone than a paperclip maximizer's!

Comment author: loup-vaillant 25 November 2012 11:09:28AM 0 points

So would I. It's not so good, but it's not so bad either.

Comment author: hankx7787 18 November 2012 12:50:28AM -2 points

Agree with your Fantastic, but disagree with how you rank the others... it wouldn't be rational to favor a solution that satisfies others' values more at the cost of satisfying one's own values less. If the solution is less than Fantastic, I'd rather see one that favors more heavily the subset of humanity whose values are similar to my own, and less heavily the subset whose values diverge from mine.

I know, I'm a damn, dirty, no-good egoist. But you have to admit that in principle egoism is more rational than altruism.

Comment author: Giles 18 November 2012 02:34:44AM 0 points

OK, I wasn't too sure how these ones should be worded.