
PK comments on Taboo Your Words - Less Wrong

Post author: Eliezer_Yudkowsky 15 February 2008 10:53PM (71 points)


Comment author: PK 16 February 2008 08:32:18PM 6 points

The game is not over! Michael Vassar said: "[FAI is ...] An optimization process that brings the universe towards the target of shared strong attractors in human high-level reflective aspiration."

For the sake of not dragging out the argument too much, let's assume I know what an optimization process and a human are.

What are "shared strong attractors"? You can't use the words "shared", "strong", "attractor", or any synonyms.

What's a "high-level reflective aspiration"? You can't use the words "high-level", "reflective ", "aspiration" or any synonyms.

Caledonian said: "Then declaring the intention to create such a thing takes for granted that there are shared strong attractors."

We can't really say if there are "shared strong attractors" one way or the other until we agree on what that means. Otherwise it's like arguing about whether falling trees make a "sound" in the forest. We must let the taboo game play out before we start arguing about things.

Comment author: Normal_Anomaly 27 November 2011 10:54:54PM 1 point

Shared strong attractors: values/goals that more than [some percentage] of humans would have at reflective equilibrium.

High-level reflective aspirations: ditto, but without the "[some percentage] of humans" part.

Reflective equilibrium*: a state in which an agent cannot increase its expected utility (ETA: according to its current utility function) by changing its utility function, thought processes, or decision procedure, and has the best available knowledge with no false beliefs.

*IIRC this is a technical term in decision theory, so if the technical definition doesn't match mine, use the former.
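A minimal formalization of these definitions, in LaTeX (the symbols H, RE(h), SSA, and the threshold p are illustrative names, not Normal_Anomaly's):

\[
\mathrm{SSA} \;=\; \{\, v \;:\; |\{\, h \in H : v \in \mathrm{RE}(h) \,\}| \;>\; p \cdot |H| \,\}
\]

where H is the set of all humans, RE(h) is the set of values/goals human h would hold at reflective equilibrium, and p is the unspecified "[some percentage]". High-level reflective aspirations then drop the threshold, i.e. the union of RE(h) over all h in H.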

Comment author: [deleted] 28 November 2011 12:34:53AM 0 points

"a state in which an agent cannot increase its expected utility by changing its utility function"

Surely if you could change your utility function, you could always increase your expected utility that way, e.g. by defining the new utility function to be the old utility function plus a positive constant.
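To spell the objection out (a sketch; U, U', c, and X are illustrative names): shifting the utility function by a positive constant raises its own expectation by linearity, whatever the distribution over outcomes X:

\[
U'(x) = U(x) + c,\quad c > 0 \;\Longrightarrow\; \mathbb{E}[U'(X)] = \mathbb{E}[U(X)] + c \;>\; \mathbb{E}[U(X)].
\]

So if the new function is allowed to grade itself, no agent could ever count as being in reflective equilibrium.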

Comment author: wnoise 28 November 2011 12:37:55AM 1 point

I think Normal_Anomaly means "judged according to the old utility function".

EDIT: Incorrect gender imputation corrected.
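On that reading the constant-shift move gains nothing (same illustrative notation as above): candidate changes are scored by the old function U, and since U' = U + c ranks all outcomes exactly as U does, the agent acts the same either way and its old-function expected utility is unchanged:

\[
\mathbb{E}[\,U(X_{U'})\,] = \mathbb{E}[\,U(X_{U})\,],
\]

where X_U denotes the outcome of acting on utility function U.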

Comment author: Normal_Anomaly 28 November 2011 12:43:48AM 3 points

I do mean that, fixed. By the way, I am female (and support genderless third-person pronouns, FWIW).

Comment author: [deleted] 28 November 2011 10:28:00AM 0 points

Thank you, that makes sense to me now.