AlexanderRM comments on Money: The Unit of Caring - Less Wrong

95 Post author: Eliezer_Yudkowsky 31 March 2009 12:35PM

Comment author: AlexanderRM 15 November 2014 02:43:12AM 2 points

That phrasing certainly sounds like ad hoc rationalization. The rational (rationalist?*) way to go about that would be to... recognize that you attach value to some things associated with freedom from employment, try to figure out what exactly that is and quantify it while ignoring what your current actions are, and then determine whether your current actions are consistent with that, and change them if not. If you determine your values based on what your current actions are, there's no point in being rational.

*I have a vague feeling that "rational" should mean "the way a hypothetical rational actor, such as an AI built for rationality, would act", and "rationalist" should mean "the way a human who recognizes that their brain is not built for rationality, and actively tries to overcome this, would act". An AI built to be rational would never need to do this, because its behavior would already follow logically from its values. I don't remember why I put in this note, but it's an interesting thing about this site generally.

Comment author: Larks 15 November 2014 06:20:10PM 0 points

Good point. CronoDAS's other comments suggest a desire to be free from commitments in general.

Also, welcome to LessWrong!