
Comment author: loup-vaillant 25 January 2012 09:26:18PM 2 points

I really should have taken five minutes to ponder it. You've convinced me; your choice is the better one.

But now that I think of it, I have another suggestion: « Affronter la Singularité » ("Confront the Singularity"), which, while still fairly close to the original meaning, may be even catchier. The catch is that this verb is more violent: it depicts the Singularity as something scary.

I'll take some time to review your translation. If you want to discuss it in private, I'm easy to find. (By the way, I have a pending translation of "The Sword of Good". Would you, or someone else, review it for me?)

Comment author: Florent_Berthet 25 January 2012 10:34:02PM 0 points

"Affronter la Singularité" is a good suggestion but like you said it's a bit aggressive. I wish we had a better word for "Facing" but I don't think the french language has one.

I'd gladly review your translation; check your email.

Comment author: loup-vaillant 25 January 2012 04:59:30PM 0 points

The title should probably be "Faire face à la singularité" (French titles aren't usually capitalized word by word, so no capital "S" on "singularité").

I gathered that "Facing the Singularity" was meant to convey a sense of action. "Face à la singularité", on the other hand, is rather passive, as if the singularity were a given, something imposed on us.

(Note: I'm a native French speaker.)

Comment author: Florent_Berthet 25 January 2012 07:52:53PM 1 point

Translator of the articles here.

I actually pondered the two options at the very beginning of my work, and both seemed equally good to me. "Face à la singularité" means something like "In front of the singularity", while "Faire face à la singularité" is indeed closer to "Facing the Singularity". But the first one sounds better in French (and is catchier), which is why I chose it. It is a little less action-oriented, but it doesn't necessarily imply passivity.

I wouldn't mind switching to the second option, though; it's a close call. Maybe other French speakers could give their opinion?

About the capital "S" in "Singularity": it's also a matter of preference. I used it to emphasize that we are not talking about just any singularity (a mathematical one, for example), but it could go either way too. (I just checked the French Wikipedia page on the technological singularity, and it's written with a capital "S" about half the time...)

Other remarks are welcome.

Comment author: DanArmak 11 August 2009 04:01:16PM 1 point

What you describe are hedons, and it's misleading to call them utilons. For rational (as opposed to human) agents, utilons are the units of value of the utility function they try to maximize. But humans don't try to maximize hedons, so hedons are not human-utilons.
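To make the distinction concrete, here is a minimal sketch in Python of what "maximizing a utility function" means for a rational agent. The actions and their scores are invented for illustration; nothing requires the function to reward the agent's own pleasure:

    # A rational agent's "utilons" are just the outputs of its utility
    # function; the agent picks whichever available action scores highest.
    # The outcomes and scores below are illustrative assumptions, not data.

    def utility(outcome):
        # A hypothetical utility function: it may value things (knowledge,
        # others' welfare) that have nothing to do with the agent's pleasure.
        scores = {
            "eat cake": 2.0,       # high hedons, low long-term value
            "exercise": 5.0,       # low hedons now, higher utility overall
            "help a friend": 7.0,  # possibly zero hedons for the agent
        }
        return scores[outcome]

    def choose(actions):
        # Utility maximization, the defining behavior of a rational agent:
        # maximize utilons, which need not coincide with hedons.
        return max(actions, key=utility)

    print(choose(["eat cake", "exercise", "help a friend"]))  # help a friend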

Comment author: Florent_Berthet 11 August 2009 05:12:09PM 1 point

Then would you agree that any utility function should, in the end, maximize hedons (if we were rational agents, that is)? If so, that would mean hedons are the goal and utilons are a tool, a sub-goal, which doesn't seem to be what the OP was saying.

In response to Utilons vs. Hedons
Comment author: DanArmak 10 August 2009 10:30:55PM 13 points

This discussion has made me feel I don't understand what "utilon" really means. Hedons are easy: clearly happiness and pleasure exist, so we can try to measure them. But what are utilons?

  • "Whatever we maximize"? But we're not rational, quite inefficient, and whatever we actually maximize as we are today probably includes a lot of pain and failures and isn't something we consciously want.

  • "Whatever we self-report as maximizing"? Most of the time this is very different from what we actually try to maximize in practice, because self-reporting is signaling. And for a lot of people it includes plans or goals that, when achieved, are likely (or even intended) to change their top-level goals drastically.

  • "If we are asked to choose between two futures, and we prefer one, that one is said to be of higher utility." That's a definition, yes, but it doesn't really prove that the collection-of-preferred-universes can be described any more easily than the real decision function of which utilons are supposed to be a simplification. For instance, what if by minor and apparently irrelevant changes in the present, I can heavily influence all of people's preferences for the future?

Also a note on the post:

"Akrasia is what happens when we maximize our hedons at the expense of our utilons."

That definition feels too broad to me. Typically akrasia has two further attributes:

  • Improper time discounting: we don't spend an hour a day exercising, even though we believe it would make us lose weight, with a huge hedonic payoff if we maximize hedons over a time horizon of a year (see the sketch after this list).

  • Feeling so bad about not doing the necessary task that we don't really enjoy ourselves no matter what we do instead (which frequently leads to doing nothing for long periods). Hedonically, even doing the homework usually feels a lot better (after the first ten minutes) than putting it off, and we know this from experience; but we just can't get started!
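To illustrate the time-discounting point, here is a rough sketch with toy numbers; the payoffs and the discount parameter k are assumptions, and hyperbolic discounting of the form 1/(1 + k·t) is the usual model of this bias:

    # Toy illustration of the time-discounting bullet above. All payoffs
    # and the discount parameter k are assumptions chosen for illustration.

    def hyperbolic(value, delay_days, k=0.1):
        # Hyperbolic discounting: a delayed reward shrinks as 1 / (1 + k*t).
        return value / (1 + k * delay_days)

    # Exercising: a small immediate cost, a large payoff ~180 days out.
    exercise = -1 + hyperbolic(100, 180)  # about 4.3
    # Skipping it: a small immediate pleasure, no delayed payoff.
    skip = 2

    print(exercise > skip)  # True: with k=0.1 the delayed payoff still wins

    # Steepen the discount and the hour on the couch wins, even though the
    # undiscounted sum over a year clearly favors exercising:
    print(-1 + hyperbolic(100, 180, k=1.0) > skip)  # False: about -0.4 vs 2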

Comment author: Florent_Berthet 11 August 2009 03:39:46PM -1 points

Has anybody ever proposed a way to value utilons?

It would be easier to discuss them if we knew exactly what they can mean, more precisely than just the "unit of utility" definition. For example, how should we handle them over time?

So why not define them with something like this:

Suppose we could precisely measure a person's instantaneous happiness on a linear scale from 1 to 10, with 1 being the worst pain imaginable and 10 the best of climaxes. This level varies constantly, for everybody. In this context, one utilon could be the value of an action that increases a person's happiness by one point on this scale for one hour.

Then, for example, if you help an old lady cross the road, making her a bit happier for the next hour (say she would have been around 6/10 happy, but thanks to you she will be 6.5/10 happy during that hour), your action has a utility of half a utilon. You just created 0.5 utilons, and that's a perfectly valid statement. Isn't that great?

On this definition, a hedon is nothing more than a utilon we create by raising our own happiness.
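Under that proposed definition the arithmetic is simple enough to write down. A minimal sketch, using the 1-to-10 scale and the numbers assumed above:

    # Utilons under the proposed definition: (increase in happiness level
    # on the 1-10 scale) x (duration in hours). A hedon is then the special
    # case where the person made happier is yourself.

    def utilons(level_before, level_after, hours):
        return (level_after - level_before) * hours

    # The old-lady example: 6/10 -> 6.5/10 for one hour.
    print(utilons(6.0, 6.5, 1.0))  # 0.5 utilons

    # A hedon on this account: raising your own happiness,
    # e.g. from 5/10 to 7/10 for half an hour.
    print(utilons(5.0, 7.0, 0.5))  # 1.0 utilon, self-directed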