Crux comments on What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality? - Less Wrong

17 Post author: bokov 25 September 2013 11:09PM



Comment author: Crux 26 September 2013 01:00:18AM 1 point

What does this mean? Terminal values are techniques by which we predict future phenomena? It doesn't sound like we're talking about values anymore. My only understanding of what it would mean for something to be part of the map is that it would be part of how we model the world, i.e. how we predict future occurrences.

Comment author: chaosmage 26 September 2013 10:00:39AM 0 points

They're theories by which we predict future mental states (such as satisfaction), whether our own or those of others.

Comment author: fubarobfusco 28 September 2013 08:13:50PM -1 points

What does this mean?

The agents that we describe in philosophical or mathematical problems have terminal values. But what confidence have we that these problems map accurately onto the messy real world? To what extent do theories that use the "terminal values" concept accurately predict events in the real world? Do people — or corporations, nations, sub-agents, memes, etc. — behave as if they had terminal values?

I think the answer is "sometimes" at best.

Sometimes humans can be money-pumped or Dutch-booked. Sometimes not. Sometimes humans can end up in situations that look like wireheading, such as heroin addiction or ecstatic religion ... but sometimes they can escape them, too. Sometimes humans are selfish, sometimes spendthrift, sometimes altruistic, sometimes apathetic, sometimes self-destructive. Some humans insist that they know what humans' terminal values are (go to heaven! have lots of rich, smart babies! spread your memes!) but other humans deny having any such values.

Humans are (famously) not fitness-maximizers. I suggest that we are not necessarily anything-maximizers. We are artifacts of an in-progress amoral optimization process (biological evolution), and possibly of others (memetic evolution; the evolution of socioeconomic entities); but we may very well not be optimizers ourselves at all.