davidpearce comments on Decision Theory FAQ - Less Wrong

52 Post author: lukeprog 28 February 2013 02:15PM




Comment author: davidpearce 13 March 2013 09:02:55PM 1 point

Tim, in practice, yes. But this is as true in physics as in normative decision theory. Consider the computational challenges faced by, say, a galactic-sized superintelligence spanning 100,000-odd light years and googols of quasi-classical Everett branches.

[yes, you're right about definitions - but I hadn't intended to set out a rival Decision Theory FAQ. As you've probably guessed, all that happened was my vegan blood pressure rose briefly a few days ago when I read burger-choosing Jane being treated as a paradigm of rational agency.]

Comment author: timtyler 16 March 2013 09:40:57PM -2 points

Tim, in practice, yes. But this is as true in physics as in normative decision theory. [...]

That's what I mean. It kinda sounds as though you are arguing against physics.

Comment author: davidpearce 16 March 2013 10:47:39PM 0 points

Tim, on the contrary, I was arguing that in weighing how to act, the ideal rational agent should not invoke privileged reference frames. Egocentric Jane is not an ideal rational agent.

Comment author: timtyler 17 March 2013 11:19:57AM -1 points

Embodied agents can't avoid "privileged reference frames", though. They are - to some degree - out of touch with events distant from them. The bigger the agent gets, the more this becomes an issue. It becomes technically challenging for Jane to take account of Jill's preferences when Jill is far away - ultimately because of locality in physics. Without a god, a "god's eye view" is not very realistic. It sounds as though your "ideal rational agent" can't be embodied.

Comment author: davidpearce 17 March 2013 12:15:48PM -1 points

Tim, an ideally rational embodied agent may prefer no suffering to exist outside her cosmological horizon; but she is not rationally constrained to take such suffering - or the notional preferences of sentients in other Hubble volumes - into consideration before acting. This is because nothing she does as an embodied agent will affect such beings. By contrast, the interests and preferences of local sentients fall within the scope of embodied agency. Jane must decide whether the vividness and immediacy of her preference for a burger, when compared to the stronger but dimly grasped preference of a terrified cow not to have her throat slit, disclose some deep ontological truth about the world or a mere epistemological limitation. If she's an ideal rational agent, she'll recognise the latter and act accordingly.

Comment author: timtyler 17 March 2013 05:30:18PM 1 point

The issue isn't just about things beyond cosmological horizons. All distances are involved. I can help my neighbour more easily than I can help someone from half-way around the world. The distance involved imposes costs relating to sensory and motor signal propagation. For example, I can give my neighbour 10 bucks and be pretty sure that they will receive it.

Of course, there are also other, more important reasons why real agents don't respect the preferences of others. Egocentricity is caused more by evolution than by simple physics.

Lastly, I still don't think you can hope to use the term "rational" in this way. It sounds as though you're talking about some kind of supermorality to me. "Rationality" means something too different.

Comment author: whowhowho 17 March 2013 06:21:51PM 1 point

Rationality doesn't have to mean morality in order to have implications for morality: since you can reason about just about anything, rationality has implications for just about everything.

Comment author: davidpearce 17 March 2013 06:17:08PM 0 points

Tim, all of the above is indeed relevant to the decisions taken by an idealised rational agent. I just think a solipsistic conception of rational choice is irrational and unscientific. Yes, as you say, natural selection goes a long way towards explaining our egocentricity. But just because evolution has hardwired a fitness-enhancing illusion doesn't mean we should endorse the egocentric conception of rational decision-making that the illusion promotes. Adoption of a God's-eye view does entail a different conception of rational choice.

Comment author: timtyler 17 March 2013 07:07:01PM 2 points

I just think a solipsistic conception of rational choice is irrational and unscientific.

Surely that grossly mischaracterises the position you are arguing against. Egoists don't think that other agents don't have minds. They just care more about themselves than others.

But just because evolution has hardwired a fitness-enhancing illusion doesn't mean we should endorse the egocentric conception of rational decision-making that illusion promotes.

Again, this seems like very prejudicial wording. Egoists aren't under "a fitness-enhancing illusion". Illusions involve distortion of the contents of the senses during perception. Nothing like that is involved in egoism.

Comment author: davidpearce 17 March 2013 07:49:21PM -1 points

There are indeed all sorts of specific illusions, for example mirages. But natural selection has engineered a generic illusion that maximised the inclusive fitness of our genes in the ancestral environment: the illusion that one is located at the centre of the universe. I live in a DP-centred virtual world focused on one particular body-image, just as you live in a TT-centred virtual world focused on a different body-image. I can't think of any better way to describe this design feature of our minds than as an illusion. No doubt an impartial view from nowhere, stripped of distortions of perspective, would have been genetically maladaptive on the African savannah. But this doesn't mean we need to retain the primitive conception of rational agency that such systematic bias naturally promotes.

Comment author: timtyler 17 March 2013 09:44:06PM 2 points

Notice that a first person perspective doesn't necessarily have much to do with adaptations or evolution. If you build a robot, it too is at the centre of its world - simply because that's where its sensors and actuators are. This makes maximizing inclusive fitness seem like a bit of a side issue.

Calling what is essentially a product of locality an "illusion" still seems very odd to me. We really are at the centre of our own perspectives on the world. That isn't an illusion; it's a fact.