davidpearce comments on Decision Theory FAQ - Less Wrong

Post author: lukeprog 28 February 2013 02:15PM


Comment author: davidpearce 18 March 2013 11:43:21AM 0 points [-]

notsonewuser, a precondition of rational agency is the capacity accurately to represent the world. So in a sense, the local witch-doctor, a jihadi, and the Pope cannot act rationally - "rational" relative to their conceptual scheme, perhaps, but still essentially psychotic. Epistemic and instrumental rationality are intimately linked. Thus the growth of science has taken us all the way from a naive geocentrism to Everett's multiverse. Our idealised decision theory needs to reflect this progress.

Unfortunately, trying to understand the nature of first-person facts and subjective agency within the conceptual framework of science is challenging, partly because there seems no place within an orthodox materialist ontology for the phenomenology of experience, but also because one has access only to an extraordinarily restricted set of first-person facts at any instant - the contents of a single here-and-now. Within any given here-and-now, each of us seems to be the centre of the universe; the whole world is centred on one's body-image. Natural selection has designed us - and structured our perceptions - so that one would probably lay down one's life for two of one's brothers or eight of one's cousins, just as kin-selection theory predicts; but one might well sacrifice a small third-world country rather than lose one's child. One's own child seems inherently more important than a faraway country of which one knows little. The egocentric illusion is hugely genetically adaptive. This distortion of perspective means we're also prone to massive temporal and spatial discounting.

The question is whether some first-person facts are really special or ontologically privileged, or deserve more weight simply because they are more epistemologically accessible. Or alternatively, is it a constraint on ideal rational action that we de-bias ourselves?
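The kin-selection arithmetic alluded to here is Hamilton's rule: an altruistic act is favoured by selection when r x B > C, where r is the coefficient of relatedness, B the benefit conferred, and C the cost to the actor. A minimal sketch - the helper function and the life-for-life payoffs are illustrative, not from the comment itself:

```python
# Hamilton's rule: altruism is favoured when r * B > C, with r the
# coefficient of relatedness, B the benefit conferred, C the cost.
# (Illustrative helper; payoffs are counted in whole lives.)

def altruism_favoured(r: float, n_recipients: int, benefit: float, cost: float) -> bool:
    """True when inclusive fitness strictly favours the altruistic act."""
    return r * n_recipients * benefit > cost

# Dying to save relatives (benefit = cost = 1 life):
print(altruism_favoured(r=0.5, n_recipients=2, benefit=1, cost=1))
# two brothers -> 0.5 * 2 = 1.0: exactly break-even, hence the famous quip
print(altruism_favoured(r=0.5, n_recipients=3, benefit=1, cost=1))
# three brothers -> 1.5 > 1: favoured
print(altruism_favoured(r=0.125, n_recipients=9, benefit=1, cost=1))
# nine cousins -> 1.125 > 1: favoured
```

Two brothers (or eight cousins) is precisely the break-even point, which is why the strict inequality comes out false there.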

Granted the scientific world picture, then, can it be rational to take pleasure in causing suffering to other subjects of experience just for the sake of it? After all, you're not a mirror-touch synaesthete. Watching primitive sentients squirm gives you pleasure. But this is my point. You aren't adequately representing the first-person perspectives in question. Representation is not all-or-nothing; representational fidelity is dimensional rather than categorical. Complete fidelity of representation entails perfectly capturing every element of both the formal third-person facts and the subjective first-person facts about the system in question. Currently, none of us yet enjoys noninferential access to other minds - though technology may shortly overcome our cognitive limitations here. I gather your neocortical representations sometimes tend causally to covary with squirming sentients. Presumably, their squirmings trigger the release of endogenous opioids in your hedonic hotspots. You enjoy the experience! (cf. http://news.nationalgeographic.com/news/2008/11/081107-bully-brain.html) But insofar as you find the first-person state of being panic-stricken in any way enjoyable, you have misrepresented its nature. By analogy, a masochist might be turned on watching a video involving ritualised but nonconsensual pain and degradation. The co-release of endogenous opioids within his CNS prevents the masochist from adequately representing what's really happening from the first-person perspective of the victim. The opioids colour the masochist's representations with positive hedonic tone. Or to use another example, stimulate the relevant bit of neocortex with microelectrodes and you will find everything indiscriminately funny (cf. http://news.bbc.co.uk/1/hi/sci/tech/55893.stm) - even your child drowning before your eyes. Why intervene if it's so funny?

Although the funniness seems intrinsic to one's representations, they are misrepresentations to the extent they mischaracterise the first-person experiences of the subject in question. There isn't anything intrinsically funny about a suffering sentient. Rightly or wrongly, I assume that full-spectrum superintelligences will surpass humans in their capacity impartially to grasp first-person and third-person perspectives - a radical extension of the runaway mind-reading prowess that helped drive the evolution of distinctively human intelligence.

So, no, without rewiring your brain, I doubt I can change your mind. But then if some touchy-feely superempathiser says they don't want to learn about quantum physics or Bayesian probability theory, you probably won't change their mind either. Such is life. If we aspire to be ideal rational agents - both epistemically and instrumentally rational - then we'll impartially weigh the first-person and third-person facts alike.

Comment author: notsonewuser 30 March 2013 06:52:04PM 3 points [-]

Hi David,

Thanks for your long reply and all of the writing you've done here on Less Wrong. I only hope you eventually see this.

I've thought more about the points you seem to be trying to make and find myself in at least partial agreement. In addition to your comment that I'm replying to, this comment you made also helped me understand your points better.

Watching primitive sentients squirm gives you pleasure. But this is my point. You aren't adequately representing the first-person perspectives in question. Representation is not all-or-nothing; representational fidelity is dimensional rather than categorical. Complete fidelity of representation entails perfectly capturing every element of both the formal third-person facts and the subjective first-person facts about the system in question.

Just to clarify, you mean that human representation of others' pain is only represented using a (very) lossy compression, am I correct? So we end up making decisions without having all the information about those decisions we are making...in other words, if we computed the cow's brain circuitry within our own brains in enough detail to feel things the way they feel from the perspective of the cow, we obviously would choose not to harm the cow.

So, no, without rewiring your brain, I doubt I can change your mind. But then if some touchy-feely superempathiser says they don't want to learn about quantum physics or Bayesian probability theory, you probably won't change their mind either. Such is life. If we aspire to be ideal rational agents - both epistemically and instrumentally rational - then we'll impartially weigh the first-person and third-person facts alike.

In at least one class of possible situations, I think you are definitely correct. If I were to say that my pleasure in burning ants outweighed the pain of the ants I burned (and thus that such an action was moral), but only because I do not (and cannot, currently) fully empathize with ants, then I agree that I would be making such a claim irrationally. However, suppose I already acknowledge that such an act is immoral (which I do), but still desire to perform it, and also have the choice to have my brain rewired so I can empathize with ants. In that case, I would choose not to have my brain rewired. Call this "irrational" if you'd like, but if that's what you mean by rationality, I don't see why I should be rational, unless that's what I already desired anyways.

The thing which you are calling rationality seems to have a lot more to do with what I (and perhaps many others on Less Wrong) would call morality. Is your sticking point on this whole issue really the word "rational", or is it actually on the word "ideal"? Perhaps burger-choosing Jane is not "ideal"; perhaps she has made an immoral choice.

How would you define the word "morality", and how does it differ from "rationality"? I am not at all trying to attack your position; I am trying to understand it better.

Also, I now plan on reading your work The Hedonistic Imperative. Do you still endorse it?

Comment author: davidpearce 06 April 2013 12:34:25PM *  1 point [-]

notsonewuser, yes, "a (very) lossy compression", that's a good way of putting it - not just burger-eating Jane's lossy representation of the first-person perspective of a cow, but also her lossy representation of her pensioner namesake with atherosclerosis forty years hence. Insofar as Jane is ideally rational, she will take pains to offset such lossiness before acting.

Ants? Yes, you could indeed choose not to have your brain reconfigured so as faithfully to access their subjective panic and distress. Likewise, a touchy-feely super-empathiser can choose not to have her brain reconfigured so she better understands the formal, structural features of the world - or what it means to be a good Bayesian rationalist. But insofar as you aspire to be an ideal rational agent, you must aspire to maximum representational fidelity to the first-person and third-person facts alike. This is a constraint on idealised rationality, not a plea for us to be more moral - although, yes, the ethical implications may turn out to be profound.

The Hedonistic Imperative? Well, I wrote HI in 1995. The Abolitionist Project (2007) (http://www.abolitionist.com) is shorter, more up-to-date, and (I hope) more readable. Of course, you don't need to buy into my quirky ideas on ideal rationality or ethics to believe that we should use biotech and infotech to phase out the biology of suffering throughout the living world.

On a different note, I don't know who'll be around in London next month. But on May 11, there is a book launch of the Springer volume, "Singularity Hypotheses: A Scientific and Philosophical Assessment":

http://www.meetup.com/London-Futurists/events/110562132/?a=co1.1_grp&rv=co1.1

I'll be making the case for imminent biologically-based superintelligence. I trust there will be speakers to put the Kurzweilian and MIRI / Less Wrong perspectives. I fear a consensus may prove elusive. But Springer have commissioned a second volume - perhaps to tie up any loose ends.

Comment author: IlyaShpitser 18 March 2013 12:45:16PM *  2 points [-]

Such is life. If we aspire to be ideal rational agents - both epistemically and instrumentally rational - then we'll impartially weigh the first-person and third-person facts alike.

What are you talking about? If you like utility functions, you don't argue about them (at least not on rationality grounds)! If I want to privilege this or that, I am not being irrational, I am at most possibly being a bastard.

Comment author: davidpearce 18 March 2013 12:59:18PM -1 points [-]

IlyaShpitser, is someone who steals from their own pension fund an even bigger bastard, as you put it? Or irrational? What's at stake here is which preferences or interests to include in a utility function.

Comment author: IlyaShpitser 18 March 2013 01:17:49PM *  3 points [-]

I don't follow you. What preferences I include is my business, not yours. You don't get to pass judgement on what is rational: rationality is just "accounting." We simply consult the math and check whether the number is maximized. At most you can pass judgement on what is moral, but that is a complicated story.
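The "accounting" picture can be made concrete: fix a utility function and probabilistic beliefs, and rational choice reduces to taking the argmax of expected utility. The actions, probabilities, and utilities below are invented purely for illustration:

```python
# Rationality as "accounting": given beliefs (probabilities over
# outcomes) and a utility function, just compute expected utilities
# and pick the maximising action. All numbers are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

beliefs = {
    "cooperate": [(0.9, 10), (0.1, -5)],   # EU = 0.9*10 + 0.1*(-5) = 8.5
    "defect":    [(0.5, 15), (0.5, -10)],  # EU = 0.5*15 + 0.5*(-10) = 2.5
}

best = max(beliefs, key=lambda a: expected_utility(beliefs[a]))
print(best)  # -> cooperate
```

Note that nothing in the computation passes judgement on the utilities themselves - which is exactly IlyaShpitser's point.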

Comment author: davidpearce 18 March 2013 01:38:39PM 0 points [-]

IlyaShpitser, you might perhaps briefly want to glance through the above discussion for some context [But don't feel obliged; life is short!] The nature of rationality is a controversial topic in the philosophy of science (cf. http://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions). Let's just say if either epistemic or instrumental rationality were purely a question of maths, then the route to knowledge would be unimaginably easier.

Comment author: Desrtopa 18 March 2013 10:56:11PM 1 point [-]

Not necessarily if the math is really difficult. There are, after all, plenty of mathematical problems which have never been solved.

Comment author: davidpearce 22 March 2013 05:45:44PM 1 point [-]

True Desrtopa. But just as doing mathematics is harder when mathematicians can't agree on what constitutes a valid proof (cf. constructivists versus nonconstructivists), likewise formalising a normative account of ideal rational agency is harder where disagreement exists over the criteria of rationality.

Comment author: TobyBartels 14 October 2013 03:02:09AM 0 points [-]

True enough, but in this case the math is not difficult. It's only the application that people are arguing about.

Comment author: whowhowho 18 March 2013 01:59:23PM -2 points [-]

You are not going to "do" rationality unless you have a preference for it. And to have a preference for it is to have a preference for other things, like objectivity.

Comment author: IlyaShpitser 19 March 2013 01:12:05AM *  2 points [-]

Look, I am not sure exactly what you are saying here, but I think you might be saying that you can't have Clippy. Clippy worries less about assigning weight to first- and third-person facts, and more about the fact that various atom configurations aren't yet paperclips. I think Clippy is certainly logically possible. Is Clippy irrational? He's optimizing what he cares about.

I think maybe there is some sort of weird "rationality virtue ethics" hiding in this series of responses.

Comment author: khafra 18 March 2013 02:02:54PM 2 points [-]

Sure, it's only because appellatives like "bastard" imply a person with a constant identity through time that we call someone who steals from other people's pension funds a bastard, and someone who steals from his own pension fund stupid or akratic. If we shrunk our view of identity to time-discrete agents making nanoeconomic transactions with future and past versions of themselves, we could call your premature pensioner a bastard; if we grew our view of identity to "all sentient beings," we could call someone who steals from others' pension funds stupid or akratic.

We could also call a left hand tossing a coin thrown by the right hand a thief; or divide up a single person into multiple, competing agents any number of other ways.

However, the choice of assigning a consistent identity to each person is not arbitrary. It's fairly universal, and fairly well-motivated. Persons tend to be capable of replication, and capable of entering into enforceable contracts. None of the other agentic divisions--present/future self, left hand/right hand, or "all sentient beings"--shares these characteristics. And these characteristics are vitally important, because agents that possess them can outcompete others that vie for the same resources, leaving the preferences of those other agents near-completely unsatisfied.

So, that's why LWers, with their pragmatic view toward rationality, aren't eager to embrace a definition of "rationality" that leaves its adherents in the dustbin of history unless everyone else embraces it at the same time.

Comment author: davidpearce 18 March 2013 02:45:57PM -1 points [-]

Pragmatic? khafra, possibly I interpreted the FAQ too literally. ["Normative decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose."] Whether in practice a conception of rationality that privileges a class of weaker preferences over stronger preferences will stand the test of time is clearly speculative. But if we're discussing ideal, perfectly rational agents - or even crude approximations to ideal perfectly rational agents - then a compelling case can be made for an impartial and objective weighing of preferences instead.

Comment author: khafra 18 March 2013 03:34:15PM *  2 points [-]

You're sticking pretty determinedly to "preferences" as something that can be weighed without considering the agent that holds/implements them. But this is prima facie not how preferences work--this is what I mean by "pragmatic." If we imagine an ordering over agents by their ability to accomplish their goals, instead of by "rationality," it's clear that:

  1. A preference held by no agents will only be satisfied by pure chance,

  2. A preference held only by the weakest agent will only be satisfied if it is compatible with the preferences of the agents above it, and

  3. By induction over the whole numbers, any agent's preferences will only be satisfied to the extent that they're compatible with the preferences of the agents above it.
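The induction above can be illustrated with a toy allocation model: agents consume a shared resource in rank order, strongest first, so a weaker agent's preference is satisfied only to the extent the stronger agents leave room for it. The names and numbers are invented for illustration:

```python
# Toy version of the induction: rank-ordered consumption of a shared
# resource. Each agent takes what it wants from whatever remains, so
# satisfaction propagates down the ordering. Illustrative values only.

def allocate(stock: int, ranked_agents):
    """ranked_agents: list of (name, demand) pairs, strongest first."""
    result = {}
    for name, demand in ranked_agents:
        taken = min(demand, stock)  # take your full demand, if it's left
        result[name] = taken
        stock -= taken
    return result

print(allocate(10, [("strong", 6), ("middling", 3), ("weak", 4)]))
# -> {'strong': 6, 'middling': 3, 'weak': 1}: the weakest gets the scraps
```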

As far as I can see, this leaves you with a trilemma:

  1. There is no possible ordering over agents by ability to accomplish goals.

  2. "Rationality" has negligible effect on ability to accomplish goals.

  3. There exists some Omega-agent above all others, whose goals include fulfilling the preferences of weaker agents.

Branch 3 is theism. You seem to be aiming for a position in between branch 1 and branch 2; switching from one position to the other whenever someone attacks the weaknesses of your current position.

Edit: Whoops, also one more, which is the position you may actually hold:

4. Being above a certain, unspecified position in the ordering necessarily entails preferring the preferences of weaker agents. It's obvious that not every agent has this quality; and I can't see any mechanism whereby a preference for the preferences of weaker agents would be forced upon every agent above a certain position in the ordering, except for the Omega-agent. So I think that mechanism is the specific thing you need to argue for, if this is actually your position.

Comment author: Kawoomba 18 March 2013 06:16:12PM 2 points [-]

Well, 'khafra' (if that is even your name), there are a couple caveats I must point out.

  1. Consider two chipmunks living in the same forest, one of them mightier than the other (behold!). Each of them does his best to keep all the seeds to himself (just like the typical LW'er). Yet it does not follow that the mightier chipmunk is able to preclude his rival from gathering some seeds, his advantage notwithstanding.

  2. Consider that for all practical purposes we rarely act in a truly closed system. You are painting a zero-sum game, with the agents' habitat as an arena, an agent-eat-agent world in which truly following a single preference imposes on every aspect of the world. That's true for Clippy, not for chipmunks or individual humans. Apart from rare, typically artificially constructed environments (e.g. games), there was always a frontier to push - possibilities to evade other agents and find a niche that puts you beyond the grasp of other, mightier agents. The universe may be infinite or it mayn't, yet we don't really need to care about it, it's open enough for us. An Omega could preclude us from fulfilling any preferences at all, but just an agent that's "stronger" than us? Doubtful, unless we're introducing Omega in its more malicious variant, Clippy.

  3. Agents may have competing preferences, but what matters isn't centered on their ultima ratio - their maximal theoretical ability to enforce a specific preference - but just as much on their actual willingness to do so, which is why the horn of the trilemma you state as "there is no possible ordering over agents by ability to accomplish goals" is too broad a statement. You may want some ice cream, but not at any cost.

As an example, Beau may wish to get some girl's number, but does not highly prioritize it. He has a higher chance of achieving that goal (let's assume the girl's number is an exclusive resource with a binary semaphore, so no sharing of her number allowed) than Mordog The Terrible, if they valued that preference equally. However, in practice, if Beau didn't invest much effort at all, while Mordog listened to the girl for hours (investing significant time, since he values the number more highly), the weaker agent may yet prevail. No one should ever read this example.

In conclusion, the ordering wouldn't be total; there would be partial (in the colloquial sense) orderings for certain subsets of agents, and the elements of the ordering would be tuples of (agent, preference), without even taking into account temporal changes in power relations.
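A genuinely partial order of the kind described here can be sketched by comparing pairs componentwise, so that some pairs simply fail to be ordered at all - as with Beau and Mordog. The (power, effort) numbers are invented for illustration:

```python
# Pareto dominance on (power, effort) pairs: a beats b only when a is
# at least as great on every component and the two differ. Pairs that
# each win on one component are incomparable. Illustrative values.

def dominates(a, b):
    """True when a Pareto-dominates b."""
    return all(x >= y for x, y in zip(a, b)) and a != b

beau = (7, 1)    # more raw appeal, minimal effort invested
mordog = (3, 9)  # less appeal, hours of attentive listening

print(dominates(beau, mordog), dominates(mordog, beau))
# -> False False: neither dominates, so no total ranking exists
```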

Comment author: khafra 18 March 2013 07:08:59PM 1 point [-]

I did try to make the structure of my argument compatible with a partial order; but you're right--if you take an atomic preference to be something like "a marginal acorn" or "this girl's number" instead of "the agent's entire utility function," we'll need tuples.

As far as temporal changes go, we're either considering you an agent who bargains with Kawoomba-tomorrow for well-restedness vs. staying on the internet long into the night--in which case there are no temporal changes--or we're considering an agent to be the same over the entire span of its personhood, in which case it has a total getting-goals-accomplished rank; even if you can't be certain what that rank is until it terminates.

Comment author: Kawoomba 18 March 2013 07:29:44PM 1 point [-]

Can we even compare utilons across agents? I.e., how can we measure who fulfilled his utility function better - and preferably in such a way that an agent with a nearly empty utility function wouldn't win by default. Such a comparison would be needed to judge who fulfilled the sum of his/her/its preferences better, if we'd like to assign one single measure to such a complicated function. It may not even be computable, unless in a CEV version.

Maybe a higher-up can chime in on that. What's the best way to summon one, say his name thrice or just cry "I need an adult"?

Comment author: davidpearce 18 March 2013 06:02:46PM -1 points [-]

The issue of how an ideal rational agent should act is indeed distinct from the issue of what mechanism could ensure we become ideal rational agents, impartially weighing the strength of preferences / interests regardless of the power of the subject of experience who holds them. Thus if we lived in a (human) slave-owning society, then as white slave-owners we might "pragmatically" choose to discount the preferences of black slaves from our ideal rational decision theory. After all, what is the point of impartially weighing the "preferences" of different subjects of experience without considering the agent that holds / implements them? For our Slaveowners' Decision Theory FAQ, let's pragmatically order over agents by their ability to accomplish their goals, instead of by "rationality." And likewise today with captive nonhuman animals in our factory farms? Hmmm....

Comment author: khafra 18 March 2013 06:46:52PM 2 points [-]

regardless of the power of the subject of experience who holds them.

This is the part that makes the mechanism necessary. The "subject of experience" is also the agent capable of replication, and capable of entering into enforceable contracts. If there were no selection pressure on agents, rationality wouldn't exist, there would be no reason for it. Since there is selection pressure on agents, they must shape themselves according to that pressure, or be replaced by replicators who will.

I don't believe the average non-slave-owning member of today's society is any more rational than the average 19th century plantation owner. It's plausible that a plantation owner who started trying to fulfill the preferences of everyone on his plantation, giving them the same weight as his own preferences, would end up with more of his preferences fulfilled than the ones who simply tried to maximize cotton production--but that's because humans are not naturally cotton maximizers, and humans do have a fairly strong drive to fulfill the preferences of other humans. But that's because we're humans, not because we're rational agents.

Comment author: davidpearce 19 March 2013 05:49:51AM 0 points [-]

khafra, could you clarify? On your account, who in a slaveholding society is the ideal rational agent? Both Jill and Jane want a comfortable life. To keep things simple, let's assume they are both meta-ethical anti-realists. Both Jill and Jane know their slaves have an even stronger preference to be free - albeit not a preference introspectively accessible to our two agents in question. Jill's conception of ideal rational agency leads her impartially to satisfy the objectively stronger preferences and free her slaves. Jane, on the other hand, acknowledges their preference is stronger - but she allows her introspectively accessible but weaker preference to trump what she can't directly access. After all, Jane reasons, her slaves have no mechanism to satisfy their stronger preference for freedom. In other words, are we dealing with ideal rational agency or realpolitik? Likewise with burger-eater Jane and Vegan Jill today.

Comment author: khafra 19 March 2013 04:06:28PM 1 point [-]

On your account, who in a slaveholding society is the ideal rational agent?

The question is misleading, because humans have a very complicated set of goals which include a measure of egalitarianism. But the complexity of our goals is not a necessary component of our intelligence about fulfilling them, as far as we can tell. We could be just as clever and sophisticated about reaching much simpler goals.

let's assume they are both meta-ethical anti-realists.

Don't you have to be a moral realist to compare utilities across different agents?

her slaves have no mechanism to satisfy their stronger preference for freedom.

This is not the mechanism which I've been saying is necessary. The necessary mechanism is one which will connect a preference to the planning algorithms of a particular agent. For humans, that mechanism is natural selection, including kin selection; that's what gave us the various ways in which we care about the preferences of others. For a designed-from-scratch agent like a paperclip maximizer, there is--by stipulation--no such mechanism.