
Comment author: jmh 05 September 2017 11:36:48AM 0 points [-]

Would it be correct to define selfish utility as sociopathic?

Comment author: Stuart_Armstrong 05 September 2017 06:57:20PM *  0 points [-]

The problem with selfish utility is that even selfish agents are assumed to care about themselves at different moments in time. In a world where copying happens, this is underdefined, so "selfish" has multiple possible definitions.

Comment author: turchin 03 September 2017 12:25:37PM *  0 points [-]

I agree with this: "yes. You are both. And you currently control the actions of both. It is not meaningful to ask 'which' one you are."

But I have the following problem: what if the best course of action for me depends on whether I am a Boltzmann brain or a real person? It looks like I still have to update according to which group is larger: the real me or the Boltzmann brain me.

It also looks like we use "all decision computation processes like mine" as something like what I earlier called a "natural reference class". And in the case of the DA, it is all beings who think about the DA.

Comment author: Stuart_Armstrong 03 September 2017 01:44:33PM 0 points [-]

I'll deal with the non-selfish case, which is much easier.

In that case, Earth you and Boltzmann brain you have the same objectives. And most of the time, these objectives make "Boltzmann brain you" irrelevant, as their actions have no consequences (one exception could be "ensure everyone has a life that is on average happy", in which case Earth you should try to always be happy, for the sake of the Boltzmann brain yous). So most of the time, you can just ignore Boltzmann brains in ADT.

Yes, that is a natural reference class in ADT (note that it's a reference class of agent-moments making decisions, not of agents in general; it's possible that someone else is in your reference class for one decision, but not for another).

But "all beings who think about DA" is not a natural reference class, as you can see when you start questioning it ("to what extent do they think about DA? Under what name? Does it matter what conclusions they draw?...)

Comment author: Manfred 03 September 2017 01:06:31AM *  0 points [-]

That's not quite what I was talking about, but I managed to resolve my question to my own satisfaction anyhow. The problem of conditionalization can be worked around fairly easily.

Suppose that there is a 50% chance of there being a Boltzmann brain copy of you

Actually, the probability that you should assign to there being a copy of you is not defined under your system - otherwise you'd be able to conceive of a solution to the sleeping beauty problem - the entire schtick is that Sleeping Beauty is not merely ignorant about whether another copy of her exists, but that it is supposedly a bad question.

Hm, okay, I think this might cause trouble in a different way than I was originally thinking of. Because all sorts of things are possibilities, and it's not obvious to me how ADT is able to treat reasonable anthropic possibilities differently from astronomically unlikely ones, if it throws out any measure of unlikeliness. You might try to resolve this by putting in some "outside perspective" probabilities, e.g. that an outside observer in our universe would see me as normal most of the time and as a Boltzmann brain less of the time, but this requires making drastic assumptions about what the "outside observer" is actually outside of, observing. If I really were a Boltzmann brain in a thermal universe, an outside observer would think I was more likely to be a Boltzmann brain. So postulating an outside perspective is just an awkward way of sneaking in probabilities gained in a different way.

This seems to leave the option of really treating all apparent possibilities similarly. But then the benefit of good actions in the real world gets drowned out by the noise from all the unlikely possibilities - after all, for every action, one can construct a possibility where it's good and another where it's bad. If there's no way to break ties between possibilities, no ties get broken.

Comment author: Stuart_Armstrong 03 September 2017 09:47:21AM 0 points [-]

Actually, the probability that you should assign to there being a copy of you is not defined under your system - otherwise you'd be able to conceive of a solution to the sleeping beauty problem

Non-anthropic ("outside observer") probabilities are well defined in the sleeping beauty problem - the probability of heads/tails is exactly 1/2 (most of the time, you can think of these as the SSA probabilities over universes - the only difference being in universes where you don't exist at all). You can use a universal prior or whatever you prefer; the "outside observer" doesn't need to observe anything or be present in any way.

I note that you need these initial probabilities in order for SSA or SIA to make any sense at all (pre-updating on your existence), so I have no qualms claiming them for ADT as well.

Comment author: turchin 02 September 2017 08:56:41PM 0 points [-]

I still think that this explanation fails the criterion "explain as if I am 5". I copy below my comment, in which I try to construct a clearer example of ADT reasoning for a civilization which is at risk of extinction, and which you said is, in fact, a presumptuous philosopher variant (I hope to create an example which is applicable to our world situation):

Imagine that there are 1000 civilizations in the Universe, and 999 of them will go extinct in their early stage. The one civilization which will not go extinct could survive only if it spends billions of billions on a large prevention project. Each civilization independently developed the DA argument at its early stage and concluded that the probability of Doom is almost 1. Each civilization has two options at its early stage:

1) Start partying, trying to get as much utility as possible before the inevitable catastrophe.

2) Ignore the anthropic update and go all in on a desperate attempt at catastrophe prevention.

If we choose option 1, then all other agents with decision processes similar to ours will come to the same conclusion, and even the civilization which was able to survive will not attempt to survive; as a result, all intelligent life in the universe will die off.

If we choose 2, we will most likely fail anyway, but one of the civilizations will survive.

The choice depends on our utilitarian perspective: if we are interested only in our own civilization's well-being, option 1 will give us higher utility, but if we care about the survival of other civilizations, we should choose 2, even if we believe that the probability is against us.
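A rough numerical sketch of that comparison (all of the utility figures, the cost, and the variable names below are made up purely for illustration; they are not part of the original argument):

```python
# Hypothetical numbers for the 1000-civilization example.
N_CIVS = 1000
P_WE_ARE_SURVIVABLE = 1 / N_CIVS  # chance that ours is the one civilization able to survive

PARTY_UTILITY = 1.0         # value of partying until the catastrophe
PREVENTION_COST = 10.0      # "billions of billions" spent on the prevention project
OWN_SURVIVAL_VALUE = 100.0  # value to us of our own civilization surviving
ANY_SURVIVAL_VALUE = 50.0   # extra value to us of *some* civilization surviving

# Every agent with a decision process like ours makes the same choice.

# Option 1: everyone parties; no civilization survives.
selfish_party = PARTY_UTILITY
caring_party = PARTY_UTILITY            # no surviving civilization to care about

# Option 2: everyone attempts prevention; the one survivable civilization makes it.
selfish_prevent = -PREVENTION_COST + P_WE_ARE_SURVIVABLE * OWN_SURVIVAL_VALUE
caring_prevent = selfish_prevent + ANY_SURVIVAL_VALUE  # someone survives for sure

print(selfish_party, selfish_prevent)   # 1.0  -9.9  -> civilization-only view: party
print(caring_party, caring_prevent)     # 1.0  40.1  -> caring about others: prevent
```

With these made-up numbers, the civilization-only calculation favours option 1, while adding the term for any civilization surviving flips the decision to option 2.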

Comment author: Stuart_Armstrong 02 September 2017 09:28:03PM 0 points [-]

in which I try to construct a clearer example of ADT reasoning for a civilization which is at risk of extinction, and which you said is, in fact, a presumptuous philosopher variant (I hope to create an example which is applicable to our world situation)

I do not think there is a sensible ADT DA that can be constructed for reasonable civilizations. In ADT, only agents with weird utilities, such as average utilitarians, have a DA.

SSA has a DA. ADT has an SSA-ish agent, namely the average utilitarian. Therefore, ADT must have a DA. I constructed it. And it turns out that the ADT DA constructed this way has no real doom aspect to it; it has behaviour that looks like avoiding doom, but only for agents with strange preferences. ADT does not have a DA with teeth.

Simplified Anthropic Doomsday

1 Stuart_Armstrong 02 September 2017 08:37PM

Here is a simplified version of the Doomsday argument in Anthropic decision theory, to get easier intuitions.

Assume a single agent A exists, an average utilitarian, with utility linear in money. Their species survives with 50% probability; denote this event by S. If the species survives, there will be 100 people total; otherwise the average utilitarian is the only one of its kind. An independent coin lands heads with 50% probability; denote this event by H.

Agent A must price a coupon CS that pays out €1 on S, and a coupon CH that pays out €1 on H. The coupon CS pays out only on S, so its reward only exists in a world where there are a hundred people; thus if S happens, CS is worth (€1)/100 to the average utilitarian. Hence its expected worth is (€1)/200 = (€2)/400.

But H is independent of S, so (H,S) and (H,¬S) both have probability 25%. In (H,S), there are a hundred people, so CH is worth (€1)/100. In (H,¬S), there is one person, so CH is worth (€1)/1=€1. Thus the expected value of CH is (€1)/4+(€1)/400 = (€101)/400. This is more than 50 times the value of CS.

Note that C¬S, the coupon that pays out on doom, has an even higher expected value of (€1)/2=(€200)/400.

So, H and S have identical probability, but A assigns CS and CH different expected utilities, with a higher value to CH, simply because S is correlated with survival and H is independent of it (and A assigns an even higher value to C¬S, which is anti-correlated with survival). This is a phrasing of the Doomsday Argument in ADT.
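Here is a minimal sketch of the same coupon calculation in code (the helper function and its name are mine, added for illustration, not part of the post):

```python
# Expected value of a coupon to an average utilitarian with utility linear in money:
# each payout is divided by the number of people in that world, then weighted by
# the world's probability.
def expected_value(worlds):
    """worlds: list of (probability, payout in euros, population) tuples."""
    return sum(p * payout / population for p, payout, population in worlds)

C_S    = expected_value([(0.5, 1, 100),    # S: survival, 100 people, coupon pays
                         (0.5, 0, 1)])     # not-S: no payout
C_H    = expected_value([(0.25, 1, 100),   # (H, S): 100 people
                         (0.25, 1, 1),     # (H, not-S): 1 person
                         (0.5, 0, 1)])     # not-H: no payout
C_notS = expected_value([(0.5, 0, 100),    # S: no payout
                         (0.5, 1, 1)])     # not-S: doom, 1 person, coupon pays

print(C_S, C_H, C_notS)  # 0.005 (=1/200), 0.2525 (=101/400), 0.5 (=1/2)
```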

Comment author: turchin 02 September 2017 03:26:29PM *  1 point [-]

I created a practical example which demonstrates to me the correctness of your point of view, as I understand it.

Imagine that there are 1000 civilizations in the Universe, and 999 of them will go extinct in their early stage. The one civilization which will not go extinct could survive only if it spends billions of billions on a large prevention project.

Each civilization independently developed the DA argument at its early stage and concluded that the probability of Doom is almost 1. Each civilization has two options at its early stage:

1) Start partying, trying to get as much utility as possible before the inevitable catastrophe.

2) Ignore the anthropic update and go all in on a desperate attempt at catastrophe prevention.

If we choose option 1, then all other agents with decision processes similar to ours will come to the same conclusion, and even the civilization which was able to survive will not attempt to survive; as a result, all intelligent life in the universe will die off.

If we choose 2, we will most likely fail anyway, but one of the civilizations will survive.

The choice depends on our utilitarian perspective: if we are interested only in our own civilization's well-being, option 1 will give us higher utility, but if we care about the survival of other civilizations, we should choose 2, even if we believe that the probability is against us.

Is this example correct from the point of view of ADT?

Comment author: Stuart_Armstrong 02 September 2017 06:32:04PM *  1 point [-]

This is a good illustration of anthropic reasoning, but it's an illustration of the presumptuous philosopher, not of the DA (though they are symmetric in a sense). Here we have people saying "I expect to fail, but I will do it anyway because I hope others will succeed, and we all make the same decision". Hence it's the total utilitarian (who is the "SIA-ish" agent) who is acting against what seem to be the objective probabilities.

http://lesswrong.com/lw/8bw/anthropic_decision_theory_vi_applying_adt_to/

Comment author: Wei_Dai 01 September 2017 01:10:48PM 5 points [-]

When I was involved in crypto there were forums that both published academics and unpublished hobbyists participated in, and took each other seriously. If this isn't true in a field, it makes me doubt that intellectual progress is still the highest priority in that field. If I were a professional philosopher working in anthropic reasoning, I don't see how I can justify not taking a paper about anthropic reasoning seriously unless it passed peer review by anonymous reviewers whose ideas and interests may be very different from my own. How many of those papers can I possibly come across per year, that I'd justifiably need to outsource my judgment about them to unknown peers?

(I think peer review does have a legitimate purpose in measuring people's research productivity. University admins have to count something to determine who to hire and promote, and the number of papers that pass peer review is perhaps one of the best measures we have. And it can also help outsiders to know who can be trusted as experts in a field, which is what I was thinking of by "prestige". But there's no reason for people who are already experts in a field to rely on it instead of their own judgments.)

Comment author: Stuart_Armstrong 02 September 2017 06:27:13PM 2 points [-]

If I were a professional philosopher working in anthropic reasoning, I don't see how I can justify not taking a paper about anthropic reasoning seriously

But there are no/few philosophers working in "anthropic reasoning" - there are many working in "anthropic probability", to which my paper is an interesting irrelevance. It's essentially asking and answering the wrong question, while claiming that their own question is meaningless (and doing so without quoting some of the probability/decision theory work which might back up the "anthropic probabilities don't exist/matter" claim from first principles).

I expected the paper would get published, but I always knew it was a bit of a challenge, because it didn't fit inside the right silos. And the main problem with academia here is that people tend to stay in their silos.

Comment author: Wei_Dai 01 September 2017 05:10:58PM *  0 points [-]

Lots of places attract cranks and semi-serious people, including the crypto forums I mentioned, LW, and everything-list, a mailing list I created with anthropic reasoning as one of its main topics, and they're not that hard to deal with. Basically it doesn't take a lot of effort to detect cranks and previously addressed ideas, and everyone can ignore the cranks while the more experienced hobbyists educate the less experienced ones.

EDIT: For anyone reading this, the discussion continues here.

Comment author: Stuart_Armstrong 02 September 2017 06:23:45PM 0 points [-]

Basically it doesn't take a lot of effort to detect cranks and previously addressed ideas

This is news to me. Encouraging news.

Comment author: entirelyuseless 02 September 2017 02:16:43PM 0 points [-]

That answer might be fine for copies, but not for situations where copies are involved in no way, like the Doomsday Argument. It is nonsense to say that you are both early and late in the series of human beings.

Comment author: Stuart_Armstrong 02 September 2017 06:22:39PM 0 points [-]

Copies are involved in the DA. To use anthropics, you have to "update on your position in your reference class" (or some similar construction). At that very moment, just before you update, you can be any person at all - if not, you can't update. You can be anyone equally.

(of course, nobody really "updates" that way, because people first realise who they are, then long after that learn about the DA. But if SSA people are allowed to "update" like that, I'm allowed to look at the hypothetical before such an update)

Comment author: Manfred 01 September 2017 03:58:46PM 0 points [-]

Since we are in the real world, it is a possibility that there is a copy of me, e.g. as a Boltzmann brain, or a copy of the simulation I'm in.

Does your refusal to assign probabilities to these situations infect everyday life? Doesn't betting on a coin flip require conditioning on whether I'm a Boltzmann brain, or am in a simulation that replaces coins with potatoes if I flip them? You seem to be giving up on probabilities altogether.

Comment author: Stuart_Armstrong 02 September 2017 11:10:09AM *  1 point [-]

Suppose that there is a 50% chance of there being a Boltzmann brain copy of you - that's fine, that is a respectable probability. What ADT ignores are questions like "am I the Boltzmann brain or the real me on Earth?" The answer to that is "yes. You are both. And you currently control the actions of both. It is not meaningful to ask 'which' one you are."

Give me a preference and a decision, though, and that I can answer. So the answer to "what is the probability of being which one?" is "what do you need to know this for?"
