Unconditionally Convergent Expected Utility

10 DanielLC 11 June 2011 08:00PM

Expected utility can be expressed as the sum ΣP(Xn)U(Xn). Suppose P(Xn) = 2^-n, and U(Xn) = (-2)^n/n. Then expected utility = Σ2^-n·(-2)^n/n = Σ(-1)^n/n = -1+1/2-1/3+1/4-... = -ln(2). Except there's no obvious order in which to add the terms. You could just as well say it's -1+1/2+1/4+1/6+1/8-1/3+1/10+1/12+1/14+1/16-1/5+... = 0. The sum depends on the order you add it in. This is known as conditional convergence.
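To see the rearrangement concretely, here's a quick numerical sketch (illustrative Python, not part of the original post) summing the same terms in the two orders given above:

```python
import math

# Terms of the sum P(Xn)U(Xn) = (-1)^n / n for n = 1, 2, 3, ...
def term(n):
    return (-1) ** n / n

# Natural order: -1 + 1/2 - 1/3 + ... -> -ln(2)
natural = sum(term(n) for n in range(1, 200001))

# The post's rearrangement: one negative odd-denominator term, then four
# positive even-denominator terms, repeating. Same terms, different limit.
rearranged = 0.0
odd, even = 1, 2
for _ in range(40000):
    rearranged += -1.0 / odd
    odd += 2
    for _ in range(4):
        rearranged += 1.0 / even
        even += 2

print(natural)     # ~= -0.693, which is -ln(2)
print(rearranged)  # ~= 0.0
```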

This is clearly something we want to avoid. Suppose my prior has an unconditionally convergent expected utility. This would mean that ΣP(Xn)|U(Xn)| converges. Now suppose I observe evidence Y. Since P(Xn∩Y) ≤ P(Xn), we get ΣP(Xn|Y)|U(Xn)| = Σ|U(Xn)|P(Xn∩Y)/P(Y) ≤ Σ|U(Xn)|P(Xn)/P(Y) = 1/P(Y)·ΣP(Xn)|U(Xn)|. As long as P(Y) is nonzero, the posterior sum must also converge.
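As a sanity check on the bound above, here's a small numerical sketch; the prior, utilities, and likelihoods below are made-up illustrative values, chosen so the prior sum converges absolutely:

```python
import random

random.seed(0)

N = 60  # truncate the outcome space; the 2^-n prior beyond this is negligible
prior = [2.0 ** -(n + 1) for n in range(N)]        # P(Xn) = 2^-(n+1)
util_abs = [float(n + 1) for n in range(N)]        # |U(Xn)| = n+1, so sum P|U| converges
likelihood = [random.random() for _ in range(N)]   # arbitrary made-up P(Y|Xn)

p_y = sum(l * p for l, p in zip(likelihood, prior))           # P(Y)
posterior = [l * p / p_y for l, p in zip(likelihood, prior)]  # P(Xn|Y)

prior_abs_eu = sum(p * u for p, u in zip(prior, util_abs))
post_abs_eu = sum(p * u for p, u in zip(posterior, util_abs))
print(post_abs_eu <= prior_abs_eu / p_y)  # True: the posterior sum is bounded
```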

If my prior expected utility is unconditionally convergent, then given any finite amount of evidence, so is my posterior.

This means I only have to come up with a nice prior, and I'll never have to worry about evidence breaking expected utility.

I suspect that this can be made even more powerful, and given any amount of evidence, finite or otherwise, I will almost surely have an unconditionally convergent posterior. Anyone want to prove it?

Now let's look at Pascal's Mugging. The problem here seems to be that someone could very easily give you an arbitrarily powerful threat. However, in order for expected utility to converge unconditionally, either carrying out the threat must get unlikely faster than the disutility increases, or the threat itself must get unlikely that fast. In other words, either someone threatening 3^^^3 people is so unlikely to carry it out as to make it non-threatening, or the threat itself must be so difficult to make that you don't have to worry about it.

The Difference Between Classical, Evidential, and Timeless Decision Theories

4 DanielLC 26 March 2011 09:27PM

I couldn't find any concise explanation of what the decision theories are. Here's mine:

A Causal Decision Theorist wins, given what's happened so far.

An Evidential Decision Theorist wins, given what they know.

A Timeless Decision Theorist wins a priori.

To explain what I mean, here are two interesting problems. In each of them, two of the decision theories give one choice, and the third gives the other.

In Newcomb's problem, if you separate people into groups based on what happened before the experiment, i.e. whether or not Box A has money, CDT will be at least as successful in each group as any other strategy, and notably more successful than EDT and TDT. If you separate based on what's known, there's only one group, since everybody has the same information; EDT is at least as successful as any other strategy, and notably more successful than CDT. If you don't separate at all, TDT will be at least as successful as any other strategy, and notably more successful than CDT.

In Parfit's hitchhiker, when it comes time to pay the driver, if you split into groups based on what happened before the experiment, i.e. whether or not one has been picked up, CDT will be at least as successful in each group as any other strategy, and notably more successful than TDT. If you split based on what's given, which is again whether or not one has been picked up, EDT will be at least as successful in each group as any other strategy, and notably more successful than TDT. If you don't separate at all, TDT will be at least as successful as any other strategy, and notably more successful than CDT and EDT.
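The a priori comparison in Newcomb's problem can be sketched with expected values. The predictor accuracy and dollar amounts below are illustrative assumptions, not from the post:

```python
# Illustrative assumptions: a 99%-accurate predictor, $1,000,000 in Box A
# (filled iff the predictor expects one-boxing), $1,000 always in Box B.
accuracy = 0.99

one_box = accuracy * 1_000_000                # Box A alone
two_box = (1 - accuracy) * 1_000_000 + 1_000  # Box A (if mispredicted) plus Box B

print(one_box, two_box)  # one-boxing wins a priori by a wide margin
```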

There's one thing I'm not sure about. How does Updateless Decision Theory compare?

Sleeping Beauty

-3 DanielLC 01 February 2011 10:13PM

Someone comes up to you and tells you that he flipped ten coins for ten people, one each. They were fair coins, but only three came up heads. What is the probability that yours was heads?

There are three people of ten who got heads. There is a 30% chance that you're one of those three, right?
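A quick simulation bears out the 30% figure (illustrative; "your" coin is arbitrarily taken to be the first):

```python
import random

random.seed(1)

conditioned = 0   # trials with exactly three heads among the ten coins
yours_heads = 0   # of those, trials where "your" coin (coin 0) is heads
while conditioned < 20000:
    coins = [random.random() < 0.5 for _ in range(10)]
    if sum(coins) != 3:
        continue  # condition on exactly three heads
    conditioned += 1
    if coins[0]:
        yours_heads += 1

print(yours_heads / conditioned)  # ~= 0.3
```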

Now take the Sleeping Beauty paradox. A coin is flipped. If it lands on heads, the subject is woken twice. If it lands on tails, the subject is woken once. For simplicity, assume the experiment happens exactly once in history, and that there are one trillion person-days in total. You wake up groggy in the morning, and take a second to remember who you are.

If the coin landed on tails, that would mean that there is a one in a trillion chance that you will remember that you're the subject. If it was heads, it would be two in a trillion. As such, if you do remember being the subject, the probability that it's heads is P(H|U) = P(U|H)P(H)/[P(U|H)P(H)+P(U|T)P(T)] = (2/trillion)(1/2)/[(2/trillion)(1/2)+(1/trillion)(1/2)] = 2/3, where H is the coin landing on heads, T is the coin landing on tails, and U is you being the subject.

Technically, it would be slightly less than 2/3, since there will be one more person-day if the coin lands on heads.
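A simulation of this version of the problem (heads means two wakings) gives the same 2/3 answer, counting what fraction of all wakings occur after heads:

```python
import random

random.seed(2)

heads_wakings = 0
total_wakings = 0
for _ in range(100000):
    heads = random.random() < 0.5
    wakings = 2 if heads else 1  # this post's version: heads -> woken twice
    total_wakings += wakings
    if heads:
        heads_wakings += wakings

print(heads_wakings / total_wakings)  # ~= 2/3
```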

Varying amounts of subjective experience

-7 DanielLC 16 December 2010 03:02AM

It has been suggested that animals have less subjective experience than people. For example, an animal might count as half a human for the purposes of morality. Here is an argument for why that may be the case.

If you're moving away from Earth at 87% of the speed of light, time dilation would make it look like time on Earth is passing half as fast. From your point of reference, everyone will live twice as long. This obviously won't change the number of life years they live. You can't double the amount of good in the world just by moving at 87% the speed of light. It's possible that there's just a preferred point of reference, and everything is based on people's speed relative to that, but I doubt it.
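For reference, the factor of two comes from the Lorentz factor at 87% of the speed of light; a one-line check:

```python
import math

v = 0.87  # speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - v * v)  # Lorentz time-dilation factor
print(gamma)  # ~= 2.03, so Earth clocks appear to run about half as fast
```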

Now consider if their brains were slowed down a different way. Suppose you uploaded someone, and made the simulation run at half speed. Would they experience a life twice as long? This seems to be just slowing it down in a different way. I doubt it would change the total amount experienced.

If that's true, it means that sentience isn't something you either have or don't have. There can be varying amounts of it. Also, someone whose brain has been slowed down would be less intelligent by most measures, so this is some evidence that subjective experience correlates with intelligence.

Edit: replaced "sentience" with the more accurate "subjective experience".

It's Not About Efficiency

1 DanielLC 06 December 2010 04:12AM

When I explain the importance of donating only to the right charity, I've been told that it's not about efficiency. This is completely correct.

Imagine a paperclip company. They care only about making paperclips. They will do anything within their power to improve efficiency, but they don't care about efficiency. They care about making paperclips. Efficiency is just a measure of how well they're accomplishing their goal. You don't try to be efficient because you want to be efficient. You try to be efficient because you want something.

When I try to help people, the same principle applies. I couldn't care less about a charity's efficiency. I care about how much they help people. Efficiency is just a measure of how well they accomplish that goal.

Evidential Decision Theory and Mass Mind Control

-3 DanielLC 23 October 2010 11:26PM

Required Reading: Evidential Decision Theory

Let me begin with something similar to Newcomb's Paradox. You're not the guy choosing whether or not to take both boxes. You're the guy who predicts. You're not actually prescient. You can only make an educated guess.

You watch the first person play. Let's say they pick one box. You know they're not an ordinary person. They're a lot more philosophical than normal. But that doesn't mean that the knowledge of what they choose is completely useless later on. The later people might be just as weird. Or they might be normal, but they're not completely independent of this outlier. You can use his decision to help predict theirs, if only by a little. What's more, this still works if you're reading through archives and trying to "predict" the decisions people have already made in earlier trials.

The decision of the player choosing the box affects whether or not the predictor will predict that later, or earlier, people will take the box. According to EDT, one should act in the way that results in the most evidence for what one wants. Since the predictor is completely rational, this means that the player choosing the box effectively changes the decisions other people make, or actually changes them, depending on your interpretation of EDT. One can even affect people's decisions in the past, provided that one doesn't know what they were.

In short, the decisions you make affect the decisions other people will make and have made. I'm not sure how much, but there have probably been 50 to 100 billion people. And that's not including the people who haven't been born yet. Even if you only change one in a thousand decisions, that's at least 50 million people.

Like I said: mass mind control. Use this power for good.

Bayesian Doomsday Argument

-5 DanielLC 17 October 2010 10:14PM

First, if you don't already know it, Frequentist Doomsday Argument:

There's some number of total humans. There's a 95% chance that you don't come in the first 5% of them. There have been about 60 to 120 billion people so far, so there's a 95% chance that the total will be less than 1.2 to 2.4 trillion.
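The arithmetic, spelled out (assuming you're equally likely to be anyone who ever lives):

```python
# With 95% probability you're not in the first 5% of all people,
# so the total is less than (people so far) / 0.05 = 20x the people so far.
for people_so_far in (60e9, 120e9):
    upper_bound = people_so_far / 0.05
    print(people_so_far, upper_bound)  # 1.2e12 and 2.4e12
```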

I've modified it to be Bayesian.

First, find the priors:

Do you think it's possible that the total number of sentients that have ever lived or will ever live is less than a googolplex? I'm not asking if you're certain, or even if you think it's likely. Is it more likely than one in infinity? I think it is. This means that the prior must be normalizable.

If we take P(T=n) ∝ 1/n, where T is the total number of people, it can't be normalized, as 1/1 + 1/2 + 1/3 + ... is an infinite sum. If it decreases faster, it can at least be normalized. As such, we can use 1/n as an upper limit.

Of course, that's just the limit of the upper tail, so maybe that's not a very good argument. Here's another one:

We're not so much dealing with lives as life-years. A year is a pretty arbitrary unit, so we'd expect the distribution to look about the same over most of its range if we used, say, days instead. The only distribution invariant under that change of scale is the 1/n distribution.

After that,

T = total number of people

U = the number you are (you are the m-th person)

P(T=n) ∝ 1/n
P(U=m|T=n) ∝ 1/n
P(T=n|U=m) = P(U=m|T=n) * P(T=n) / P(U=m)
= (1/n^2) / P(U=m)
P(T>n|U=m) = ∫_n^∞ P(T=t|U=m) dt
= (1/n) / P(U=m)
And to normalize, note that you can't be the m-th person unless at least m people exist:
P(T≥m|U=m) = 1
= (1/m) / P(U=m)
P(U=m) = 1/m
P(T>n|U=m) = (1/n) * m
P(T>n|U=m) = m/n

So, the probability of there being more than 1 trillion people in total, if there have been 100 billion so far, is 1/10.
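A discrete numerical check of the m/n result, truncating the posterior (which is proportional to 1/t^2 for t ≥ m) at a large cutoff:

```python
# Check P(T > n | U = m) ~= m/n. The posterior is proportional to 1/t^2
# for t >= m (prior 1/t times likelihood 1/t), truncated at a large cutoff.
m = 100          # you are the 100th person
n = 1000         # ask for the chance of more than 1000 people total
cutoff = 10 ** 6

weights = [1.0 / t ** 2 for t in range(m, cutoff)]
tail = sum(w for t, w in zip(range(m, cutoff), weights) if t > n)
ratio = tail / sum(weights)
print(ratio)  # ~= m/n = 0.1
```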

There are still a few issues with this. It assumes P(U=m|T=n) ∝ 1/n. This seems to make sense: if there are a million people, there's a one-in-a-million chance of being the 268,547th. But if there are also a trillion sentient animals, the chance of being the m-th person won't change that much between a million and a billion people. There are a few ways I can amend this.

First: a = number of sentient animals. P(U=m|T=n) ∝ 1/(a+n). This would make the end result P(T>n|U=m) = (m+a)/(n+a).

Second: Just replace every mention of people with sentients.

Third: Take this as a prediction of the number of sentients who aren't humans who have lived so far.

The first would work well if we can find the number of sentient animals without knowing how many humans there will be. Assuming we don't take the time to terraform every planet we come across, this should work okay.

The second would work well if we did terraform every planet we came across.

The third seems a bit weird. It gives a smaller answer than the other two. It gives a smaller answer than what you'd expect for animals alone. It does this because it folds in a Doomsday Argument against animals being sentient. You can work that out separately: just say T is the total number of humans, and U is the total number of animals. Unfortunately, you have to know the total number of humans to work out how many animals are sentient, and vice versa. As such, the combined argument may be more useful. It won't tell you how many of the denizens of the planets we colonise will be animals, but I don't think it's actually possible to tell that.

One more thing, you have more information. You have a lifetime of evidence, some of which can be used in these predictions. The lifetime of humanity isn't obvious. We might make it to the heat death of the universe, or we might just kill each other off in a nuclear or biological war in a few decades. We also might be annihilated by a paperclipper somewhere in between. As such, I don't think the evidence that way is very strong.

The evidence for animals is stronger. Emotions aren't exclusive to the intelligent, and it doesn't seem animals would have to be that intelligent to be sentient. Even so, how sure can you really be? This is much more subjective than the doomsday part, and the evidence against their sentience is staggering. I think so, anyway; how many animals are there at different levels of intelligence?

Also, there are the priors for the total human population so far. I've read estimates varying between 60 and 120 billion. I don't think a factor of two really matters much for this discussion.

So, what can we use for these priors?

Another issue is that this is for all of space and time, not just Earth.

Consider that you're the mth person (or sentient) from the lineage of a given planet. l(m) is the number of planets with a lineage of at least m people. N is the total number of people ever, n is the number on the average planet, and p is the number of planets.

l(m)/N
=l(m)/(n*p)
=(l(m)/p)/n

l(m)/p is the portion of planets that made it this far. This increases with n, so it weakens my argument, but only to a limited extent; I'm not sure what that extent is. My instinct is that l(m)/p is 50% when m=n, but the mean is not the median. I'd expect a left skew, which would make l(m)/p much lower than that. Even so, if you placed it at 0.01%, that would only mean the estimate is a thousand times less likely at that value. The argument still brings the total down by orders of magnitude from what you'd otherwise expect, so this isn't really that significant.

Also, a back-of-the-envelope calculation:

Assume, against all odds, there are a trillion times as many sentient animals as humans, and we happen to be the humans. Also, assume humans only increase their own numbers, and that they're at the top percentile of the populations you'd expect. Also, assume 100 billion humans so far.

n = 1,000,000,000,000 * 100,000,000,000 * 100

n = 10^12 * 10^11 * 10^2

n = 10^25

Here's more what I'd expect:

Humanity eventually puts up a satellite to collect solar energy. Once they do one, they might as well do another, until they have a Dyson swarm. Assume 1% efficiency. Also, assume humans still use their whole bodies instead of being brains in vats. Finally, assume they get fed with 0.1% efficiency. And assume an 80-year lifetime.

n = solar luminosity * 1% / power of a human * 0.1% * lifetime of Sun / lifetime of human

n = 4 * 10^26 Watts * 0.01 / 100 Watts * 0.001 * 5,000,000,000 years / 80 years

n = 2.5 * 10^27
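Redoing that arithmetic in code, using the post's own figures:

```python
# The Dyson-swarm estimate above, step by step.
solar_luminosity = 4e26       # watts
capture_efficiency = 0.01     # 1% of the Sun's output
human_power = 100.0           # watts per person, after digestion losses
feeding_efficiency = 0.001    # 0.1% of captured power becomes food
sun_lifetime_years = 5e9
human_lifetime_years = 80.0

people_at_once = solar_luminosity * capture_efficiency / human_power * feeding_efficiency
total_people = people_at_once * sun_lifetime_years / human_lifetime_years
print(people_at_once)  # 4e19 alive at any moment
print(total_people)    # 2.5e27 in total, matching n above
```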

By the way, the value I used for power of a human is after the inefficiencies of digesting.

Even with assumptions that extreme, we couldn't use this planet to its full potential. Granted, that requires mining pretty much the whole planet, but with a Dyson sphere you could do that in a week, or two years with the efficiency I gave.

It actually works out to about 150 tons of Earth per person alive at any one time. How much do you need to get the elements to make a person?
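For what it's worth, the 150-ton figure matches Earth's mass divided by the number of people alive at once under the estimate above; reading it that way is my assumption:

```python
# Assumption: 150 tons ~= Earth's mass / people alive at once (4e19 from above).
earth_mass_kg = 5.97e24
people_at_once = 4e26 * 0.01 / 100 * 0.001  # 4e19, per the Dyson-swarm estimate
tonnes_per_person = earth_mass_kg / people_at_once / 1000
print(tonnes_per_person)  # ~= 150
```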

Incidentally, I rewrote the article, so don't be surprised if some of the comments don't make sense.
