Why would not giving him $5 make it more likely that people would die, as opposed to less likely? The two would seem to cancel out. It's the same old "what if we are living in a simulation?" argument: it is, at least, possible that me hitting the sequence of letters "QWERTYUIOP" leads to a near-infinity of death and suffering in the "real world", due to AGI overlords with wacky programming. Yet I do not refrain from hitting those letters, because there's no entanglement which drives the probabilities in that direction as opposed to some other random direction; my actions do not alter the expected future state of the universe. You could just as easily wind up saving lives as killing people.
Because he said so, and people tend to be true to their word more often than dictated by chance.
They claim to not be a human. They're still a person, in the sense of a sapient being. As a larger class, you'd expect lower correlation, but it would still be above zero.
Let's say you're a sociopath, that is, the only factors in your utility function are your own personal security and happiness.
Can we use the less controversial term 'economist'?
Very interesting thought experiment!
One place where it might fall down is that our disutility for causing deaths is probably not linear in the number of deaths, just as our utility for money flattens out as the amount gets large. In fact, I could imagine that its value is connected to our ability to intuitively grasp the numbers involved. The disutility might flatten out really quickly so that the disutility of causing the death of 3^^^^3 people, while large, is still small enough that the small probabilities from the induction are not overwhelmed by it.
People say the fact that there are many gods neutralizes Pascal’s wager - but I don't understand that at all. It seems to be a total non sequitur. Sure, it opens the door to other wagers being valid, but that is a different issue.
Let's say I have a simple game against you where, if I choose 1 I win a lotto ticket and if I choose 0 I lose. There are also a number of other game tables around the room with people winning or not winning lotto tickets. If I want to win the lotto, what number should I pick?
Also I don't think there is a fundamental issue with havi...
This is an instance of the general problem of attaching a probability to matrix scenarios. And you can pascal-mug yourself, without anyone showing up to assert or demand anything - just think: what if things are set up so that whether I do, or do not do, something, determines whether those 3^^^^3 people will be created and destroyed? It's just as possible as the situation in which a messenger from Outside shows up and tells you so.
The obvious way to attach probabilities to matrix scenarios is to have a unified notion of possible world capacious enough to e...
Tom and Andrew, it seems very implausible that someone saying "I will kill 3^^^^3 people unless X" is literally zero Bayesian evidence that they will kill 3^^^^3 people unless X. Though I guess it could plausibly be weak enough to take much of the force out of the problem.
Andrew, if we're in a simulation, the world containing the simulation could be able to support 3^^^^3 people. If you knew (magically) that it couldn't, you could substitute something on the order of 10^50, which is vastly less forceful but may still lead to the same problem.
Andrew and Steve, you could replace "kill 3^^^^3 people" with "create 3^^^^3 units of disutility according to your utility function". (I respectfully suggest that we all start using this form of the problem.)
Michael Vassar has suggested that we should consider any number of identical lives to have the same utility as one life. That could be a solution, as it's impossible to create 3^^^^3 distinct humans. But, this also is irrelevant to the create-3^^^^3-disutility-units form.
IIRC, Peter de Blanc told me that any consistent utility function must have an upper bound (meaning that we must discount lives like Steve sug...
create 3^^^^3 units of disutility according to your utility function
For all X:
If your utility function assigns values to outcomes that differ by a factor of X, then you are vulnerable to becoming a fanatic who banks on scenarios that only occur with probability 1/X. As simple as that.
If you think that banking on scenarios that only occur with probability 1/X is silly, then you have implicitly revealed that your utility function only assigns values in the range [1,Y], where Y<X, and where 1 is the lowest utility you assign.
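To spell out the comparison (a minimal sketch, with $u$ a baseline utility and $X$ the factor by which your assigned utilities can differ):

\[
\underbrace{\tfrac{1}{X}\cdot X u}_{\text{long-shot scenario}} \;=\; \underbrace{1 \cdot u}_{\text{sure thing}},
\]

so any scenario whose probability is even slightly above $1/X$, with a payoff near the top of the range, dominates every ordinary consideration; capping the ratio of utilities at $Y < X$ is what removes that dominance.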
Mitchell, it doesn't seem to me like any sort of accurate many-worlds probability calculation would give you a probability anywhere near low enough to cancel out 3^^^^3. Would you disagree? It seems like there's something else going on in our intuitions. (Specifically, our intuitions that a good FAI would need to agree with us on this problem.)
Sorry, the first link was supposed to be to Absence of Evidence is Evidence of Absence.
Mitchell, I don't see how you can Pascal-mug yourself. Tom is right that the possibility that typing QWERTYUIOP will destroy the universe can be safely ignored; there is no evidence either way, so the probability equals the prior, and the Solomonoff prior that typing QWERTYUIOP will save the universe is, as far as we know, exactly the same. But the mugger's threat is a shred of Bayesian evidence that you have to take into account, and when you do, it massively tips the expected utility balance. Your suggested solution does seem right but utterly intractable.
I don't think the QWERTYUIOP thing is literally zero Bayesian evidence either. Suppose the thought of that particular possibility was manually inserted into your mind by the simulation operator.
Tom and Andrew, it seems very implausible that someone saying "I will kill 3^^^^3 people unless X" is literally zero Bayesian evidence that they will kill 3^^^^3 people unless X. Though I guess it could plausibly be weak enough to take much of the force out of the problem.
Nothing could possibly be that weak.
Tom is right that the possibility that typing QWERTYUIOP will destroy the universe can be safely ignored; there is no evidence either way, so the probability equals the prior, and the Solomonoff prior that typing QWERTYUIOP will save the universe is, as far as we know, exactly the same.
Exactly the same? These are different scenarios. What happens if an AI actually calculates the prior probabilities, using a Solomonoff technique, without any a priori desire that things should exactly cancel out?
OK, let's try this one more time:
To put it another way, conditional on this nonexistent person having these nonexistent powers, why should you be so sure that he's telling the truth? Perhaps you'll only get what you want by not giving him the $5. To put it mathematically, you're computing pX, where p is the probability and ...
I have to go with Tom McCabe on this one; this is just a restatement of the core problem of epistemology. It's not unique to AI, either.
3. Even if you don't accept 1 and 2 above, there's no reason to expect that the person is telling the truth. He might kill the people even if you give him the $5, or conversely he might not kill them even if you don't give him the $5.
But if a Bayesian AI actually calculates these probabilities by assessing their Kolmogorov complexity - or any other technique you like, for that matter - without desiring that they come out exactly equal, can you rely on them coming out exactly equal? If not, an expected utility differential of 2 to the negative googolplex times 3^^^^3 still equals 3^^^^3, so whatever tiny probability differences exist will dominate all calculations based on what we think of as the "real world" (the mainline of probability with no wizards).
if you have the imagination to imagine X to be super-huge, you should be able to have the imagination to imagine p to be super-small
But we can't just set the probability to anything we like. We have to calculate it, and Kolmogorov complexity, the standard accepted method, will not be anywhere near that super-small.
Addendum: In computational terms, you can't avoid using a 'hack'. Maybe not the hack you described, but something, somewhere has to be hard-coded. How else would you avoid solipsism?
This case seems to suggest the existence of new interesting rationality constraints, which would go into choosing rational probabilities and utilities. It might be worth working out what constraints one would have to impose to make an agent immune to such a mugging.
Eliezer,
OK, one more try. First, you're picking 3^^^^3 out of the air, so I don't see why you can't pick 1/3^^^^3 out of the air also. You're saying that your priors have to come from some rigorous procedure but your utility comes from simply transcribing what some dude says to you. Second, even if for some reason you really want to work with the utility of 3^^^^3, there's no good reason for you not to consider the possibility that it's really -3^^^^3, and so you should be doing the opposite. The issue is not that two huge numbers will exactly cancel o...
pdf23ds, under certain straightforward physical assumptions, 3^^^^3 people wouldn't even fit in anyone's future light-cone, in which case the probability is literally zero. So the assumption that our apparent physics is the physics of the real world too, really could serve to decide this question. The only problem is that that assumption itself is not very reasonable.
Lacking for the moment a rational way to delimit the range of possible worlds, one can utilize what I'll call a Chalmers prior, which simply specifies directly how much time you will spend thi...
Well... I think we act differently from the AI because we not only know Pascal's Mugging, we know that it is known. I don't see why an AI could not know the knowledge of it, though, but you do not seem to consider that, which might simply show that it is not relevant, as you, er, seem to have given this some thought...
Konrad: In computational terms, you can't avoid using a 'hack'. Maybe not the hack you described, but something, somewhere has to be hard-coded.
Well, yes. The alternative to code is not solipsism, but a rock, and even a rock can be viewed as being hard-coded as a rock. But we would prefer that the code be elegant and make sense, rather than using a local patch to fix specific problems as they come to mind, because the latter approach is guaranteed to fail if the AI becomes more powerful than you and refuses to be patched.
Andrew: You're saying that your...
To solve this problem, the AI would need to calculate the probability of the claim being true, for which it would need to calculate the probability of 3^^^^3 people even existing. Given what it knows about the origins and rate of reproduction of humans, wouldn't the probability of 3^^^^3 people even existing be approximately 1/3^^^^3? It's as you said, multiply or divide it by the number of characters in the Bible, it's still nearly the same damned incomprehensibly large number. Unless you are willing to argue that there are some bizarre properties of t...
Here's one for you: Let's assume for argument's sake that "humans" could include human consciousnesses, not just breathing humans. Then, if a universe with 3^^^^3 "humans" actually existed, what would be the odds that they were NOT all copies of the same parasitic consciousness?
Pascal's wager type arguments fail due to their symmetry (which is preserved in finite cases).
Eliezer: Sorry to say (because it makes me sound callous), but if someone can and is willing to create and then destroy 3^^^3 people for less than $5, then there is no value in life, and definitely no moral structure to the universe. The creation and destruction of 3^^^3 people (or more) is probably happening all the time. Therefore the AI is safe declining the wager on purely selfish grounds.
Eliezer, I'd like to take a stab at the internal criterion question. One difference between me and the program you describe is that I have a hoped-for future. Say "I'd like to play golf on Wednesday." Now, I could calculate the odds of Wednesday not actually arriving (nuclear war, asteroid impact...), or me not being alive to see it (sudden heart attack...), and I would get an answer greater than zero. Why don't I operate on those non-zero probabilities? (The other difference between me and the program you describe) I think it has to do with ...
IIRC, Peter de Blanc told me that any consistent utility function must have an upper bound (meaning that we must discount lives like Steve suggests). The problem disappears if your upper bound is low enough. Hopefully any realistic utility function has such a low upper bound, but it'd still be a good idea to solve the general problem.
Nick, please see my blog (just click on my name). I have a post about this.
"Let the differential be negative. Same problem. If the differential is not zero, the AI will exhibit unreasonable behavior. If the AI literally thinks in Solomonoff induction (as I have described), it won't want the differential to be zero, it will just compute it."
How can a computation arrive at a nonzero differential, starting with zero data? If I ask a rational AI to calculate the probability of me typing "QWERTYUIOP" saving 3^^^^3 human lives, it knows literally nothing about the causal interactions between me and those lives, because they are totally unobservable.
GeniusNZ, you have to consider not only all proposed gods, but all possible gods and reward/punishment structures. Since the number and range of conceivable divine rewards and punishments is infinite for each action, the incentives are all equally balanced, and thus give you no reason to prefer one action over another.
Ultimately, I think Tom McCabe is right -- the truth of a proposition depends in part on its meaningfulness.
What is the probability that the sun will rise tomorrow? Nearly 1, if you're thinking of dawns. Nearly 0, if you're thinking of Cop...
I generally share Tom McCabe's conclusion, that is, that they exactly cancel out because a symmetry has not been broken. The reversed hypothesis has the same complexity as the original hypothesis, and the same evidence supporting it. No differential entanglement. However, I think that this problem is worth attention because a) so many people who normally agree disagree here, and b) I suspect that the problem is related to normal utilitarianism with no discounting and an unbounded future. Of course, we already have some solutions in that case and we sho...
Benquo, replace "kill 3^^^^3 people" with "create 3^^^^3 disutility units" and the problem reappears.
Michael, do you really think the mugger's statement is zero evidence?
It seems to me that the cancellation is an artifact of the particular example, and that it would be easy to come up with an example in which the cancellation does not occur. For example, maybe you have previous experience with the mugger. He has mugged you before about minor things and sometimes you have paid him and sometimes not. In all cases he has been true to his word. This would seem to tip the probabilities at least slightly in favor of him being truthful about his current much larger threat.
You could always just give up being a consequentialist and deontologically refuse to give in to the demands of anyone taking part in a Pascal mugging, because consistently doing so would lead to the breakdown of society.
Re: "However clever your algorithm, at that level, something's bound to confuse it. Gimme FAI with checks and balances every time."
I agree that a mature Friendly Artificial Intelligence should defer to something like humanity's volition.
However, before it can figure out what humanity's volition is and how to accomplish it, an FAI first needs to:
If ...
Rolf: I agree with everything you just said, especially the bit about patches and hacks. I just wouldn't be happy having a FAI's sanity dependent on any single part of its design, no matter how perfect and elegant looking, or provably safe on paper, or demonstrably safe in our experiments.
However clever your algorithm, at that level, something's bound to confuse it.
Odd, I've been reading moral paradoxes for many years and my brain never crashed once, nor have I turned evil. I've been confused but never catastrophically so (though I have to admit my younger self came close). My algorithm must be "beyond clever".
That's a remarkable level of resilience for a brain design which is, speaking professionally, a damn ugly mess. If I can't aspire to do at least that well, I may as well hang up my shingle and move in with the ducks.
Give me five dollars, or I will kill as many puppies as it takes to make you. And they'll go to hell. And there in that hell will be fire, brimstone, and rap with Engrish lyrics.
I think the problem is not Solomonoff inducton or Kolmogorov complexity or Bayesian rationality, whatever the difference is, but you. You don't want an AI to think like this because you don't want it to kill you. Meanwhile, to a true altruist, it would make perfect sense.
Not really confident. It's obvious that no society of selfish beings whose members think like this could function. But they'd still, absurdly, be happier on average.
You don't need a bounded utility function to avoid this problem. It merely has to have the property that the utility of a given configuration of the world doesn't grow faster than the length of a minimal description of that configuration. (Where "minimal" is relative to whatever sort of bounded rationality you're using.)
It actually seems quite plausible to me that our intuitive utility-assignments satisfy something like this constraint (e.g., killing 3^^^^^3 puppies doesn't feel much worse than killing 3^^^^3 puppies), though that might not matter muc...
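One hedged way to see why a constraint like this matters (a sketch, assuming expected utility is taken under a Solomonoff-style prior that gives a world $w$ weight roughly $2^{-K(w)}$, where $K(w)$ is its minimal description length): if $|U(w)| \le c\,K(w)$, then no single hypothesis can dominate the calculation, since its contribution is at most

\[
2^{-K(w)}\,|U(w)| \;\le\; c\,K(w)\,2^{-K(w)},
\]

which shrinks rapidly as $K(w)$ grows. The mugger's scenario is dangerous precisely because it has modest description length but claims a utility like 3^^^^3, vastly larger than that description length.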
Nick Tarleton, you say:
"Benquo, replace "kill 3^^^^3 people" with "create 3^^^^3 disutility units" and the problem reappears."
But what is a disutility unit? How can there be that many? How do you know that what he supposes to be a disutility unit isn't from your perspective a utility unit?
Any similarly outlandish claim is a challenge not merely to your beliefs, but to your mental vocabulary. It can't be evaluated for probability until it's evaluated for meaning.
Utility functions have to be bounded basically because genuine martingales screw up decision theory -- see the St. Petersburg Paradox for an example.
Economists, statisticians, and game theorists are typically happy to do so, because utility functions don't really exist -- they aren't uniquely determined from someone's preferences. For example, you can multiply any utility function by a constant, and get another utility function that produces exactly the same observable behavior.
Tiiba, keep in mind that to an altruist with a bounded utility function, or with any other of Peter's caveats, it may not "make perfect sense" to hand over the five dollars. So the problem is solvable in a number of ways; the problem is to come up with a solution that (1) isn't a hack and (2) doesn't create more problems than it solves.
Anyway, like most people, I'm not a complete utilitarian altruist, even at a philosophical level. Example: if an AI complained that you take up too much space and are mopey, and offered to kill you and replace you...
That's a remarkable level of resilience for a brain design which is, speaking professionally, a damn ugly mess.
...with vital functions inherited from reptiles. But it's been tested to death through history, serious failures thrown out at each step, and we've lots of practical experience and knowledge about how and why it fails. It wasn't built and run first go with zero unrecoverable errors.
I'm not advocating using evolutionary algorithms or to model from the human brain like Ray Kurzweil. I just mean I'd allow for unexpected breakdowns in any part of the ...
I think that if you consider that the chance of a threat to cause a given amount of disutility being valid is a function of the amount of disutility, then the problem mostly goes away. That is, in my experience any threat to cause me X units of disutility where X is beyond some threshold is less than 1/10 as credible as a threat to cause me 1 unit of disutility. If someone threatened to kill another person unless I gave them $5000 I would be worried. If they threatened to kill 10 people I would be very slightly less worried. If they threatened to kill ...
"Odd, I've been reading moral paradoxes for many years and my brain never crashed once, nor have I turned evil."
Even if it hasn't happened to you, it's quite common: think about how many people under Stalin had their brains programmed to murder and torture. Looking back and seeing how your brain could have crashed is scary, because it isn't particularly improbable; it almost happened to me, more than once.
g: killing 3^^^^^3 puppies doesn't feel much worse than killing 3^^^^3 puppies
...
..........................
I hereby award G the All-Time Grand Bull Moose Prize for Non-Extensional Reasoning and Scope Insensitivity.
Clough: On the contrary, I think it is not only that weak but actually far weaker. If you are willing to consider the existence of things like 3^^^3 units of disutility without considering the existence of chances like 1/4^^^4, then I believe that is the problem that is causing you so much trouble.
I'm certainly willing to consider the existence o...
If you believe in the many worlds interpretation of quantum mechanics, you have to discount the utility of each of your future selves by his measure, instead of treating them all equally. The obvious generalization of this idea is for the altruist to discount the utility he assigns to other people by their measures, instead of treating them all equally.
But instead of using the QM measure (which doesn't make sense "outside the Matrix"), let the measure of each person be inversely related to his algorithmic complexity (his personal algorithmic comp...
Wei, would it be correct to say that, under your interpretation, if our universe initially contains 100 super happy people, that creating one more person who is "very happy" but not "super happy" is a net negative, because the "measure" of all the 100 super happy people gets slightly discounted by this new person?
It's hard to see why I would consider this the right thing to do - where does this mysterious "measure" come from?
Eliezer, do you think it would be suitable for a blog post here?
Mm... sure. "Bias against uncomputability."
"Would any commenters care to mug Tiiba? I can't quite bring myself to do it, but it needs doing."
If you don't donate $5 to SIAI, some random guy in China will die of a heart attack because we couldn't build FAI fast enough. Please donate today.
Eli,
I agree that G's reasoning is an example of scope insensitivity. I suspect you meant this as a criticism. It seems undeniable that scope insensitivity leads to some irrational attitudes (e.g. when a person who would be horrified at killing one human shrugs at wiping out humanity). However, it doesn't seem obvious that scope insensitivity is pure fallacy. Mike Vassar's suggestion that "we should consider any number of identical lives to have the same utility as one life" seems plausible. An extreme example is, what if the universe were periodi...
Vann McGee has proven that if you have an agent with an unbounded utility function and who thinks there are infinitely many possible states of the world (ie, assigns them probability greater than 0), then you can construct a Dutch book against that agent. Next, observe that anyone who wants to use Solomonoff induction as a guide has committed to infinitely many possible states of the world. So if you also want to admit unbounded utility functions, you have to accept rational agents who will buy a Dutch book.
And if you do that, then the subjectivist justifi...
G,
I was essentially agreeing with you that killing 3^^^^^3 vs 3^^^^3 puppies may not be ethically distinct. I would call this scope insensitivity. My suggestion was that scope insensitivity is not necessarily always unjustified.
Eliezer, creating another person in addition to 100 super happy people does not reduce the measures of those 100 super happy people. For example, suppose those 100 super happy people are living in a classical universe computed by some TM. The minimal information needed to locate each person in this universe is just his time/space coordinate. Creating another person does not cause an increase in that information for the existing people.
Is the value of my existence steadily shrinking as the universe expands and it requires more information to locate me in space?
If I make a large uniquely structured arrow pointing at myself from orbit so that a very simple Turing machine can scan the universe and locate me, does the value of my existence go up?
I am skeptical that this solution makes moral sense, however convenient it might be as a patch to this particular problem.
Stephen, you can't have been agreeing with me about that, since I didn't say it, even though for some reason I don't understand (perhaps I was very unclear, but I don't see how) Eliezer chose to interpret me as doing so and indeed as going further to say that it isn't ethically distinct.
Random question:
The number of possible Turing machines is countable. Given a function that maps the natural numbers onto the set of possible Turing machines, one can construct a Turing machine that acts like this:
If machine #1 has not halted, simulate the execution of one instruction of machine #1
If machine #2 has not halted, simulate the execution of one instruction of machine #2
If machine #1 has not halted, simulate the execution of one instruction of machine #1
If machine #3 has not halted, simulate the execution of one instruction of machine #3
If mach...
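A toy sketch of the interleaving schedule being described (machine 1, 2, 1, 3, 1, 2, 1, 4, ..., so machine #k gets one step every 2^k iterations); Python generators stand in for actual Turing machines, and the names (`toy_machine`, `ruler_index`, `dovetail`) and halting behaviour are invented purely for illustration:

```python
from itertools import count

def toy_machine(k):
    """Stand-in for 'machine #k': yields once per simulated instruction, halts after k*k steps."""
    for step in range(k * k):
        yield step

def ruler_index(t):
    """Machine to step at time t = 1, 2, 3, ...: gives the sequence 1, 2, 1, 3, 1, 2, 1, 4, ..."""
    k = 1
    while t % 2 == 0:
        t //= 2
        k += 1
    return k

def dovetail(max_time=200):
    machines, halted = {}, set()
    for t in count(1):
        if t > max_time:
            break
        k = ruler_index(t)
        if k in halted:
            continue
        if k not in machines:
            machines[k] = toy_machine(k)
        try:
            next(machines[k])          # simulate one instruction of machine #k
        except StopIteration:
            halted.add(k)
            print(f"machine #{k} halted at time {t}")

dovetail()
```

The "every 2^k iterations" schedule guarantees every machine gets infinitely many steps eventually, which is all a dovetailer needs.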
As others have basically said:
Isn't the point essentially that we believe the man's statement is uncorrelated with any moral facts? I mean, if we did believe it was correlated, then it's pretty clear we could be morally forced into doing something.
Is it reasonable to believe the statement is uncorrelated with any facts about the existence of many lives? It seems so, since we have no substantial experience with "Matrices", people from outside the simulation visiting us, 3^^^^^^3, the simulation of moral persons, etc...
Consider, the statement 'there is a woman being raped aro...
Eliezer, you can interpret rocks as minds if you make the interpretation complex enough. Why do you ignore these rock-minds if not because you discount them for algorithmic complexity?
First, questions like "if the agent expects that I wouldn't be able to verify the extreme disutility, would its utility function be such as to actually go through spending the resources to cause the unverifiable disutility?"
That an entity with such a utility function would manage to stick around long enough in the first place may itself drop the probabilities by a whole lot.
Perhaps best to restrict ourselves to the case of the disutility being verifiable, but only after the fact. (Has this agent ever pulled this sort of thing before? etc.....
Eliezer> Is the value of my existence steadily shrinking as the universe expands and it requires more information to locate me in space?
Yes, but the value of everyone else's existence is shrinking by the same factor, so it doesn't disturb the preference ordering among possible courses of actions, as far as I can see.
Eliezer> If I make a large uniquely structured arrow pointing at myself from orbit so that a very simple Turing machine can scan the universe and locate me, does the value of my existence go up?
This is a more serious problem for my propos...
I'll respond to a couple of other points I skipped over earlier.
Eliezer> It's hard to see why I would consider this the right thing to do - where does this mysterious "measure" come from?
Suppose you plan to measure the polarization of a photon at some future time and thereby split the universe into two branches of unequal weight. You do not treat people in these two branches as equals, but instead value the people in the higher-weight branch more, right? Can you answer why you consider that to be the right thing to do? That's not a rhetorical ...
Maybe the origin of the paradox is that we are extending the principle of maximizing expected return beyond its domain of applicability. Unlike Bayes formula, which is an unassailable theorem, the principle of maximizing expected return is perhaps just a model of rational desire. As such it could be wrong. When dealing with reasonably high probabilities, the model seems intuitively right. With small probabilities it seems to be just an abstraction, and there is not much intuition to compare it to. When considering a game with positive expected return that ...
Wei: You do not treat people in these two branches as equals, but instead value the people in the higher-weight branch more, right? Can you answer why you consider that to be the right thing to do?
Robin Hanson's guess about mangled worlds seems very elegant to me, since it means that I can run a (large) computer with conventional quantum mechanics programmed into it, no magic in its transistors, and the resulting simulation will contain sentient beings who experience the same probabilities we do.
Even so, I'd have to confess myself confused about why I find myself in a simple universe rather than a noisy one.
Not all infinities are equal; there is a hierarchy. Look at the real numbers versus the integers.
kthxbye
Stephen, no problem. Incidentally, I share your doubt about the optimality of optimizing expected utility (though I wonder whether there might be a theorem that says anything coherent can be squeezed into that form).
CC, indeed there are many infinities (not merely infinitely many, not merely more than we can imagine, but more than we can describe), but so what? Any sort of infinite utility, coupled with a nonzero finite probability, leads to the sort of difficulty being contemplated here. Higher infinities neither help with this nor make it worse, so far a...
I have a paper which explores the problem in a somewhat more general way (but see especially section 6.3).
Infinite Ethics: http://www.nickbostrom.com/ethics/infinite.pdf
People have been talking about assuming that states with many people hurt have a low (prior) probability. It might be more promising to assume that states with many people hurt have a low correlation with what any random person claims to be able to effect.
Eliezer, I think Robin's guess about mangled worlds is interesting, but irrelevant to this problem. I'd guess that for you, P(mangled worlds is correct) is much smaller than P(it's right that I care about people in proportion to the weight of the branches they are in). So Robin's idea can't explain why you think that is the right thing to do.
Nick, your paper doesn't seem to mention the possibility of discounting people by their algorithmic complexity. Is that an option you considered?
Pascal's wager type arguments fail due to their symmetry (which is preserved in finite cases).
Even if our priors are symmetric for equally complex religious hypotheses, our posteriors almost certainly won't be. There's too much evidence in the world, and too many strong claims about these matters, for me to imagine that posteriors would come out even. Besides, even if two religions are equally probable, there may certainly be non-epistemic reasons to prefer one over the other.
However, if after chugging through the math, it didn't balance out and still t...
Even if there is nobody currently making a bignum-level threat, maybe the utility-maximizing thing to do is to devote substantial resources to search for low-probability, high-impact events and stop or encourage them depending on the utility effect. After all, you can't say the probability of every possibility as bad as killing 3^^^^3 people is zero.
Nick Tarleton,
Yes, it is probably correct that one should devote substantial resources to low probability events, but what are the odds that the universe is not only a simulation, but that the containing world is much bigger; and, if so, does the universe just not count, because it's so small? The bounded utility function probably reaches the opposite conclusion that only this universe counts, and maybe we should keep our ambitions limited, out of fear of attracting attention.
Robin: Great point about states with many people having low correlations with what one random person can effect. This is fairly trivially provable.
Utilitarian: Equal priors due to complexity, equal posteriors due to lack of entanglement between claims and facts.
Wei Dai, Eliezer, Stephen, g: This is a great thread, but it's getting very long, so it seems likely to be lost to posterity in practice. Why don't the three of you read the paper Neel Krishnaswami referenced, have a chat, and post it on the blog, possibly edited, as a main post?
"The p...
It might be more promising to assume that states with many people hurt have a low correlation with what any random person claims to be able to effect.
Robin: Great point about states with many people having low correlations with what one random person can effect. This is fairly trivially provable.
Aha!
For some reason, that didn't click in my mind when Robin said it, but it clicked when Vassar said it. Maybe it was because Robin specified "many people hurt" rather than "many people", or because Vassar's part about being "provable" caused me to actually look for a reason. When I read Robin's statement, it came through as just "Arbitrarily penalize probabilities for a lot of people getting hurt."
But, yes, if you've got 3^^^^3 people running around they can't all have sole control over each other's existence. So in a scenario where lots and lots of people exist, one has to penalize by a proportional factor the probability that any one person's binary decision can solely control the whole bunch.
Even if the Matrix-claimant says that the 3^^^^3 minds created will be unlike you, with information that tells them they're powerless, if you're in a generalized scenario where anyone has and uses that kind of power, the vast majority of mind-instantiations are in leaves rather than roots.
This seems to me to go right to the root of the problem, not a full-fledged formal answer but it feels right as a starting point. Any objections?
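One way to write down the penalty Robin and Eliezer are converging on (a rough formalization, not anyone's exact proposal): if a hypothesis says $N$ people exist and their fates hinge on a single decision-maker, then, conditional on that hypothesis, a randomly situated mind is that decision-maker with probability on the order of $1/N$. So the expected number of lives your own choice controls under that hypothesis is bounded by roughly

\[
\Pr(\text{hypothesis}) \cdot N \cdot \frac{1}{N} \;=\; \Pr(\text{hypothesis}),
\]

and the 3^^^^3 in the payoff is cancelled by a matching anthropic factor of 1/3^^^^3 in the probability that you, in particular, are at the root rather than in a leaf.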
Robin's anthropic argument seems pretty compelling in this example, now that I understand it. It seems a little less clear if the Matrix-claimant tried to mug you with a threat not involving many minds. For example, maybe he could claim that there exists some giant mind, the killing of which would be as ethically significant as the killing of 3^^^^3 individual human minds? Maybe in that case you would anthropically expect with overwhelmingly high probability to be a figment inside the giant mind.
I think that Robin's point solves this problem, but doesn't solve the more general problem of an AGI's reaction to low probability high utility possibilities and the attendant problems of non-convergence.
The guy with the button could threaten to make an extra-planar factory farm containing 3^^^^^3 pigs instead of killing 3^^^^3 humans. If utilities are additive, that would be worse.
The guy with the button could threaten to make an extra-planar factory farm containing 3^^^^^3 pigs instead of killing 3^^^^3 humans. If utilities are additive, that would be worse.
Congratulations, you made my brain asplode.
3^^^^^^3 copies of that brain, fates all dependent on the original pondering this thread.
All fates equal, I think their incentive to solve the mystery equals that for one alone.
Eliezer, what if the mugger (Matrix-claimant) also says that he is the only person who has that kind of power, and he knows there is just one copy of you in the whole universe? Is the probability of that being true less than 1/3^^^^3?
Don't dollars have an infinite expected value (in human lives or utility) anyway, especially if you take into account weird low-probability scenarios? Maybe the next mugger will make even bigger threats.
Even if the Matrix-claimant says that the 3^^^^3 minds created will be unlike you, with information that tells them they're powerless, if you're in a generalized scenario where anyone has and uses that kind of power, the vast majority of mind-instantiations are in leaves rather than roots.
You would have to abandon Solomonoff Induction (or modify it to account for these anthropic concerns) to make this work. Solomonoff Induction doesn't let you consider just "generalized scenarios"; you have to calculate each one in turn, and eventually one of the...
Michael, your pig example threw me into a great fit of belly-laughing. I guess that's what my mind looks like when it explodes. And I recall that was Marvin Minsky's prediction in The Society of Mind.
You would have to abandon Solomonoff Induction (or modify it to account for these anthropic concerns) to make this work.
To be more specific, you would have to alter it in such a way that it accepted Brandon Carter's Doomsday Argument.
"Congratulations, you made my brain asplode."
Read http://www.spaceandgames.com/?p=22 if you haven't already. Your utility function should not be assigning things arbitrarily large additive utilities, or else you get precisely this problem (if pigs qualify as minds, use rocks), and your function will sum to infinity. If you "kill" by destroying the exact same information content over and over, it doesn't seem to be as bad, or even bad at all. If I made a million identical copies of you, froze them into complete stasis, and then shot 999,...
Wei, no I don't think I considered the possibility of discounting people by their algorithmic complexity.
I can see that in the context of Everett it seems plausible to weigh each observer with a measure proportional to the amplitude squared of the branch of the wave function on which he is living. Moreover, it seems right to use this measure both to calculate the anthropic probability of me finding myself as that observer and the moral importance of that observer's well-being.
Assigning anthropic probabilities over infinite domains is problematic. I don't...
It seems like this may be another facet of the problem with our models of expected utility in dealing with very large numbers. For instance, do you accept the Repugnant conclusion?
I'm at a loss for how to model expected utility in a way that doesn't generate the repugnant conclusion, but my suspicion is that if someone finds it, this problem may go away as well.
Or not. It seems that our various heuristics and biases against having correct intuitions about very large and small numbers are directly tied up in producing a limiting framework that acts as a...
Regarding the comments about exploding brains, it's a wonder to me that we are able to think about these issues and not lose our sanity. How is it that a brain evolved for hunting/gathering/socializing is able to consider these problems at all? Not only that, but we seem to have some useful intuitions about these problems. Where on Earth did they come from?
Nick> Does your proposal require that one accepts the SIA?
Yes, but using a complexity-based measure as the anthropic probability measure implies that the SIA's effect is limited. For example, consider...
Before I get going, please let me make clear that I do not understand the math here (even Eliezer's intuitive Bayesian paper defeated me on the first pass, and I haven't yet had the courage to take a second pass), so if I'm Missing The Point(tm), please tell me.

It seems to me that what's missing is talking about the probability of a given level of resourcefulness of the mugger. Let me 'splain.

If I ask the mugger for more detail, there are a wide variety of different variables that determine how resourceful the mugger claims to be. The mugger could, upon fu...
My apologies for the horrific formatting; I wrote that huge diatribe in w3m before discovering the captcha needed javascript, and then pasted it here. If an admin can fix it, please do so.
-Robin
One idea is to tell the AI not to expend a portion of its resources greater than the chance of the mugger's statement being true.
Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is fifty(?) orders of magnitude cheaper than real quantum physics? Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute?
I think a large universe full of randomly scattered matter is much more probable than a small universe that consists of a working human mind and little else.
"But, small as this probability is, it isn't anywhere near as small as 3^^^^3 is large"
Eliezer, I contend your limit!
I think this scenario is ingenious. Here are a few ideas, but I'm really not sure how far one can pursue them / how 'much work' they can do:
(1) Perhaps the agent needs some way of 'absolving itself of responsibility' for the evil/arbitrary/unreasonable actions of another being. The action to be performed is the one that yields highest expected utility but only along causal pathways that don't go through an adversary that has been labelled as 'unreasonable'.
(Except this approach doesn't defuse the variation that goes "You can never wipe your nose becau...
Our best understanding of the nature of the "simulation" we call reality includes this concept we call "cause and effect". So when something happens, it has non-zero (though nigh-infinitely small) effects on everything else in existence (progressively smaller effects with each degree of separation).
The effect that affecting 3^^^3 things (regardless of type or classification) has on other things (even if the individual effects of affecting one thing would be extremely small) would be non-trivial (enormously large even after a positive...
Assume that the basic reasoning for this is true, but nobody actually does the mugging. Since the probability doesn't actually make a significant difference to the expected utility, I'll just simplify and say they're equal.
The total expected marginal utility, assuming you're equally likely to save or kill the people, would be (3^^^3 - 3^^^3) + (3^^^^3 - 3^^^^3) + (3^^^^^3 - 3^^^^^3) + ... = 0. At least, it would be if you count it by alternating with saving and killing. You could also count it as 3^^^3 + 3^^^^3 - 3^^^3 + 3^^^^^3 - 3^^^^3 + ... = infinity. Or...
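The ambiguity here is just the usual behaviour of a series whose positive and negative parts both diverge (a gloss on the arithmetic above, with $a_n$ running through 3^^^3, 3^^^^3, 3^^^^^3, ...): the same terms can be grouped as

\[
(a_1 - a_1) + (a_2 - a_2) + (a_3 - a_3) + \cdots = 0
\quad\text{or as}\quad
a_1 + (a_2 - a_1) + (a_3 - a_2) + \cdots \to \infty,
\]

so the "total expected marginal utility" has no well-defined value at all, rather than being zero or infinite.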
The probability of some action costing delta-utility x and resulting in delta-utility y, where y >> x, is low. The Anti Gratis Dining modifier is x/y. These things I conjecture, anyways.
The apple-salespeep who says, "Give me $0.50, and I will give you an apple" is quite believable, unlike the apple-salespeep who claims, "Give me $3.50, and I will give apples to all who walk the Earth". We understand how buying an apple gets us an apple, but we know far less about implementing global apple distribution.
Suppose I have a Holy Hand Gr...
[Late edit: I have since retracted this solution as wrong, see comments below; left here for completeness. The ACTUAL solution that really works I've written in a different comment :) ]
I do believe I've solved this. Don't know if anyone is still reading or not after all this time, but here goes.
Eliezer speaks of the symmetry of Pascal's wager; I'm going to use something very similar here to solve the issue. The number of things that could happen next - say, in the next nanosecond - is infinite, or at the very least incalculable. A lot of mundane things cou...
I think you've just perfectly illustrated how some Scope Insensitivity can be good thing.
Because a mind with perfect scope sensitivity will be diverted into chasing impossibly tiny probabilities for impossibly large rewards. If a good rationalist must win, then a good rationalist should commit to avoiding supposed rationality that makes him lose like that.
So, here's a solution. If a probability is too tiny to be reasonably likely to occur in your lifespan, treat its bait as actually impossible. If you don't, you'll inevitably crash into effective ineffectiveness.
This comment thread has grown too large :). I have a thought that seems to me to be the right way to resolve this problem. On the one hand, the thought is obvious, so it probably has already been played out in this comment thread, where it presumably failed to convince everyone. On the other hand, the thread is too large for me to digest in the time that I can reasonably give it. So I'm hoping that someone more familiar with the conversation here will tell me where I can find the sub-thread that addresses my point. (I tried some obvious word-searches,...
It does seem that the probability of someone being able to bring about the deaths of N people should scale as 1/N, or at least 1/f(N) for some monotonically increasing function f. 3^^^^3 may be a more simply specified number than 1697, but it seems "intuitively obvious" (as much as that means anything) that it's easier to kill 1697 people than 3^^^^3. Under this reasoning, the likely deaths caused by not giving the mugger $5 are something like N/f(N), which depends on what f is, but it seems likely that it converges to zero as N increases.
It is a...
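Made explicit (a sketch of the scaling the comment itself proposes, with $f$ the assumed rate at which credibility falls off):

\[
\mathbb{E}[\text{deaths from refusing}] \;\approx\; N \cdot \Pr(\text{mugger can kill } N) \;\approx\; \frac{N}{f(N)} \;\longrightarrow\; 0
\]

as $N \to \infty$, provided $f$ grows faster than linearly; under that assumption, inflating the threat from 1697 to 3^^^^3 makes the expected loss smaller, not larger.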
Incidentally: How would it affect your intuition if you instead could participate in the Intergalactic Utilium Lottery, where probabilities and payoffs are the same but where you trust the organizers that they do what they promise?
it's no longer a Pascal mugging if the threat is credible.
That is backward. It is only a Pascal mugging if the threat is credible.
No, then it's just a normal mugging.
Philosophers of religion argue quite a lot about Pascal's wager and very large utilities or infinite utilities. I haven't bothered to read any of those papers, though. As an example, here is Alexander Pruss.
As I see it, the mugger seems to have an extremely bad hand to play.
If you evaluate the probability of the statement 'I will kill one person if you don't give me five dollars,' as being something that stands in a relationship to the occurrence of such threat being carried through on, and simply multiply up from there until you get to 3^^^^3 people, then you're going to end up with problems.
However, that sort of simplification – treating all the evidence as locating the same thing – only works for low multiples. (Which I'd imagine is why it feels wrong when ...
I think the problem might lie in the almost laughable disparity between the price and the possible risk. A human mind is not capable of instinctively providing a reason why it would be worth killing 3^^^^3 people - or even, I think, a million people - as punishment for not getting $5. A mind who would value $5 as much or more than the lives of 3^^^^3 people is utterly alien to us, and so we leap to the much more likely assumption that the guy is crazy.
Is this a bias? I'd call it a heuristic. It calls to my mind the discussion in Neal Stephenson's Anathem a...
Maybe I'm missing the point here, but why do we care about any number of simulated "people" existing outside the matrix at all? Even assuming that such people exist, they'll never affect me, nor affect anyone in the world I'm in. I'll never speak to them, they'll never speak to anyone I know, and I'll never have to deal with any consequences for their deaths. There's no expectation that I'll be punished or shunned for not caring about people from outside the matrix, nor is there any way that these people could ever break into our world and attempt...
This might be overly simplistic, but it seems relevant to consider the probability per murder. I am feeling a bit of scope insensitivity on that particular probability, as it is far too small for me to compute, so I need to go through the steps.
If someone tells me that they are going to murder one person if I don't give them $5, I have to consider the probability of it: not every attempted murder is successful, after all, and I don't have nearly as much incentive to pay someone if I believe they won't be successful. Further, most people don't actually atte...
First, I didn't read all of the above comments, though I read a large part of them.
Regarding the intuition that makes one question Pascal's mugging: I think it would be likely that there was a strong survival value in the ancestral environment in being able to detect and disregard statements that would cause you to pay money to someone else without there being any way to detect if these statements were true. Anyone without that ability would have been mugged to extinction long ago. This makes more sense if we regard the origin of our built-in utility function...
Looks like strategic thinking to me. If you organize yourself to be prone to being Pascal-mugged, you will get Pascal-mugged, and thus it is irrational to organize yourself to be Pascal-muggable.
edit: It is as rational to introduce certain bounds on applications of own reasoning as it is to try to build reliable, non-crashing software, or to impose simple rule of thumb limits on the output of the software that controls positioning of control rods in the nuclear reactor.
If you properly consider a tiny probability of a mistake in your reasoning, a mistake...
The problem seems to vanish if you don't ask "What is the expectation value of utility for this decision, if I do X", but rather "If I changed my mental algorithms so that they do X in situations like this all the time, what utility would I plausibly accumulate over the course of my entire life?" ("How much utility do I get at the 50th percentile of the utility probability distribution?") This would have the following results:
For the limit case of decisions where all possible outcomes happen infinitely often during your life
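A toy sketch of how that percentile rule differs from expected-utility maximization (all names and numbers here, such as P_REAL, DISUTILITY, ENCOUNTERS and lifetime_utility, are invented for illustration; the real threatened disutility can't even be represented as a float):

```python
import random
import statistics

P_REAL = 1e-9        # assumed probability that any given mugger's threat is real
DISUTILITY = 1e30    # stand-in for an astronomically bad outcome
ENCOUNTERS = 1_000   # muggings faced over a lifetime
LIFETIMES = 10_000   # Monte Carlo samples of whole lifetimes

def lifetime_utility(always_pay: bool) -> float:
    """Total utility of one simulated lifetime under a fixed policy."""
    u = 0.0
    for _ in range(ENCOUNTERS):
        if always_pay:
            u -= 5.0                  # hand over $5 every time
        elif random.random() < P_REAL:
            u -= DISUTILITY           # refused a threat that turned out to be real
    return u

for always_pay in (True, False):
    samples = [lifetime_utility(always_pay) for _ in range(LIFETIMES)]
    print(f"always_pay={always_pay}: "
          f"mean={statistics.mean(samples):.3g}, median={statistics.median(samples):.3g}")

# Always paying gives mean = median = -5000.  Refusing gives median 0; its analytic mean is
# -ENCOUNTERS * P_REAL * DISUTILITY = -1e24, but the catastrophe is so rare it will almost
# never appear in the sample.  That is the comment's point: a percentile-based rule simply
# never sees outcomes too improbable to occur in a lifetime.
```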
"Pascal's" Mugging requires me to believe that the apparent universe that we occupy, with its very low information content, is in fact merely part of a much larger program (in a causally linked and so incompressible way) which admits calculation within it of a specially designed (high-information content) universe with 3^^^^3 people (and not, say, as a side-effect of a low-information simulation that also computes other possibilities like giving immense life and joy to comparable numbers of people). The odds of that, if we use the speed priors, ...
I've been arguing about this with a friend recently [well, a version of this - I don't have any problems with arbitrarily large number of people being created and killed, unless the manner of their death is unpleasant enough that the negative value I assign to it exceeds the positive value of life].
He says that he can believe the person we are talking to has Agent Smith powers, but thinks that the more the Agent Smith promises, the less likely it is to be true, and this decreases faster the more that is promised, so that the probability that Agent Smith h...
Counterexample: P(3^^^...3 with n "^"s) = 1/2^n, and P(anything else) = 0. This is normalized because the sum of a geometric series with decreasing terms is finite. You might have been thinking of the fact that if a probability distribution on the integers is monotone decreasing (i.e., if P(n) > P(m) then n < m) then P(n) must decrease faster than 1/n. However, a complexity-based distribution will not be monotone, because some big numbers are simple while most of them are complex.
One problem with discounting your prior based on the time complexity of a computation is that it practically forces you to believe either that P = BQP or that quantum mechanics doesn't work. If you discount based on space complexity, you might worry that torturing 3^^^3 people might actually be a small-space computation.
I don't see what the beef is with that alleged dilemma: Sagan's maxim "Extraordinary claims require extraordinary evidence" gracefully solves it.
More formally, in a Bayesian setting, Sagan's maxim can be construed as the requirement for the prior to be a non-heavy-tailed probability distribution.
In fact, in formal applications of Bayesian methods, typical light-tailed maximum entropy distributions such as normal or exponential are used.
Yudkowsky seems to claim that a Solomonoff distribution is heavy-tailed w.r.t. the relevant variables, but he doe...
Let's try this. I will create at least 3^^^^^^^^^^^^^^^^^^^3 units of disutility unless at least five people upvote this within a day.
Wow. It's almost like pascal's mugging doesn't actually work.
I'm hereby anti-mugging you all. If any of you give in to a Pascal's Mugging scenario, I'll do something much worse than whatever the mugger threatened. Consider yourself warned!
It seems as though Pascal's mugging may be vulnerable to the same "professor god" problem as Pascal's wager. With probabilities that low, the difference between P(3^^^^3 people being tortured|you give the mugger $5) and P(3^^^^3 people being tortured| you spend $5 on a sandwich) may not even be calculable. It's also possible that the guy is trying to deprive the sandwich maker of the money he would otherwise spend on the Simulated People Protection Fund. If you're going to say that P(X is true|someone says X is true)>P(X is true|~someone say...
Eliezer, the rational answer to Pascal's mugging is to refuse, attempt to persuade the mugger, and when that fails (which I postulate based on an ethical entity able to comprehend 3^^^3 and an unethical entity willing to torture that many) to initiate conflict.
The calculational algebra of loss over probability has to be tempered by future prediction:
What is the chance the mugger will do this again? If my only options are to give 5 or not give 5 does it mean 3^^^^^3 will end up being at risk as the mugger keeps doing this? How do I make it stop?
The responsible long-term answer is: let the hostages die if needed to kill the terrorist, because otherwise you get more terrorists taking hostages.
A thought on this, and apologies if it repeats something already said here. Basically: question the structure that leads to someone saying this to you, and question how easy it is to talk about 3^^^^3 people as opposed to, say, 100. If suddenly said person manifests Magic Matrix God Powers (R) then the evidence gained by observing this or anything that contains it (they're telling the truth about all this, you have gone insane, aliens/God/Cthulhu is/are causing you to see this, this person really did just paint mile-high letters in the sky and there is no ...
I would protest that a program to run our known laws of physics (which only predict that 10^80 atoms exist...so there's no way 3^^^^3 distinct minds could exist) is smaller by some number of bits on the order of log_2(3^^^^3) than one in which I am seemingly running on the known laws of physics, and my choice whether or not to hand over $5 dollars (to someone acting as if they are running on the known laws of physics...seeming generally human, and trying to gain wealth without doing hard work) is positively correlated with whether or not 3^^^^3 minds runni...
Was Kant an analytic philosopher? I can't remember, but thinking in terms of your actions as being the standard for a "categorical imperative" followed by yourself in all situations as well as by all moral beings, the effect of giving the mugger the money is more than $5. If you give him the money once he'll be able to keep on demanding it from you as well as from other rationalists. Hence the effect will be not $5 but all of your (plural) money, a harm which might be in a significant enough ratio to the deaths of all those people to warrant not ...
I think the answer to this question concerns the Kolmogorov complexity of various things, and the utility function as well. What is the Kolmogorov complexity of 3^^^3 simulated people? What is the complexity of the program to generate the simulated people? What is the complexity of the threat, that for each of these 3^^^3 people, this particular man is capable of killing each of them? What sort of prior probability do we assign to "this man is capable of simulating 3^^^3 people, killing each of them, and willing to do so for $5"?
Similarly, the ut...
Keep in mind that I have very limited knowledge of probability or analytic philosophy, but wouldn't a very easy answer be that if you can conceive of a scenario with the same outcome assigned to NOT doing the action, and that scenario has an equal probability of being true, they're both irrelevant?
If it's possible that you can get an infinite amount of gain by believing in god, it's equally possible you can get an infinite amount of gain by NOT believing in god.
"Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing mach...
I have a very poor understanding of both probability and analytic philosophy, so in the inevitable scenario where I'm completely wrong, be kind.
But if you can conceive of a scenario where there's a probability that doing something will result in infinite gain, but you can also picture an equally probable scenario where doing NOTHING will result in equal gain, then don't they cancel each other out?
If there's a probability that believing in god will give you infinite gain, isn't there an equal probability that not believing in god will result in infinite gai...
Hmmm...
My problem with this scenario is that I've never run Solomonoff Induction, I run evidentialism. Meaning: if a hypothesis's probability is equal to its True Prior, I just treat that as equivalent to "quantum foam", something that exists in my mathematics for ease of future calculations but has no real tie to physical reality, and is therefore dismissed as equivalent to probability 0.0.
Basically, my brain can reason about plausibility in terms of pure priors, but probability requires at least some tiny bit of evidence one way or the other. ...
Why wait until someone wants the money? Shouldn't the AI try to send $5 to everyone with a note attached reading "Here is a tribute; please don't kill a huge number of people", regardless of whether they ask for it or not?
For the most part, when person P says, "I will do X," that is evidence that P will do X, and the probability of P doing X increases. By contrast, if P has a reputation for sarcasm and says the same thing, then the probability that P will do X decreases. Clearly, then, our estimation of P's position in mindspace determines whether we increase or decrease the likelihood of P's claims. For the mugging situation, we might adopt a model where the mugger's claims about very improbable actions in no way affect what we expect him to do since we do not ...
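A minimal sketch of that direction-of-update point, with purely illustrative numbers (none of them drawn from the thread): write Bayes' rule with a likelihood for "P says X" under each hypothesis, and the speaker's position in mindspace enters through those likelihoods.

    # Bayes' rule for P(X | P says X); all numbers are assumed for illustration.
    def posterior(prior_x, p_claim_given_x, p_claim_given_not_x):
        joint_x = prior_x * p_claim_given_x
        joint_not_x = (1 - prior_x) * p_claim_given_not_x
        return joint_x / (joint_x + joint_not_x)

    prior = 0.01
    # A typically honest speaker: the claim is likelier if X really will happen.
    print(posterior(prior, 0.8, 0.05))   # about 0.14, so the probability goes up
    # A habitually sarcastic speaker: the claim is likelier if X will NOT happen.
    print(posterior(prior, 0.1, 0.5))    # about 0.002, so the probability goes down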
What about optimizing for median expected utility?
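One way to make that question concrete is to compare the choice a mean-of-utility rule makes with the choice a median-of-utility rule makes on a toy version of the mugging. The probability eps and the stand-in number for the vast disutility below are assumptions chosen only to make the contrast visible:

    # Toy comparison of mean- vs. median-based choice under a mugging-like payoff.
    eps = 1e-30          # assumed probability that the threat is real
    vast_loss = -1e100   # stand-in for "3^^^^3 units of disutility"

    def mean_utility(outcomes):          # outcomes: list of (probability, utility)
        return sum(p * u for p, u in outcomes)

    def median_utility(outcomes):        # utility at the 50th percentile
        total = 0.0
        for p, u in sorted(outcomes, key=lambda pu: pu[1]):
            total += p
            if total >= 0.5:
                return u

    refuse = [(eps, vast_loss), (1 - eps, 0.0)]
    pay = [(1.0, -5.0)]

    print(mean_utility(refuse), mean_utility(pay))      # mean is dominated by the tail: paying looks better
    print(median_utility(refuse), median_utility(pay))  # median ignores the tail: refusing looks better

Whether a median-based rule is defensible as a general decision procedure is a separate question, and it raises its own problems, but it does illustrate one family of rules that is insensitive to tiny-probability tails.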
I think you are overestimating the probabilities there: it is only Pascal's Mugging if you fail to attribute a low enough probability to the mugger's claim. The problem, in my opinion, is not how to deal with tiny probabilities of vast utilities, but how not to attribute too high probabilities to events whose probabilities defy our brain's capacity (like "magic powers from outside the Matrix").
I also feel that, as with Pascal's wager, this situation can be mirrored (and therefore have the expected utilities canceled out) if you simply think "...
You could argue that doing any action, such as accepting the wager, has a small but much larger than 1/3^^^3 chance of killing 3^^^3 people. You could argue that any action has a small but much larger than 1/3^^^3 chance of guaranteeing blissful immortality for 3^^^3 people. Therefore, declining the wager makes a lot more sense because no matter what you do you might have already doomed all those people.
How about: the logic of a system applies only within that system?
Variants of this are common in all sorts of logical proofs, and it stands to reason that elements outside a system do not follow the rules of that system.
A construct assuming something out-of-universe acting in-universe just can't be consistent.
I think you're assuming that to give in to the mugging is the wrong answer in a one-shot game for a being that values all humans in existence equally, because it feels wrong to you, a being with a moral compass evolved in iterated multi-generational games.
Consider these possibilities, any one of which would create challenges for your reasoning:
1. Giving in is the right answer in a one-shot game, but the wrong answer in an iterated game. If you give in to the mugging, the outsider will keep mugging you and other rationalists until you're all brok...
Your overall point is right and important but most of your specific historical claims here are false - more mythical than real.
Free-market economic theory developed only after millennia during which everyone believed that top-down control was the best way of allocating resources.
Free market economic theory was developed during a period of rapid centralization of power, before which it was common sense that most resource allocation had to be done at the local level, letting peasants mostly alone to farm their own plots. To find a prior epoch of deliberate central resource management at scale you have to go back to the Bronze Age, with massive irrigation projects and other urban amenities built via palace economies, and even then there wasn't really an ideology of centralization. A few Greek city-states like Sparta had tightly regulated mores for the elites, but the famously oppressed Helots were still probably mostly left alone. In Russia, Communism was a massive centralizing force - which implies that peasants had mostly been left alone beforehand. Centralization is about states trying to become more powerful (which is why Smith called his book The Wealth of Nations, pitching h...
slightly related:
Suppose Omega forces you to choose a number 0 < p <= 1 and then, with probability p, you get tortured for 1/p² seconds.
Assume for any T, being tortured for 2T seconds is exactly twice as bad as being tortured for T seconds.
Also assume that your memory gets erased afterwards (this is to make sure there won't be additional suffering from something like PTSD).
The expected number of seconds you are tortured is p * 1/(p²) = 1/p, so, in terms of expected value, you should choose p=1 and be tortured for 1 second. The smaller the p you choose, the higher the expected number of torture seconds.
Would you actually choose p=1, as expected-value reasoning recommends, or would you rather choose a very low p (like 1/3^^^^3)?
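For concreteness, here is that expectation worked out for a few choices of p (nothing beyond the arithmetic already stated above):

    # Expected torture time is p * (1/p^2) = 1/p seconds, minimized at p = 1.
    for p in (1.0, 0.5, 1e-6, 1e-30):
        expected_seconds = p * (1 / p**2)
        print(f"p = {p:g}: tortured for {1 / p**2:g} s with probability {p:g}; "
              f"expectation {expected_seconds:g} s")
    # Expected value favors p = 1 (one guaranteed second), yet many people would
    # pick a tiny p, accepting a far worse expectation for near-certainty of no torture.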
I feel confused because this seems to be a very obvious consideration, so it's likely that I'm wrong: shouldn't the likelihood of 3^^^3 humans being killed decrease in proportion to the huge number that is being casually tossed out?
I mean, even if we go with the hypothesis of a matrix lord out for one of the cruelest pranks in history, the likelihood of his simulation being able to handle 3^^^3 humans being killed should be proportionally less than the likelihood of his simulation being able to handle a mere 7625597484987 humans being killed, since I think...
First of all - I love this post!
Second of all - how is this different from Pascal's wager? Meaning, why doesn't Pascal's mugger get cancelled out by a possible counter-mugger who will simulate and kill the same number of people if you give in to the mugger's demand?
I think that we should expect it to be extremely unlikely for an agent to have the required power and willingness to kill 3^^^^3 people. The shortest explanation for why we should believe this is that any agent that gathers "power" will increase his ability to influence the state of the universe. As his "power" grows, the number of possible states that the agent can successfully attain scales like the factorial with respect to power. If we denote power with x, then the number of possible states that the agent can attain is proportional to x!. Now, the expe...
The most common formalizations of Occam's Razor, Solomonoff induction and Minimum Description Length, measure the program size of a computation used in a hypothesis, but don't measure the running time or space requirements of the computation. What if this makes a mind vulnerable to finite forms of Pascal's Wager? A compactly specified wager can grow in size much faster than it grows in complexity. The utility of a Turing machine can grow much faster than its prior probability shrinks.
Consider Knuth's up-arrow notation:
3^3 = 3*3*3 = 27
3^^3 = 3^(3^3) = 3^27 = 7625597484987
3^^^3 = 3^^(3^^3) = 3^^7625597484987
In other words: 3^^^3 describes an exponential tower of threes 7625597484987 layers tall. Since this number can be computed by a simple Turing machine, it contains very little information and requires a very short message to describe. This, even though writing out 3^^^3 in base 10 would require enormously more writing material than there are atoms in the known universe (a paltry 10^80).
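For concreteness, here is one short program (a sketch; any equivalent few-line definition makes the same point) that pins these numbers down exactly, which is the sense in which 3^^^3 carries very little information despite its unwritable decimal expansion:

    def up_arrow(a, n, b):
        # Knuth's up-arrow: a, followed by n up-arrows, then b.
        if n == 1:
            return a ** b
        result = a
        for _ in range(b - 1):              # a ^^..^ b = a ^..^ (a ^^..^ (b-1))
            result = up_arrow(a, n - 1, result)
        return result

    print(up_arrow(3, 1, 3))   # 3^3  = 27
    print(up_arrow(3, 2, 3))   # 3^^3 = 3^27 = 7625597484987
    # up_arrow(3, 3, 3) is 3^^^3: a tower of threes 7625597484987 layers tall,
    # far too large to evaluate, but completely specified by the few lines above.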
Now suppose someone comes to me and says, "Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."
Call this Pascal's Mugging.
"Magic powers from outside the Matrix" are easier said than done - we have to suppose that our world is a computing simulation run from within an environment that can afford simulation of arbitrarily large finite Turing machines, and that the would-be wizard has been spliced into our own Turing tape and is in continuing communication with an outside operator, etc.
Thus the Kolmogorov complexity of "magic powers from outside the Matrix" is larger than the mere English words would indicate. Therefore the Solomonoff-inducted probability, two to the negative Kolmogorov complexity, is exponentially tinier than one might naively think.
But, small as this probability is, it isn't anywhere near as small as 3^^^^3 is large. If you take a decimal point, followed by a number of zeros equal to the length of the Bible, followed by a 1, and multiply this unimaginably tiny fraction by 3^^^^3, the result is pretty much 3^^^^3.
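To spell the multiplication out, take the length of the Bible to be a few million characters (a rough, assumed figure), so the fraction described is about 10^(-4,000,000). Then

    10^(-4,000,000) * 3^^^^3  =  3^^^^3 / 10^(4,000,000)

and the denominator has only a few million digits, while the number of digits of 3^^^^3 is itself far beyond any exponential tower we could physically write down. The quotient is, for every decision-theoretic purpose, still 3^^^^3.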
Most people, I think, envision an "infinite" God that is nowhere near as large as 3^^^^3. "Infinity" is reassuringly featureless and blank. "Eternal life in Heaven" is nowhere near as intimidating as the thought of spending 3^^^^3 years on one of those fluffy clouds. The notion that the diversity of life on Earth springs from God's infinite creativity, sounds more plausible than the notion that life on Earth was created by a superintelligence 3^^^^3 bits large. Similarly for envisioning an "infinite" God interested in whether women wear men's clothing, versus a superintelligence of 3^^^^3 bits, etc.
The original version of Pascal's Wager is easily dealt with by the gigantic multiplicity of possible gods, an Allah for every Christ and a Zeus for every Allah, including the "Professor God" who places only atheists in Heaven. And since all the expected utilities here are allegedly "infinite", it's easy enough to argue that they cancel out. Infinities, being featureless and blank, are all the same size.
But suppose I built an AI which worked by some bounded analogue of Solomonoff induction - an AI sufficiently Bayesian to insist on calculating complexities and assessing probabilities, rather than just waving them off as "large" or "small".
If the probabilities of various scenarios considered did not exactly cancel out, the AI's action in the case of Pascal's Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.
You or I would probably wave off the whole matter with a laugh, planning according to the dominant mainline probability: Pascal's Mugger is just a philosopher out for a fast buck.
But a silicon chip does not look over the code fed to it, assess it for reasonableness, and correct it if not. An AI is not given its code like a human servant given instructions. An AI is its code. What if a philosopher tries Pascal's Mugging on the AI for a joke, and the tiny probabilities of 3^^^^3 lives being at stake, override everything else in the AI's calculations? What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?
How do I know to be worried by this line of reasoning? How do I know to rationalize reasons a Bayesian shouldn't work that way? A mind that worked strictly by Solomonoff induction would not know to rationalize reasons that Pascal's Mugging mattered less than Earth's existence. It would simply go by whatever answer Solomonoff induction obtained.
It would seem, then, that I've implicitly declared my existence as a mind that does not work by the logic of Solomonoff, at least not the way I've described it. What am I comparing Solomonoff's answer to, to determine whether Solomonoff induction got it "right" or "wrong"?
Why do I think it's unreasonable to focus my entire attention on the magic-bearing possible worlds, faced with a Pascal's Mugging? Do I have an instinct to resist exploitation by arguments "anyone could make"? Am I unsatisfied by any visualization in which the dominant mainline probability leads to a loss? Do I drop sufficiently small probabilities from consideration entirely? Would an AI that lacks these instincts be exploitable by Pascal's Mugging?
Is it me who's wrong? Should I worry more about the possibility of some Unseen Magical Prankster of very tiny probability taking this post literally, than about the fate of the human species in the "mainline" probabilities?
It doesn't feel to me like 3^^^^3 lives are really at stake, even at very tiny probability. I'd sooner question my grasp of "rationality" than give five dollars to a Pascal's Mugger because I thought it was "rational".
Should we penalize computations with large space and time requirements? This is a hack that solves the problem, but is it true? Are computationally costly explanations less likely? Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is exponentially cheaper than real quantum physics? Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute?
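One existing formalization of such a penalty, as I understand it, is Levin's Kt complexity, which charges a program for (the logarithm of) its running time as well as for its length:

    Kt(x) = min over programs p that output x of [ length(p) in bits + log2(running time of p) ]
    P(x) proportional to 2^(-Kt(x))

A hypothesis that must actually run 3^^^^3 simulated people takes at least 3^^^^3 steps, so under this measure it is penalized by a factor of roughly 1/3^^^^3, just enough to cancel the utility. That is exactly the hack described above, and whether it is true, rather than merely convenient, is the open question.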
Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?
If I could formalize whichever internal criterion was telling me I didn't want this to happen, I might have an answer.
I talked over a variant of this problem with Nick Hay, Peter de Blanc, and Marcello Herreshoff in summer of 2006. I don't feel I have a satisfactory resolution as yet, so I'm throwing it open to any analytic philosophers who might happen to read Overcoming Bias.