Sewing-Machine comments on Open Thread, August 2010-- part 2 - Less Wrong

Post author: NancyLebovitz 09 August 2010 11:18PM


Comment author: [deleted] 26 August 2010 05:39:54AM 2 points [-]

I just read and liked "Pascal's mugging." It was written a few years ago, and the wiki is pretty spare. What's the state of the art on this problem?

Comment author: gwern 26 August 2010 11:10:54AM *  3 points [-]

I haven't seen much response to it. There's a reply in Analysis by Baumann, who takes a cheap way out by simply saying that one cannot provide the probability in advance, that it's 'extremely implausible'.

I have an unfinished essay where I argue that, as presented, the problem is asking for a uniform distribution over an infinite set, so you cannot give the probability in advance; but I haven't yet come up with a convincing argument for why you would want your probability to scale down in proportion as the mugger's offer scales up.

That is: it's easy to show that scaling disproportionately leads to another mugging. If you scale superlinearly, then the mugging can be broken up into an ensemble of offers that add up to a mugging. If you scale sublinearly, you will refuse sensible offers that are broken up.
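
In toy form, with an illustrative ansatz p(H) = c/H^a for the probability assigned to an offer of size H (the functional form, the constants, and the per-offer framing here are assumptions of mine, just to make the three failure modes visible):

    # p(H) = c * H**(-a): a = 1 is linear discounting, a > 1 superlinear,
    # a < 1 sublinear. Decisions are made one offer at a time, so what
    # matters is the per-offer expected value p(h) * h = c * h**(1 - a).
    c = 1.0

    def per_offer_value(h, a):
        return c * h ** (-a) * h          # = c * h**(1 - a)

    H = 10 ** 12                           # one big offer...
    N = 10 ** 6                            # ...or the same stake in N pieces
    for a, label in ((0.5, "sublinear"), (1.0, "linear"), (2.0, "superlinear")):
        whole = per_offer_value(H, a)
        piece = per_offer_value(H / N, a)
        print(f"{label:>11}: whole worth {whole:.3g}, each piece worth {piece:.3g}")
    # superlinear: the whole is negligible but every piece looks valuable,
    #   so an ensemble of small offers adds up to a mugging;
    # sublinear: each piece looks negligible, so sensible offers delivered
    #   in installments get refused;
    # linear: every offer is worth exactly c, a fixed maximum loss per offer.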

But I haven't come up with any deeper justification for linearly scaling other than 'this apparently arbitrary numeric procedure avoids 3 problems'. I've sort of given up on it, as you can see from the parlous state of my essay.

Comment author: [deleted] 26 August 2010 03:28:28PM 0 points [-]

Thanks. Here's my fresh and uneducated opinion.

I see four kinds of answers to the mugging:

  1. We're boned
  2. Some kind of hack in the decision-making process
  3. Some kind of hack in the mathematics
  4. "Head on." That is, prove that the expected disutility of a given threat is bounded independent of the size of the threat.

Here's my analysis in the sense of 4.; tell me if I'm making a common mistake. We are worried that P(agent can do H amount of harm | agent threatens to do H amount of harm) times H can be arbitrarily large. As Tarleton pointed out in the 2007 post, any details beyond H about the scenario we're being threatened with are a distraction (right? That actually doesn't seem to be the implicit assumption of your draft, or of Hanson's comment, etc.).

By Bayes the quantity in question is the same as

P(threat | ability)/P(threat) x P(ability) x H

Our hope is that we can prove this quantity is actually bounded independent of H (but of course not independent of the agent making the threat). I'll leave aside the fact that the probability that such a proof contains a mistake is certainly bounded below.

P(threaten H) is the probability that a certain computer program (the agent making the threat) will give a certain output (the threat). My feeling about this number is that it is medium-sized if H has low complexity (such as 3^^^3) and tiny if H has high complexity (such as some of the numbers within 10% of 3^^^3). That is, since P(threat) sits in the denominator above, complex threats have more credibility. I'm comforted by the fact that, by the definition of complexity, it would take a long time for an agent to articulate his complex threat. So let's assume P(threaten H) is medium-sized, as in the original version where H = 3^^^3 x the value of a human not being tortured.

It seems like wishful thinking that P(threat | ability) should shrink with H. Let's assume this is also medium-sized and does not depend on H.

So I think the question boils down to how fast P(agent can do H amount of harm) shrinks with H. If it's O(1/H) we're OK, and if it's larger we're boned.
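
A quick numeric check of that last sentence (the decay exponents and the grid of H values are illustrative assumptions of mine):

    # If P(agent can do H harm) ~ H**(-b), expected disutility is
    # H**(1 - b): bounded iff b >= 1, i.e. iff the probability shrinks
    # at least as fast as 1/H.
    def expected_disutility(H, b):
        return H ** (-b) * H              # = H**(1 - b)

    for b, label in ((1.5, "faster than 1/H"), (1.0, "exactly 1/H"), (0.5, "slower than 1/H")):
        values = [expected_disutility(10 ** k, b) for k in (3, 6, 9, 12)]
        print(f"{label:>16}: {[f'{v:.3g}' for v in values]}")
    # faster/exactly 1/H: stays bounded as H grows -- we're OK;
    # slower than 1/H: grows without bound -- we're boned.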

Comment author: Pavitra 27 August 2010 05:48:45AM 2 points [-]

As long as we're all chipping in, here's my take:

(1) Even if the correct answer is to hand over the money, we should expect to feel an intuitive sense that doing so is the wrong answer. A credible threat to inflict that much disutility would never have happened in the ancestral environment, but false threats to do so have happened rather often. That being the case, the following is probably rationalization rather than rationality:

(2) Consider the proposition that, at some point in my life, someone will try to Pascal's-mug me and actually back their threats up. In this case, I would still expect to receive a much larger number of false threats over the course of my lifetime. If I hand over all my money to the first mugger without proper verification, I won't be able to pay up when the real threat comes around.

Comment author: [deleted] 27 August 2010 06:16:02AM 0 points [-]

I think that your (2) is a proof that handing over the money is the wrong answer. My understanding is that the problem is whether this means that any AI that runs on the basic package we sometimes envision hazily -- prior, (unbounded) utility function, algorithm for choosing based somehow on multiplying the former by the latter -- is boned.

Comment author: Pavitra 27 August 2010 06:44:23AM 1 point [-]

I thought that my (2) was a proof that a prior-and-utility system will correctly decide to investigate the claim to see whether it's credible.

Comment author: [deleted] 27 August 2010 08:15:46PM 1 point [-]

But what a prior-and-utility system means by "credible" is that the expected disutility is large. If a blackmailer can, at finite cost to itself, put our AI in a situation with arbitrarily high expected disutility, then our AI is boned.

Comment author: Pavitra 27 August 2010 08:25:51PM 0 points [-]

Ah, you're worried about a blackmailer that can actually follow up on that threat. I would point out that humans usually pay ransoms, so it's not exactly making a different decision than we would in the same situation.

Or, the AI might anticipate the problem and self-modify in advance to never submit to threats.

Comment author: [deleted] 27 August 2010 08:37:38PM 0 points [-]

I'm worried about a blackmailer that can with positive probability follow up on that threat.

Yes, humans behave in the same way, at least according to economists. We pay ransoms when the probability of the threat being carried out, times the disutility that would result from the threat being carried out, exceeds the ransom. The difference is that for human-scale threats, this expected disutility does seem to be bounded.

The AI might anticipate the problem and self-modify to never submit to threats

That could mean one of at least two things: either the AI starts to work according to the rules of a (hitherto not conceived?) non-prior-and-utility system, or the AI calibrates its prior and its utility function so that it doesn't submit to (some) threats. I think the question is whether something like the second idea can work.

Comment author: Pavitra 27 August 2010 08:51:16PM -1 points [-]

No, see, that's different.

If you're dealing with a blackmailer that might be able to carry out their threats, then you investigate whether they can or not. The blackmailer themselves might assist you with this, since it's in their interest to show that their threat is credible.

Allow me to demonstrate: Give $100 to the EFF or I'll blow up the sun. Do you now assign a higher expected utility to giving $100 to the EFF, or to giving the same $100 instead to SIAI? If I blew up the moon as a warning shot, would that change your mind?

Comment author: gwern 27 August 2010 04:57:09AM 1 point [-]

That is, complex threats have more credibility.

I don't quite follow this. Assuming we're using one of the universal priors based on Turing-machine enumerations, an agent which consists of '3^^^3 threat + no ability' is much shorter and much more likely than an agent which consists of '~.10*3^^^3 threat + ability'. The more complex the threat, the less space there is for executing it.
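
In toy form (my own illustration, with the caveat that real Kolmogorov complexity is about minimal programs, not this particular one): the program text naming 3^^^3 is a few bytes, while pinning down a typical number within 10% of it takes on the order of log2(3^^^3) incompressible bits, leaving that much less of the machine for the 'ability' part.

    # Knuth's up-arrow: up(a, n, b) = a ^...^ b with n arrows. Never call
    # it as up(3, 3, 3) -- the value is astronomically large. The point is
    # only that its *description* is a handful of bytes.
    def up(a, n, b):
        if n == 1:
            return a ** b
        result = a
        for _ in range(b - 1):
            result = up(a, n - 1, result)
        return result

    print(up(2, 2, 3))          # 2^^3 = 16: a safe, small sanity check
    print(len("up(3, 3, 3)"))   # ~11 bytes suffice to name 3^^^3
    # A 'random' number within 10% of 3^^^3 admits no description much
    # shorter than itself: ~log2(3^^^3) bits spent on the threat, and that
    # much less room for 'ability' machinery in a machine of given length.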

Comment author: [deleted] 27 August 2010 05:30:02AM *  0 points [-]

If I disagree, it's for a very minor reason, and with only a little confidence. (P(threat) is short for P(threat|no information about ability).) But you're saying the case for P(threaten H) being bounded below (and its reciprocal being bounded above) is even stronger than I thought, right?

Another way to argue that P(threaten H) should be medium-sized: at least in real life, muggings have a time-limit. There are finitely many threats of a hundred words or less, and so our prior probability that we will one day receive such a threat is bounded below.

Another way to argue that the real issue is P(ability H): our AI might single you out and compute P(gwern will do H harm) = P(gwern will do H harm | gwern can do H harm) x P(gwern can do H harm). It seems like you have an interest in convincing the AI that P(gwern can do H harm) x H is bounded above.

Comment author: gwern 28 August 2010 06:07:25PM 0 points [-]

While raking, I think I finally thought of a proof that the before-offer probability can't be known.

The question is basically 'what fraction of all Turing machines making an offer (which is accepted) will then output a certain result?'

We could rewrite this as 'what is the probability that a random Turing machine will output a certain result?'

We could then devise a rewriting of all those Turing machines into Turing machines that halt or not when their offer is accepted (e.g. halting might = delivering, not halting = welshing on the deal; this is like Rice's theorem).

Now we are asking 'what fraction of all these Turing machines will halt?'

However, this is asking 'what is Chaitin's constant for this rewritten set of Turing machines?' and that is uncomputable!

Since Turing machine-based agents are a subset of all agents that might try to employ Pascal's Mugging (even if we won't grant that agents must be Turing machines), the probability is at least partially uncomputable. A decision procedure which entails uncomputability is unacceptable, so we reject giving the probability in advance, and so our probability must be contingent on the offer's details (like its payoff).
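
A minimal sketch of the obstruction, with toy stand-in 'machines' of my own construction (generators we can step one move at a time): simulation under any finite step budget only ever yields a lower bound on the halting fraction, and no budget certifies the non-halters.

    # For real Turing machines, no finite step budget proves non-halting,
    # so the running estimate below is only ever a lower bound.
    def halts_after(n):
        def machine():
            for _ in range(n):
                yield
        return machine

    def runs_forever():
        def machine():
            while True:
                yield
        return machine

    machines = [halts_after(3), halts_after(100), runs_forever(),
                halts_after(10 ** 4), runs_forever()]

    def halting_fraction_lower_bound(machines, budget):
        halted = 0
        for m in machines:
            steps = m()
            try:
                for _ in range(budget):
                    next(steps)
                # budget exhausted: maybe it halts later, maybe never
            except StopIteration:
                halted += 1
        return halted / len(machines)

    for budget in (10, 1_000, 100_000):
        print(budget, halting_fraction_lower_bound(machines, budget))
    # The estimate creeps upward (0.2, 0.4, 0.6), but no computation ever
    # tells us it has converged to the true halting fraction.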

Thoughts?

Comment author: Wei_Dai 28 August 2010 08:27:36PM 4 points [-]

I think Nesov is right: you've basically (re)discovered that the universal prior is uncomputable, and thought that this result is related to Pascal's Mugging because you made the discovery while thinking about Pascal's Mugging. Pascal's Mugging seems to be more about the utility function having to be bounded in some way.

You might be interested in this thread, where I talked about how a computable decision process might be able to use an uncomputable prior:

http://groups.google.com/group/one-logic/browse_frm/thread/b499a90ef9e5fd84/2193ca2c204a55d8?#2193ca2c204a55d8

Comment author: Vladimir_Nesov 28 August 2010 06:36:53PM 3 points [-]

It seems to be an argument against the possibility of making any decision, and hence not a valid argument about this particular decision. Under the same assumptions, you could in principle formalize any situation in this way. (The problem boils down to the uncomputability of the universal prior itself.)

Besides, not making the decision is not an option, so you have to fall back on some default decision when you don't know how to choose, but where does this default come from?

Comment author: gwern 12 October 2010 08:29:58PM 0 points [-]

I take it as an argument against making perfect decisions. If perfection is uncomputable, then any computable agent is not perfect in some way.

The question is: what imperfection do we want our agent to have? This might be the deep justification for choosing to scale probability by utility that I was looking for. Scaling linearly corresponds to being willing to lose a fixed amount to mugging, scaling superlinearly corresponds to not being willing to lose any genuine offer, and scaling sublinearly corresponds to never being willing to be fooled. Or something like that. The details need some work.

Comment author: Perplexed 28 August 2010 06:21:50PM 2 points [-]

In order to make a decision, we do not always need an exact probability: sometimes just knowing that a probability is less than, say, 0.5 is enough to determine the correct decision. So, even though an exact probability p may be incomputable, that doesn't mean that the truth value of the statement "p<0.1" cannot be computed (for some particular case). And that computation may be all we need.
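
In sketch form (the interval, stakes, and ransom below are made-up numbers): even a mere computable bracket [p_low, p_high] around an uncomputable p settles some decisions.

    # Decide whether to pay a ransom knowing only an interval that
    # brackets the (possibly uncomputable) probability of the threat.
    def decide(p_low, p_high, harm, ransom):
        if p_high * harm < ransom:
            return "refuse"      # even the worst case costs less than paying
        if p_low * harm > ransom:
            return "pay"         # even the best case costs more than paying
        return "undecided"       # interval too loose; tighten the bounds

    print(decide(0.0, 0.1, harm=500, ransom=100))    # refuse: 0.1*500 < 100
    print(decide(0.4, 0.5, harm=1000, ransom=100))   # pay: 0.4*1000 > 100
    print(decide(0.0, 0.5, harm=1000, ransom=100))   # undecided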

That said, I'm not sure exactly how to interpret "A decision procedure which entails uncomputability is unacceptable." Unacceptable to whom? Do decision procedures have to be deterministic? To be algorithms? To be recursive? To be guaranteed to terminate in a finite time? To be guaranteed to terminate in a bounded time? To be guaranteed to terminate by the deadline for making a decision?

Comment author: gwern 28 August 2010 06:40:19PM 0 points [-]

In order to make a decision, we do not always need an exact probability: sometimes just knowing that a probability is less than, say, 0.5 is enough to determine the correct decision.

Alright, so you compute away and determine that the upper bound on Chaitin's constant for your needed formalism is 0.01. The mugger then multiplies his offering by 100, and proceeds to mug you, no? (After all, you don't know whether the right probability is actually 0.01 or some smaller number.)

That said, I'm not sure exactly how to interpret "A decision procedure which entails uncomputability is unacceptable."

This is pretty intuitive to me - a decision procedure which cannot be computed cannot make decisions, and a decision procedure which cannot make decisions cannot do anything. I mean, do you have any reason to think that the optimal, correct decision theory is uncomputable?

Comment author: Perplexed 28 August 2010 07:16:39PM 1 point [-]

I have no idea whether we are even talking about the same problem. (Probably not, since my thinking did not arise from raking). But you do seem to be suggesting that the multiplication by 100 does not alter the upper bound on the probability. As I read the wiki article on "Pascal's Mugging", Robin Hanson suggests that it does. Assuming, of course, that by "his offering" you mean the amount of disutility he threatens. And the multiplication by 100 does also affect the number (in this example 0.01) which I need to know whether p is less than. Which strikes me as the real point.

This whole subject seems bizarre to me. Are we assuming that this mugger has Omega-like psy powers? Why? If not, how does my upper bound calculation and its timing have an effect on his "offer"? I seem to have walked into the middle of a conversation with no way from the context to guess what went before.