Comment author: common_law 18 October 2014 11:15:27PM *  0 points [-]

What about the problem that if you admit that logical propositions are only probable, you must admit that the foundations of decision theory and Bayesian inference are only probable (and treat them accordingly)? Doesn't this leave you unable to complete a deduction because of a vicious regress?

Comment author: common_law 25 July 2014 06:02:34PM 1 point [-]

A critical mistake in the lead analysis is the false assumption that where there is a causal relation between two variables, they will be correlated. This ignores that causes often cancel out. (Not perfectly, of course, but enough to make raw correlation a generally poor guide to causality.)

I think you have a fundamentally mistaken epistemology, gwern: you don't see that correlations only support causality when they are predicted by a causal theory.

Comment author: common_law 10 July 2014 08:16:08PM 4 points [-]

“how else could this correlation happen if there’s no causal connection between A & B‽”

The main way to correct for this bias toward seeing causation where there is only correlation follows from this introspection: be more imaginative about how it could happen (other than by direct causation).

[The causation bias (does it have a name?) seems to express the availability bias. So, the corrective is to increase the availability of the other possibilities.]

Comment author: common_law 06 June 2014 05:35:47PM -6 points [-]

You link ego depletion (willpower depletion and decision fatigue are alternative terms) to working memory, based on neuroscientists' refusal to reify the former. But you neglect that neuroscientists would also deny that working memory is "a thing."

The psychological findings are robust. That the first-proposed physiological explanation is dubious doesn't disqualify the phenomenon.

The opponent-process theory mentioned in the Wikipedia article is promising.

Who says willpower limitations are a function of limited capacity? This is something of an engineer's favored explanation. A better explanation is probably rooted in evolutionary psychology rather than inherent capacity limitations. We evolved with opponent processes governing control and gratification.

Attributing willpower depletion to "distraction" isn't an explanation. Distraction probably has causal relevance, but it isn't a magic wand to wave away psychological findings.

The OP is amateurish.

Comment author: Luke_A_Somers 28 May 2014 04:16:44PM 1 point [-]

It seems to me that you've pretty much solved the literal interpretation of the hypothetical, putting into words what everyone was already thinking.

The more relevant one is where you generated the probability and utility estimates yourself and are trying to figure out what to do about it.

Comment author: common_law 31 May 2014 11:29:28PM 0 points [-]

I intended nothing more than to solve the literal interpretation. This isn't my beaten path, and I don't intend to write more on the subject, beyond speculating about why an essentially trivial problem of "literal interpretation" has resisted articulation.

Comment author: Strilanc 28 May 2014 03:02:09AM 15 points [-]

"Solving" Pascal's Mugging involves giving an explicit reasoning system and showing that it makes the right decision.

It's not enough to just say "your confidence has to go down more than their claimed reward goes up". That part is obvious. The hard part is coming up with actual explicit rules that do that. Particularly ones that don't fall apart in other situations (e.g. the decision system "always do nothing" can't be pascal-mugged, but has serious problems).

Another thing not addressed here is that the mugger may be a hypothetical. For example, if the AI generates hypotheses where the universe affects 3^^^^3 people then all decisions will be dominated by these hypotheses because their outcomes outweigh their prior by absurd margins. How do you detect these bad hypotheses? How do you penalize them without excluding them? Should you exclude them?

Please give a more concrete situation with actual numbers and algorithms.

Comment author: common_law 31 May 2014 06:26:54PM 1 point [-]

I think you'll find the argument is clear without any formalization if you recognize that it is NOT the usual claim that confidence goes down. Rather, it's that the confidence falls below its contrary.

In philH's terms, you're engaging in pattern matching rather than taking the argument on its own terms.

Comment author: DanielLC 28 May 2014 01:09:27AM 2 points [-]

What you present is the basic fallacy of Pascal's Mugging: treating the probability of B and of C as independent of the fact that a threat of a given magnitude is made.

The prior probability of X is 2^-(K-complexity of X). There are more possible universes where they carry out smaller threats, so the K-complexity is lower. What I showed is that, even if there were only a single possible universe where the threat was carried out, it's still simple enough that the K-complexity is small enough that it's worth paying the threatener.

No commenters have engaged the argument!

You gave a vague argument. Rather than giving a vague counterargument along the same lines, I just ran the math directly. You can argue that P(C|E) decreases all you want, but since I found that the actual value is still too high, it clearly doesn't decrease fast enough.

If you want the vague counterargument, it's simple: The probability that it's a lie approaches unity. It just doesn't approach it fast enough. It's a heck of a lot less likely that someone who threatens 3^^^3 lives is telling the truth than someone who's threatening one. It's just not 3^^^3 times less likely.

Comment author: common_law 31 May 2014 06:24:58PM *  0 points [-]

What you're ignoring is the comparison probability. See philH's comment.

Comment author: philh 27 May 2014 06:12:53PM 2 points [-]

So I was pattern matching this as an argument for "why the probability decreases more than we previously acknowledged, as the threat increases", but that isn't what you're going for. Attempting to summarize it in my own words:

There are three relevant events: (A) the threat will not happen; (B) not giving in to blackmail will trigger the threat; (C) giving in to blackmail will trigger the threat (or worse). As the threat increases, P(B) and P(C) both decrease, but P(C) begins to dominate P(B).

Is this an accurate summary?

Comment author: common_law 27 May 2014 06:34:08PM *  0 points [-]

It's accurate. But it's crucial, of course, to see why P(C) comes to dominate P(B), and I think this is what most commenters have missed. (But maybe I'm wrong about that; maybe it's because of pattern matching.) As the threat increases, P(C) comes to dominate P(B) because the threat, when large enough, is evidence against the threatened event occurring.

Comment author: Slider 27 May 2014 04:56:57PM 0 points [-]

This solution seems to implicitly assume that all real needs are pretty moderate. That is, the only plausible reason to state a meganumber-class high utility is to beat someone else's number. A matrix lord could have a different sense of time in their "home dimension", and for all I know, what I do might be the culmination of a bet involving the creation and destruction of whole classes of universes.

Also, why should what the mugger says have anything to do with how big a threat the conversation poses? If someone came up to me and said "give me $5 or I will do everything in my power to screw with you", without saying how much screwage would happen or what their power level is, it wouldn't seem that problematic. Why would the threat be more potent for stating that information? It seems it should limit the threat it poses. It would appear to me that what people say only has relevance within the limits of how I have established their agenthood, i.e. it maxes out. If I were really on the lookout for freaky eldritch gods, I should worry that the sky would fall on me; i.e., there is no need for the sky (or the mighty) to state any threat for them to be one.

Comment author: common_law 27 May 2014 05:34:50PM *  0 points [-]

That is, [it is assumed that] the only plausible reason to state a meganumber-class high utility is to beat someone else's number.

It's the only reason that doesn't cancel out, because it's the only one about which we have any knowledge. The higher the number, the more likely it is that the mugger is playing the "pick the highest number" game. You can imagine scenarios in which picking the highest number has some unknown significance, but they cancel out, in the same way that Pascal's God is canceled by the possibility of contrary gods.

Also, why should what the mugger says have anything to do with how big a threat the conversation poses?

Formally, it's the same question as why failure to confirm a theory should be evidence against it.

Comment author: DanielLC 27 May 2014 04:22:48AM 1 point [-]

Short version:

Even just considering the a priori probability of them honestly threatening you before you even take into account that they threatened you, it's still enough to get Pascal mugged. The probability that they're lying increases, but not fast enough.

Long version, with math:

Consider three possibilities:

A: Nobody makes the threat

B: Someone dishonestly makes the threat

C: Someone honestly makes the threat

Note that A, B, and C form a partition. That is to say exactly one of them is true. Technically it's possible that more than one person makes the threat, and only one of them is honest, so B and C happen, so let's say that the time and place of the threat are specified as well.

C is highly unlikely. If you tried to write a program simulating that, it would be long. It would still fit within the confines of this universe. Let's be hyper-conservative, and call it K-complexity of a googol.

There is also a piece of evidence E, consisting of someone making the threat. E is consistent with B and C, but not A. This means:

P(A&E) = 0, P(B&E) = P(B), P(C&E) = P(C)

Calculating from here:

P(C|E) = P(C&E)/P(E) = P(C)/P(E) ≥ P(C) = 2^-googol

(since P(E) ≤ 1)

So the probability of the threatener being honest is at least 2^-googol. Since the disutility of not paying him if he is honest is more than 2^googol times the utility of paying him, it's worthwhile to pay.
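The arithmetic behind this comparison can be sketched numerically. This is a minimal sketch, not DanielLC's actual calculation: 2^googol is far too large to compute with, so a small stand-in exponent k is assumed in its place.

```python
from fractions import Fraction

# Hedged sketch of the expected-utility comparison above. A googol
# exponent is not computable, so k is a small stand-in (an assumption).
k = 1000                          # stand-in for a googol
p_honest = Fraction(1, 2**k)      # lower bound on P(C|E): 2^-k
cost_of_paying = 5                # utility lost by paying the mugger
disutility_if_honest = 10 * 2**k  # threatened loss: more than 2^k times the cost

# Expected loss from refusing: tiny probability times enormous disutility.
expected_loss_if_not_paying = p_honest * disutility_if_honest
print(expected_loss_if_not_paying > cost_of_paying)  # True: paying "wins"
```

The exact rational arithmetic (`Fraction`) matters here: with floats, both 2^-k and 2^k underflow/overflow long before k reaches anything googol-like, which is itself a small illustration of why the comparison has to be done symbolically.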

Comment author: common_law 27 May 2014 05:14:48PM 2 points [-]

What you present is the basic fallacy of Pascal's Mugging: treating the probability of B and of C as independent of the fact that a threat of a given magnitude is made.

Your formalism, in other words, doesn't model the argument. The basic point is that Pascal's Mugging can be solved by the same logic that succeeds against Pascal's Wager. Pascal concluded that believing in god A was instrumentally rational only by ignoring that there might, with equal consequences, be a god B instead who hated people who worshiped god A.

Pascal's Mugging ignores that the threatened calamity might be more likely if you accede to the mugger than if you don't. The point of inflection is where the mugger's making the claim becomes evidence against it rather than for it.

No commenters have engaged the argument!
