Comment author: 12 September 2016 03:57:39PM 0 points

Ok so there's a good chance I'm just being an idiot here, but I feel like a multiple-worlds kind of interpretation serves well. If, as you say, "the coin is deterministic, [and] in the overwhelming measure of the MWI worlds it gives the same outcome," then I don't believe the coin is fair. And if the coin isn't fair, then of course I'm not giving Omega any money. If, on the other hand, the coin is fair, and so I have reason to believe that in roughly half of the worlds the coin landed on the other side and Omega posed the opposite question, then by giving Omega the $100 I'm giving the me in those other worlds $1000, and I'm perfectly happy to do that.
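The trade described above can be written as a simple expected-value calculation over worlds. A minimal sketch, assuming a fair coin and taking the $100/$1000 stakes from the comment (the function name and the refuse-in-both-branches baseline are my own illustration):

```python
# Expected value, across the two branches, of a fixed policy of paying
# Omega $100: in half the worlds you pay $100, and in the other half
# your counterpart receives $1000.
p_heads = 0.5  # measure of worlds where Omega asks you to pay

def expected_value(pay: bool) -> float:
    """Average outcome over both branches for a fixed policy."""
    if pay:
        return p_heads * (-100) + (1 - p_heads) * 1000
    return 0.0  # refuse in both branches: no money changes hands

print(expected_value(True))   # 450.0: paying wins on average
print(expected_value(False))  # 0.0
```

On these (illustrative) numbers, the paying policy comes out $450 ahead per world-pair, which is the arithmetic behind "I'm perfectly happy to do that."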

Comment author: 29 June 2012 03:52:07PM 14 points

It's amazing, the results people come up with when they don't use TDT (or some other formalism that doesn't defect in the Prisoner's Dilemma - though so far as I know, the concept of the Blackmail Equation is unique to TDT.)

(Because the base case of the pirate scenario is, essentially, the Ultimatum game, where the only reason the other person offers you $1 instead of $5 is that they model you as accepting a $1 offer, which is a very stupid answer to compute if it results in you getting only $1 - only someone who two-boxed on Newcomb's Problem would contemplate such a thing.)
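For reference, the answer being criticized here is the standard backward-induction solution to the pirate game. A sketch of that computation, assuming the usual textbook setup (100 gold coins, strict seniority, ties go to the proposer - these assumptions are mine, not stated in the comment):

```python
def pirate_split(n: int, coins: int = 100) -> list[int]:
    """Classic backward-induction split: element i is pirate i's share
    (pirate 0 is the most senior and proposes the division)."""
    if n == 1:
        return [coins]
    # If the proposer is thrown overboard, the remaining n-1 pirates
    # play the same game.  The proposer bribes the cheapest voters with
    # one coin more than each would get in that sub-game.
    sub = pirate_split(n - 1, coins)
    votes_needed = (n + 1) // 2 - 1  # majority, minus the proposer's own vote
    cheapest = sorted(range(n - 1), key=lambda i: sub[i])[:votes_needed]
    alloc = [0] * n
    for i in cheapest:
        alloc[i + 1] = sub[i] + 1
    alloc[0] = coins - sum(alloc)
    return alloc

print(pirate_split(5))  # [98, 0, 1, 0, 1] -- the textbook answer
```

Each pirate votes "yes" for a single extra coin because the model says they would - exactly the $1-accepting disposition the parent comment calls a very stupid answer to compute.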

Comment author: 12 September 2016 03:37:39AM 1 point

So if all pirates implement TDT, what happens?

In response to comment by on Infinite Certainty
Comment author: 20 April 2012 09:47:54PM 0 points

For Godel-Bayes issues, you can start with the responses to my post on the subject. (I've since learned and remembered more about Godel.)

We should have the ability to talk about subjective uncertainty in, at the very least, particular proofs and probabilities. I don't know that we can. But I like the following argument, which I recall seeing here somewhere:

If there exists a perfect probability calculation based on a set of background information, it must already take this uncertainty into account. Applying the uncertainty again to the answer would therefore mean double-counting the evidence, which is strictly verboten. So we cannot use this line of reasoning to produce a contradiction, and barring other arguments, we can take the residual uncertainty to be a very small fraction.

In response to comment by on Infinite Certainty
Comment author: 20 April 2012 10:13:50PM 0 points

Hrmm... I'm still taking high school geometry, so "infinite set of axioms" doesn't really make sense yet. I'll try to re-read that thread once I've started college-level math.

Comment author: 12 April 2012 01:58:40AM 0 points

Well, if by "no model" you mean something like the contemporary folk model of biology ("Blood is what keeps you alive, we're not quite sure how though, but in general try not to lose your blood"), then elan vital is definitely worse, in that it (a) adds no new information but (b) sounds wiser, and is therefore harder to unseat.

Comment author: 20 April 2012 10:00:37PM 0 points

Are you suggesting that we apply a punishment to any theory that sounds wise? Or that we apply a punishment only for those that also satisfy (a)?

In response to comment by on Infinite Certainty
Comment author: 20 April 2012 01:52:12AM -1 points

First we'd have to attach a meaning to the claim, yes? I've seen evidence for various claims about Bayes' Theorem, including but probably not limited to 'Any workable extension of logic to deal with uncertainty will approximate Bayes,' and 'Bayes works better in practice than frequentist methods'. Decide which claim you want to talk about and you'll know what evidence against it would look like.

(Halpern more or less argues against the first one, but I'm looking at his article and so far he just seems to be pointing out Jaynes' most commonsensical requirements.)

In response to comment by on Infinite Certainty
Comment author: 20 April 2012 09:29:20PM -1 points

I meant the claim posed here about tests and priors. It is stated as
p(A|X) = p(X|A)p(A) / [p(X|A)p(A) + p(X|~A)p(~A)]

But does it make sense for that to be wrong? It is a theorem, unlike the statement 2+2=4. Maybe there is some way to show that the axioms and definitions used to prove Bayes' Theorem are inconsistent, which would be a pretty clear kind of disproof. I'm not sure anymore that what I said has meaning. Well, thanks for the help.
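As a numerical sanity check of the formula above - the 1% prior and the 90%/5% test rates below are my own illustrative numbers, not anything from the thread:

```python
def posterior(p_a: float, p_x_given_a: float, p_x_given_not_a: float) -> float:
    """Bayes' Theorem: p(A|X) = p(X|A)p(A) / [p(X|A)p(A) + p(X|~A)p(~A)]."""
    num = p_x_given_a * p_a
    return num / (num + p_x_given_not_a * (1 - p_a))

# A 90%-sensitive test with a 5% false-positive rate, on a 1% base rate:
print(round(posterior(0.01, 0.9, 0.05), 4))  # 0.1538 -- still mostly false alarms
```

The point of the example is just that the theorem mechanically combines the prior with the likelihoods; whatever "Bayes is wrong" would mean, it would have to be an inconsistency in the axioms behind that arithmetic, not in the arithmetic itself.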

In response to The Crackpot Offer
Comment author: 20 April 2012 02:43:03AM 0 points

I don't remember ever coming up with a false disproof in math, though I did manage to "solve" perpetual motion machines. I did successfully prove a trivial result in solving quadratic equations in modular arithmetic.

In response to Infinite Certainty
Comment author: 20 April 2012 01:31:30AM 1 point

Eliezer, what could convince you that Bayes' Theorem itself was wrong? Can you properly adjust your beliefs to account for evidence if that adjustment is itself systematically wrong?

Comment author: 19 April 2012 09:04:08PM 1 point

I benefit from believing people are nicer than they actually are.

I empathize with her here. I believe that it is to my advantage to act towards people the way I would act if they were nicer than they actually are. I'll try to parse that out. Let's say Alice is talking to Bob. Cindy, at a different time, also talks to Bob. Bob is a jerk; we assume he is not nice.

• Alice honestly expects that Bob is nicer than he actually is, and accordingly she is nice to Bob.
• Cindy honestly expects that Bob is exactly as nice as he actually is, and accordingly she is dismissive of Bob.

I expect that Bob will be nicer towards Alice than towards Cindy. (Warning: This is starting to feel like a belief, suggesting that it is actually a belief in belief.) My theory is that I should act like Alice. Of course, there are alternatives, like simply being nice to people.

I hope this comment made sense to you. I know I'm pretty confused about it myself now.

Comment author: 18 October 2008 02:08:01AM -1 points

I'm looking for Dark Side epistemology itself - the Generic Defenses of Fail.

In that case - association, essentialism, popularity, the scientific method, magic, and what I'll call Past-ism.

In response to comment by on Dark Side Epistemology
Comment author: 19 April 2012 07:04:14PM 10 points

Wait a second - the scientific method? How? It may not be the most efficient way to get at the truth, and it may not take advantage of Bayes' theorem, which could speed it up, but I don't see how the scientific method is epistemologically (is that a word?) wrong.

Comment author: 12 April 2012 10:26:23AM 0 points

In one sense, yes I agree it's a charade, but people are non-rational and often very sensitive to the form of things. To me it sounds at least worth trying.

Pondering this further, I think the biggest problem is finding a way to measure conformity even in the face of people knowing they're being tested for conformity.

Comment author: 14 April 2012 11:45:54PM 0 points

Do not have the audience be part of the group being tested. Pull in confederates off the street, and tell them about the test. Do not allow subjects to see each other's testing. Let's say now that the current subject is Alex. Alex prefers vanilla ice cream to chocolate ice cream. Now go through the anti-conformity training.

After the training, hold a break (still with just Alex and the confederates). Offer ice cream in chocolate, vanilla, and, say, mango. Have most (maybe about 80%) of the confederates go for the chocolate, 10% for the vanilla, and 10% for the mango.

The mango should help to decrease the suspicion, as should having not everybody go for the chocolate. It may help to have the confederates go through the training as well, to decrease suspicion.

The problems I see with this are: a) cost, which I'll ignore because it's a matter of practicality; and b) the subject group is not the group conforming, which will decrease the likelihood of conforming.

The problem with having the subject group be the confederates is that the subject group then knows how the test is being done.
