RRand · 40

It's worth pointing out that grey in the post refers to

a fact that not even the worst ideologues know how to spin as "supporting" their side.

I'm not sure what good grey rolls do in the context of this post (especially given the proviso that "there is a 'Blue' and a 'Green' position on almost every contemporary issue of political or cultural importance").

But grey rolls are, of course, important: Grey facts and grey issues are uncorrupted by the Great War, and hence are that much more accessible/tractable. The more grey facts there are, the better rationalists we can be.

With respect to your comment, the presence of Grey, Yellow, Orange and Purple Teams would actually help things substantially -- if I report facts from the six teams equally, it's harder to label me as a partisan. (And it's harder for any team to enforce partisanship.) Even if Blue-supporting facts truly are taboo (Green is unlikely to have more than one archnemesis), that's much less limiting when only a sixth of facts are Blue. It's a nice advantage of multipolar politics.

RRand · 120

There's something strange about the analysis posted.

How is it that 100% of the general population with high (>96%) confidence got the correct answer, but only 66% of a subset of that population did? Looking at the provided data, it appears that 3 out of 4 people (none with high Karma scores) who gave the highest confidence were right.

(Predictably, the remaining person with high confidence answered 500 million, which is almost the exact population of the European Union (or, in the popular parlance, "Europe"). I almost made the same mistake, before realizing that a) "Europe" might be intended to include Russia, or part of Russia, plus other non-EU states, and b) I don't know the population of those countries and can't cover both bases. So in response, I kept the number and decreased my confidence value. Regrettably, 500 million can signify both tremendous confidence and very little confidence, which makes it hard to do an analysis of this effect.)

RRand · 90

True, though they forgot to change the "You may make my anonymous survey data public (recommended)" to "You may make my ultimately highly unanonymous survey data public (not as highly recommended)".

RRand · 60

(With slightly more fidelity to Mr. Pascal's formulation:)

You have nothing to lose.

You have much to get. God can give you a lot.

There might be no God. But a chance to get something is better than no chance at all.

So go for it.

RRand · 10

So let’s modify the problem somewhat. Instead of each person being given the “decider” or “non-decider” hat, we give the "deciders" rocks. You (an outside observer) make the decision.

Version 1: You get to open a door and see whether the person behind the door has a rock or not. Winning strategy: After you open a door (say, door A), make a decision. If A has a rock, then say “yes”. Expected payoff: 0.9 * 1000 + 0.1 * 100 = 910 > 700. If A has no rock, say “no”. Expected payoff: 700 > 0.9 * 100 + 0.1 * 1000 = 190.

Version 2: The host (we’ll call him Monty) randomly picks a door with a rock behind it. Winning strategy: Monty has provided no additional information by picking a door: we already knew that there was a door with a rock behind it. Even if we predicted door A in advance and Monty verified that A had a rock behind it, it is no more likely that heads was chosen: the probability of Monty picking door A given heads is 0.9 * 1/9 = 0.1, whereas the probability given tails is 0.1 * 1 = 0.1. Hence, say “no”. Expected payoff: 700 > 0.5 * 100 + 0.5 * 1000 = 550.
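For anyone who wants to check the arithmetic, here is a quick Python sketch reproducing the numbers in both versions (the 1000/100/700 payoffs, the 0.9/0.1 rock probabilities, and the fair coin are all as above; the helper names are just mine):

```python
# Sketch of the expected-payoff arithmetic for Versions 1 and 2.

def posterior_heads(p_obs_given_heads, p_obs_given_tails, prior_heads=0.5):
    """Bayes' rule: P(heads | what you observed), assuming a fair coin prior."""
    num = p_obs_given_heads * prior_heads
    return num / (num + p_obs_given_tails * (1 - prior_heads))

def expected_yes(p_heads, payoff_heads=1000, payoff_tails=100):
    """Expected payoff of saying 'yes', versus the flat 700 for saying 'no'."""
    return p_heads * payoff_heads + (1 - p_heads) * payoff_tails

# Version 1: you open door A yourself.
print(expected_yes(posterior_heads(0.9, 0.1)))  # rock behind A:    910 > 700, say "yes"
print(expected_yes(posterior_heads(0.1, 0.9)))  # no rock behind A: 190 < 700, say "no"

# Version 2: Monty picks a rock door, and it happens to be A.
# P(pick A | heads) = 0.9 * 1/9 = 0.1 and P(pick A | tails) = 0.1 * 1 = 0.1,
# so the posterior stays at 0.5.
print(expected_yes(posterior_heads(0.9 * 1/9, 0.1 * 1)))  # 550 < 700, say "no"
```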

Now let us modify version 2: During your sleep, you are wheeled into a room containing a rock. That room has a label inside, identifying which door it is behind. Clearly, this is no different than version 2 and the original strategy still stands.

From there it’s a small (logical) jump to your consciousness being put into one of the rock-holding bodies behind the door, which is equivalent to our original case. (Modulo the bit about multiple people making decisions; if we want, we can clone your consciousness if necessary and put it into all rock-possessing bodies. In either case, the fact that you wind up next to a rock provides no additional information.)

This question is actually unnecessarily complex. To make this easier, we could introduce the following game: We flip a coin where the probability of heads is one in a million. If heads, we give everyone on Earth a rock; if tails, we give one person a rock. If the rock holder(s) guesses how the coin landed, Earth wins; otherwise Earth loses. A priori, we very much want everyone to guess tails. A person holding the rock would be very much inclined to say heads, but he’d be wrong. He fails to realize that he is in an equivalence class with everyone else on the planet, and the fact that the person holding the rock is himself carries no information content for the game. (Now, if we could break the equivalence class before the game was played by giving full authority to a specific individual A, and having him say “heads” iff he gets a rock, then we would decrease our chance of losing from 10^-6 to (1-10^-6) * 10^-9.)
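A minimal sketch of that last comparison, using the numbers from the parenthetical (10^-6 for the coin, and roughly 10^-9 for the chance that any particular person is the lone rock holder under tails):

```python
# Loss probabilities for the two strategies in the simplified rock game.

p_heads = 1e-6              # probability the coin comes up heads (everyone gets a rock)
p_A_is_lone_holder = 1e-9   # rough chance, under tails, that the single rock lands on person A

# Strategy 1: everyone commits in advance to guessing "tails".
# Earth loses exactly when the coin actually comes up heads.
p_lose_everyone_tails = p_heads

# Strategy 2: give full authority to person A, who says "heads" iff he holds a rock.
# Earth loses only when the coin is tails *and* the single rock happens to go to A.
p_lose_designated_A = (1 - p_heads) * p_A_is_lone_holder

print(p_lose_everyone_tails)  # 1e-06
print(p_lose_designated_A)    # ~1e-09
```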

RRand · 20

A few thoughts:

I haven't strongly considered my prior on being able to save 3^^^3 people (more on this to follow). But regardless of what that prior is, if approached by somebody claiming to be a Matrix Lord who claims he can save 3^^^3 people, I'm not only faced with the problem of whether I ought to pay him the $5 - I'm also faced with the question of whether I ought to walk over to the next beggar on the street and pay him $0.01 to save 3^^^3 people. Is this person 500 times more likely to be able to save 3^^^3 people? From the outset, not really. And giving money to random people is not, a priori, any more likely to save lives than anything else.

Now suppose that said "Matrix Lord" opens the sky, splits the Red Sea, demonstrates his duplicator box on some fish and, sure, creates a humanoid Patronus. Now do I have more reason to believe that he is a Matrix Lord? Perhaps. Do I have reason to think that he will save 3^^^3 lives if I give him $5? I don't see convincing reason to believe so, but I don't see either view as problematic.

Obviously, once you're not taking Hanson's approach, there's no problem with believing you've made a major discovery that can save an arbitrarily large number of lives.

But here's where I noticed a bit of a problem in your analogy: In the dark energy case you say "if these equations are actually true, then our descendants will be able to exploit dark energy to do computations, and according to my back-of-the-envelope calculations here, we'd be able to create around a googolplex people that way."

Well, obviously the odds here of creating exactly a googolplex people are no greater than one in a googolplex. Why? Because those back-of-the-envelope calculations are going to get us (at best, say) an interval from 0.5 x 10^(10^100) to 2 x 10^(10^100) - an interval containing more than a googolplex distinct integers. Hence, the odds of any specific one will be very low, but the sum might be very high. (This is worth contrasting with the single-integer case above, where presumably your probability of saving 3^^^3 + 1 people is no higher than it was before.)
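To make the contrast concrete, here is a small log-space sketch (the 90% figure and the uniform spread are purely illustrative assumptions of mine, not numbers from the post):

```python
from math import log10

# Suppose 90% of your probability mass is spread evenly over the integers
# between 0.5 * 10^(10^100) and 2 * 10^(10^100).  The quantities are far too
# large to represent directly, so keep the "small" part of the exponent
# separate from the -10^100 part.

mass_on_interval = 0.9   # P(the population ends up "around a googolplex") -- assumed
interval_factor = 1.5    # the interval holds roughly 1.5 * 10^(10^100) integers

small_part = log10(mass_on_interval) - log10(interval_factor)
print(f"P(exactly N) ~= 10^({small_part:.3f} - 10^100) for each N in the interval")
print(f"P(around a googolplex) = {mass_on_interval} by construction")
```

So each specific integer gets well under one in a googolplex, while the interval as a whole keeps its 0.9.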

Here's the main problem I have with your solution:

"But if I actually see strong evidence for something I previously thought was super-improbable, I don't just do a Bayesian update, I should also question whether I was right to assign such a tiny probability in the first place - whether it was really as complex, or unnatural, as I thought. In real life, you are not ever supposed to have a prior improbability of 10^-100 for some fact distinguished enough to be written down, and yet encounter strong evidence, say 10^10 to 1, that the thing has actually happened."

Sure you do. As you pointed out, dice rolls. The sequence of rolls in a game of Risk will do this for you, and you have strong reason to believe that you played a game of Risk and the dice landed as they did.
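A quick arithmetic check on how few ordinary d6 rolls that takes:

```python
from math import ceil, log10

# How many fair d6 rolls before the specific sequence you just watched had a
# prior probability below 10^-100?  Smallest n with 6^n > 10^100:
rolls_needed = ceil(100 / log10(6))
print(rolls_needed)                # 129

# log10 of the prior probability of that particular 129-roll sequence:
print(-rolls_needed * log10(6))    # about -100.4
```

So a fraction of one game already gives you overwhelming evidence for an event whose prior probability was under 10^-100.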

We do probability estimates because we lack information. Your example of a mathematical theorem is a good one: Theorem X is true or false from the get-go. But whenever you give me new information, even if that information is framed in the form of a question, it makes sense for me to do a Bayesian update. That's why a lot of so-called knowledge paradoxes are silly: If you ask me whether I know who the president is, I can answer with 99%+ probability that it's Obama; if you ask me whether Obama is still breathing, I have to do an update based on my consideration of what prompted the question. I'm not committing a fallacy by saying 95%; I'm doing a Bayesian update, as I should.
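A toy version of that update, with made-up numbers (the 0.999 prior and the 50x likelihood ratio are illustrative assumptions of mine), chosen only to show how the asking of the question can legitimately move a 99%+ prior down to roughly 95%:

```python
def posterior_given_question(prior, p_question_if_true, p_question_if_false):
    """Bayes' rule: P(proposition | someone just asked the question)."""
    num = prior * p_question_if_true
    return num / (num + (1 - prior) * p_question_if_false)

# Prior that Obama is still breathing, before anyone says anything: 0.999.
# Assume the question is ~50x more likely to be asked in worlds where
# something has just happened to him (0.05 vs 0.001).
print(posterior_given_question(0.999,
                               p_question_if_true=0.001,
                               p_question_if_false=0.05))  # ~0.95
```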

You'll often find yourself updating your probabilities based on the knowledge that you were completely incorrect about something (even something mathematical) to begin with. That doesn't mean you were wrong to assign the initial probabilities: You were assigning them based on your knowledge at the time. That's how you assign probabilities.

In your case, you're not even updating on an "unknown unknown" - that is, something you failed to consider even as a possibility - though that's the reason you put all probabilities at less than 100%, because your knowledge is limited. You're updating on something you considered before. And I see absolutely no reason to label this a special non-Bayesian type of update that somehow dodges the problem. I could be missing something, but I don't see a coherent argument there.

As an aside, the repeated references to how people misunderstood previous posts are distracting, to say the least. Couldn't you just include a single link to Aaronson's Large Numbers paper (or anything on up-arrow notation; I mention Aaronson's paper because it's fun)? After all, if you can't understand tetration (and up), you're not going to understand the article to begin with.