Comment author: see 30 March 2012 07:10:26AM 10 points [-]

Let's assume that Hermione had actually been sentenced to Azkaban. How many advantages would Quirrelmort have gained?

  • Discredited Dumbledore somewhat with a student almost being killed
  • Directly eliminated a Light-side witch showing skill at military command and Battle Magic
  • Made Harry more vulnerable by knocking out an ally/friend/moral compass
  • Driven a wedge between Harry and House Malfoy, eliminating Draco as an ally/friend and ensuring no Malfoy-Potter alliance could form against a resurgent Voldemort
  • Broken the Dumbledore-Harry alliance forever if Dumbledore actually let Hermione go to Azkaban; otherwise, forced Dumbledore into open rebellion against the law.
  • Made Harry take the majority of the Wizengamot as enemies who needed to be punished, both encouraging him to become darker and giving the members reason to be hostile to Harry in turn.
  • Provoked Harry into a (possibly) suicidal effort to destroy Azkaban, which (possibly) could enable a mass breakout of Voldemort supporters from same.
  • Isolated Magical Britain from the rest of the wizarding world for sentencing a child to Azkaban.
  • Delegitimized the Wizengamot in the eyes of everyone in Magical Britain horrified at the sentence.

There may be more that aren't coming to mind, but, well, the potential payoffs for Quirrelmort were pretty high.

Comment author: Brickman 31 March 2012 01:31:04AM 2 points [-]

I don't think Harry actually would have taken Dumbledore as an enemy if Dumbledore failed to save Hermione, since he was clearly trying, even spending political capital. Only having Dumbledore stand in the way of Harry saving her would do that, and when Dumbledore realized just how determined Harry was, he had the sense to step aside.

Also, I'm not really sure how well "Delegitimized the Wizengamot in the eyes of Magical Britain" would have worked--the rest of the world, yes, but the papers were certainly doing a hatchet job on her. The question is how representative of the populace the press is. Obviously the biggest paper is Lucius's and Fudge's soapbox both here and in canon, but there's more than one paper on those newsstands, and dissent isn't illegal until the Death Eaters take over in the last few books. I'm going to go with "not at all representative of public opinion", but propaganda exists because it works, and they sounded prepared to present a unified front.

The rest, though, sound like things he could have planned on and represent MASSIVE gains for Voldemort. I especially like the "Isolated Magical Britain from the rest of the wizarding world" one--I didn't even think of it, but it fits. He didn't just get rid of Hermione, he goaded his enemies into committing an atrocity against her.

Comment author: Brickman 21 December 2011 04:34:43AM *  3 points [-]

I'm not sure it's appropriate to treat the money the average human will accept for a micromort as a value that's actually useful for making rational decisions, because that value is badly skewed by irrational biases. Actions are mentally categorized into those the thinker does and doesn't believe (on a subconscious level) could possibly lead to death. I doubt the average person even considers a "risk" factor at all when driving their car or walking several blocks to it (just a time factor and a gasoline factor), unless their trip takes them through a "bad" neighborhood, in which case they'll inflate their perceived risk severalfold without actually looking up that neighborhood's crime rates (more so if they know someone who was hurt in a similar manner). They're probably quite likely to consider a "getting a ticket" risk factor, however. It's sadly true that most people believe themselves invincible and completely ignore many categories of existential risk, thinking only of the "flashier" risks and likely inflating their likelihood. And if you told someone that you would give them $100 and then use a fair RNG to shoot them on either a 1 in 10,000 or a 1 in 100,000 chance, I doubt you'd get very different responses.

And I'm going to be so bold as to declare that it's impossible for ANY individual to accurately judge the relative likelihood of two things killing them without looking it up; "which is more likely" is doable, but "is it twice as likely or three times" is not.

edit: The end result of everything I just said is that the "value" being assigned to a micromort is probably more a reflection of how the EPA ran their test than what people really value; they'd get a different result evaluating people's aversion to micromorts via car crash and people's aversion to micromorts via being mugged, and either would be skewed if they first spent a half hour talking about ways to mitigate such a risk (thus reminding you it's there).

Comment author: Dan_Moore 30 November 2011 11:32:24PM 1 point [-]

I'm planning on doing a statistical study with a sample size of 21 companies. This is a financial study, and the companies chosen are the only ones that will be reporting their 2011 financial results on a certain basis necessary for the testing. (Hence the sample size.)

I'm going to do this regardless of which hypothesis is supported (the null hypothesis, my alternative hypothesis, or neither). So, I'm promising an absence of publication bias. (The null hypothesis will be supported by a finding of little or no correlation; my alternative hypothesis by a negative correlation.)

In this case, the small sample size is the result of available data, and not the result of data-mining. If the results are statistically significant and have a sizable effect, I'm of the opinion that the conclusions will be valid.

Comment author: Brickman 01 December 2011 03:03:57AM 4 points [-]

Sadly, your commitment to this goal is not enough, unless you also have a guarantee that someone will publish your results even if they are statistically insignificant (and thus tell us absolutely nothing). I admit I've never tried to publish something, but I doubt that many journals would actually do that. If they did the result would be a journal rendered almost unreadable by the large percentage of studies it describes with no significant outcome, and would remain unread.

If your study doesn't prove either hypothesis, or possibly even if it proves the null hypothesis and that's not deemed to be very enlightening, I expect you'll try and fail to get it published. If you prove the alternative hypothesis, you'll probably stand a fair chance at publication. Publication bias is a result of the whole system, not just the researchers' greed.

The only way I can imagine a study that admits it didn't prove anything getting published is if it was conducted by an individual or group too important to ignore even when they're not proving anything. Or if there are so few studies to choose from that editors can't pick and choose the important ones, although fields like that would probably just publish fewer, less frequent issues.

Comment author: Brickman 10 October 2011 01:19:17AM 6 points [-]

I like the first two, and the chess one's pretty interesting, though I can't imagine I'd have an easy time getting someone to stand still long enough to hear the whole thing as an argument. But I don't really like the last one. You've been tricked into accepting his premise--that death lets you create more meaningful art--and trying to regain ground from there. It's the premise itself that you should be arguing against: point out all the great literature and art that isn't about death, and that you could still have all of it once death was gone. Also point out that to someone with cancer today, the availability of art is probably less valuable than the availability of a cure would be, and there's no reason to assume that'll change if you double his age, even if you double it several times.

Comment author: Manfred 28 September 2011 04:41:15AM 2 points [-]

Or am I missing some key factor here? Did I misinterpret the lesson?

The key factor is that the 60,20 box is not in isolation - it is the top box, and so not only do you expect it to have more "signal" (gold) than average, you also expect it to have more noise than average.

You can think of the numbers on the boxes as drawn from a probability distribution. If there were zero noise, this probability distribution would just be how the gold in the boxes was distributed. But if you add noise, it's like adding two random variables, which convolves their probability distributions. If you're not familiar with what happens, go look it up on Wikipedia, but the upshot is that the combined distribution is more spread out than the original. This combined distribution isn't just noise or just signal; it's the probability of having some number written on the outside of the box.

And so if something is the top, very highest box, where should it be located on the combined distribution?

Now, if you have something that's high on the combined distribution, how much of that is due to signal, and how much of it is due to noise? This is a tougher question, but the essential insight is that the noise shouldn't be more improbable than the signal, or vice versa - that is, they should both be about the same number of standard deviations from their means.

This means that if the standard deviation of the noise is bigger, then the probable contribution of the noise is greater.
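
One way to see this (a hypothetical simulation, not from the original comment; the function name and parameters are my own) is to pick the highest of nine signal-plus-noise draws and measure how much noise the winner carries, at two different noise levels:

```python
import random

def mean_selected_noise(noise_sd, n_boxes=9, trials=20000, signal_sd=5.0):
    """Average noise carried by whichever box shows the highest signal+noise total."""
    rng = random.Random(0)
    total = 0.0
    for _ in range(trials):
        signals = [rng.gauss(0, signal_sd) for _ in range(n_boxes)]
        noises = [rng.gauss(0, noise_sd) for _ in range(n_boxes)]
        # Index of the box with the biggest combined (written-on-the-box) value
        top = max(range(n_boxes), key=lambda i: signals[i] + noises[i])
        total += noises[top]
    return total / trials

low = mean_selected_noise(noise_sd=2.0)
high = mean_selected_noise(noise_sd=10.0)
print(low, high)  # both positive, and much larger when the noise is wider
```

In both cases the winning box carries positive noise on average, but the wider the noise distribution, the more of the winner's apparent height is noise.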

Me saying the same thing a different way can be found here.

Comment author: Brickman 28 September 2011 12:15:15PM 1 point [-]

Oh, I understand now. Even if we don't know how it's distributed, being the top among 9 choices with the same variance puts it in the 80th percentile for specialness, and signal and noise contribute to that equally. So it's likely to be in the 80th percentile of noise.

It might have been clearer if you'd instead made the boxes actually contain coins normally distributed about 40 with variance 15 and B=30, and made an alternative of 50/1, since you'd have been holding yourself to more proper unbiased generation of the numbers and still, in all likelihood, come up with a highest-labeled box that contained less than the sure thing. You have to basically divide your distance from the norm by the ratio of specialness you expect to get from signal and noise. The "all 45" thing just makes it feel like a trick.

Comment author: Manfred 17 September 2011 03:44:30PM 5 points [-]

In any group there's going to be random noise, and if you choose an extreme value, chances are that value was inflated by noise. In Bayesian terms: given that something has the highest value, it probably had positive noise, not just positive signal. So the correction is to subtract out the expected positive noise you get from explicitly choosing the highest value. Naturally, this correction is greater when the noise is bigger.

So imagine choosing between black boxes. Each black box has some number of gold coins in it, and also two numbers written on it. The first number, A, is like the estimated expected value, and the second number, B, is like the variance. What happened is that someone rolled two distinct dice with B sides each, subtracted die 1 from die 2, and added the result to the number of gold coins in the box.

So if you see a box with 40, 3 written on it, you know that it has an expected value of 40 gold coins, but might have as few as 38 or as many as 42 (two 3-sided dice can shift the label by at most ±2).

Now comes the problem: I put 10 boxes in front of you, and tell you to choose the one with the most gold coins. The first box is 50, 1 - a very low-variance box. But the last 9 boxes are all high-uncertainty, all with B=20. The expected values printed on them are as follows [I generated the boxes honestly]: 53, 52, 37, 60, 44, 36, 56, 45, 54. Ooh, one of those boxes has a 60 on it! Pick that one!

Okay, don't pick that one. Think about it - there are 9 boxes with high variance, and the one you picked probably has unusually large noise. To be special among 9 proposals with high variance, it probably has noise at the 80th+ percentile. What's the 80th percentile of noise for 1d20 - 1d20? I bet it's larger than 10. You're better off just going with the 50, 1 box.

And it's a good thing you applied that correction, because I generated the boxes by typing "RandomInteger[20,9] - RandomInteger[20,9] + 45" into Wolfram Alpha - they each contain 45 coins.

So this illustrates that beating the optimizer's curse is really a sort of "correction for multiple comparisons." If you have a lot of noisy boxes, some of them will look large even when they're not - even larger than non-noisy boxes.
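
The game above can be replayed in a few lines (a sketch with my own function names; the rules are as stated: nine boxes each truly containing 45 coins, labeled 45 + 1d20 - 1d20, against a safe box worth about 50):

```python
import random

def play_round(rng):
    """One round: label the nine noisy boxes, pick the highest label."""
    labels = [45 + rng.randint(1, 20) - rng.randint(1, 20) for _ in range(9)]
    # Whatever the label says, every noisy box really holds 45 coins,
    # while the boring 50,1 box reliably holds about 50.
    return max(labels), 45

rng = random.Random(1)
rounds = [play_round(rng) for _ in range(10000)]
avg_best_label = sum(label for label, _ in rounds) / len(rounds)
print(avg_best_label)  # the winning label averages well above 50, yet the box holds 45
```

The label-chaser is drawn in by a number in the high 50s every round, and walks away with 45 coins every round; the safe box wins.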

Comment author: Brickman 28 September 2011 01:58:06AM 0 points [-]

I'm trying to figure out why, from the rules you gave at the start, we can assume that box 60 has more noise than the other boxes with variance of 20. You didn't, at the outset of the problem, say anything about what the values in the boxes actually were. I would not, taking this experiment, have been surprised to see a box labeled "200", with a variance of 20, because the rules didn't say anything about values being close to 50, just close to A. Well, I would've been surprised with you as a test-giver, but it wouldn't have violated what I understood the rules to be and I wouldn't have any reason to doubt that box was the right choice.

The box with 60 stands out among the boxes with high variance, but you did not say that those boxes were generated with the same algorithm and thus have the same actual value. In fact you implied the opposite. You just told me that 60 was an estimate of its expected value, and 37 was an estimate of one of the other boxes' expected values. So I would assign a very high probability to it being worth more than the box labeled 37. I understand that the variance is effectively applied twice to go from the number on the box to the real number of coins (the real number of 45 could produce an estimate anywhere from 25 to 65, but if it hit 25 I'd assign the real number a lower bound of 5, and if it hit 65 I'd assign the real number an upper bound of 85, which is twice that range). (Actually, for that reason I'm not sure your algorithm really means there's a variance of 20 from what you state the expected value to be, but I don't feel like doing all the math to verify that, since it's tangential to the message I'm hearing from you or what I'm saying.) But that doesn't change the average. Nothing stops the range of values that my box labeled 60 could really contain from being higher than the range the box labeled 37 could really contain, to the best of my knowledge, and both are most likely to fall within a couple of coins of the center of that range, with the highest probability concentrated on the exact number.

If the boxes really did contain different numbers of coins, or we just didn't have reason to assume that they don't, the box labeled 60 is likely to contain more coins than that 50/1 box. It is also capable of undershooting 50 by ten times as much if unlucky, so if for some reason I absolutely cannot afford to find fewer than 50 coins in my box, the 50/1 box is the safer choice--but if I bet on the 60/20 box 100 times and you bet on the 50/1 box 100 times, given the rules you set out at the beginning, I would walk away with 20% more money.

Or am I missing some key factor here? Did I misinterpret the lesson?

Comment author: JoshuaZ 27 September 2011 03:15:13AM 1 point [-]

Loophole: Harry doesn't want to use the stone, he wants to reverse engineer it, and mass produce more. So he can easily commit to not using the stone.

Comment author: Brickman 27 September 2011 12:30:51PM 8 points [-]

The problem is, Dumbledore's not going to tell Harry what the condition is for getting the stone. Why would he? He didn't tell canon Quirrell, who was standing there trying to figure out why he couldn't get it. He didn't even tell canon Harry until after the fact. The mirror as a screening process works even better if the person being screened doesn't know what it's testing for, and thus can't fake it.

And Harry would want to use the stone, make no mistake. The first thing he'd do with it is make himself immortal, to make sure no accident or fluke could stop him from having time to mass produce the immortality elixir. And he'd be using it for study anyways. But the most important part is that even if he is capable of precommitting and one-boxing, and even if that kind of trick fools the mirror, he'd first need to know that that was the condition necessary to obtain the stone. And you can probably count the number of people Dumbledore trusts with that information on one hand.

Comment author: Asymmetric 25 September 2011 06:35:12PM 5 points [-]

That brings up another point. In the Philosopher's Stone, Dumbledore enchants Erised so that only those who want to find the stone, but not use it, would be able to have it. If Dumbledore did in fact hide the stone in Hogwarts, I can't see either Harry or Quirrell not wanting to use the stone.

Is it even possible for Dumbledore to hide anything in such a way that Harry can get at it, but Quirrell cannot? Harry's major ideal difference -- his war against death -- isn't even understood by Dumbledore. Not to mention that such a hiding place would have been constructed before Dumbledore even met Harry.

Comment author: Brickman 27 September 2011 02:55:34AM 7 points [-]

I think you hit on a key point that several are missing--Dumbledore wouldn't want HJPEV to have the stone any more than he'd want Quirrell to (well, maybe a little more, but certainly less than nobody having it, or even than handing it off to, say, some random Hufflepuff). In canon, Harry didn't just not want to use it, he didn't want it used--that was his entire motivation for getting it. Rational Harry would, probably quite literally given enough time to think on the situation, kill to use it, and use it repeatedly. And Dumbledore knows this.

Canon Harry was, in fact, a person Dumbledore would be willing to loan the stone to if necessary. Rational Harry is not. The mirror actually represents a pretty effective screening process for who does and doesn't fall into that category, especially combined with what in theory should have been a screening test to ensure you were a capable enough wizard to protect it and/or had the approval of several people Dumbledore trusted in a more general capacity. In fact, now that I say that, it suddenly seems plausible that the mirror wasn't in any way tied to how the stone was hidden, and instead was just the trigger used for retrieving it. In which case a sufficiently powerful wizard with sufficient time could probably deconstruct the spell and take it by force, simply because no lock is perfect--which is why the stone still needed to be guarded in the first place, and why stopping Quirrell was necessary.

Comment author: Brickman 27 July 2011 01:07:43AM 0 points [-]

Despite having seen you say it in the past, it wasn't until reading this article that it sank in for me just how little danger we were actually in of Eliezer1997 (or even Eliezer2000) actually making his AI. He had such a poor understanding of the problem that I don't see how he could've gotten there from here without having to answer the question of "Okay, now what do I tell the AI to do?" The danger was in us almost never getting Eliezer2008, or in Eliezer2000 wasting a whole bunch of future-minded people's money getting to the point where he realized he was stuck.

Except I suppose he did waste a lot of other people's money and delay present-you by several years. So I guess that danger wasn't entirely dodged after all. And maybe you did have something you planned to tell the AI to do anyways, something simple and useful sounding in and of itself with a tangible result. Probably something it could do "before" solving the question of what morality is, as a warmup. That's what the later articles in this series suggest, at least.

I also peeked at the Creating Friendly AI article just to see it. That, unlike this, looks like the work of somebody who is very, very ready to turn the universe into paperclips. There was an entire chapter about why the AI probably won't ever learn to "retaliate", as if that was one of the most likely ways for it to go wrong. I couldn't even stand to read more than half a chapter and I'm not you.

"To the extent that they were coherent ideas at all" you've said of half-baked AI ideas in other articles. It's nice to finally understand what that means.

Comment author: Brickman 25 July 2011 03:01:57AM *  1 point [-]

I'm kind of surprised at how complicated everyone is making this, because to me the Bayesian answer jumped out as soon as I finished reading your definition of the problem, even before the first "argument" between one-boxers and two-boxers. And it's about five sentences long:

Don't choose an amount of money. Choose an expected amount of money--the dollar value multiplied by its probability. One-box gets you >(1,000,000*.99). Two-box gets you <(1,000*1+1,000,000*.01). One-box has superior expected returns. Probability theory doesn't usually encounter situations in which your decision can affect the prior probabilities, but it's no mystery what to do when that situation arises--the same thing as always, maximize that utility function.
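
The arithmetic, spelled out (a sketch using the 99% predictor accuracy assumed above, with the dollar amounts from the standard statement of the problem):

```python
# Box B holds $1,000,000 iff the predictor (99% accurate here) foresaw one-boxing;
# the transparent box always holds $1,000.
p_correct = 0.99

ev_one_box = p_correct * 1_000_000                 # predictor usually filled box B
ev_two_box = 1_000 + (1 - p_correct) * 1_000_000   # sure $1,000 plus a 1% miss

print(ev_one_box, ev_two_box)  # roughly 990,000 vs 11,000
```

One-boxing dominates by nearly two orders of magnitude, which is the whole argument in one line of arithmetic.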

Of course, while I can be proud of myself for spotting that right away, I can't be too proud, because I know I was helped a lot by the fact that my mind was already in a "thinking about Eliezer Yudkowsky" mode--a mode it's not necessarily in by default, and might not be in when I'm presented with a dilemma (unless I make a conscious effort to put it there, which I guess I now stand a better chance of doing). I was expecting a Bayesian solution to the problem and spotted it even though it wasn't even the point of the example. I've seen this problem before, after all, without the context of it being brought up by you, and I certainly didn't come up with that solution at the time.
