Comment author: 75th 23 March 2012 02:27:05AM *  5 points [-]

CHAPTER 80 SPOILERS BELOW

Well. We have five days to think of something. This seems to mean that Harry will think of something, and we have five days to guess what it may be. Presumably it will be something in one of the following categories:

  • Something about Lucius Malfoy
  • Something about the Wizengamot
  • Something about the laws of magical Britain
  • Anything about some person or thing within range of his vision

I propose we start by making a list of everything in the courtroom:

  • Three Aurors
    • One of whom is named Gawain Robards
  • A dementor
  • Minerva McGonagall
  • Harry Potter
    • And everything in his pouch
  • A Prophet reporter
  • Dolores Umbridge
  • Lucius Malfoy
  • Augusta Longbottom
  • Dumbledore
  • A man with a scarred face sitting next to Lucius; Fenrir Greyback?
  • Amelia Bones

What do we know about any of these people that Harry might use to sway the crowd?

Comment author: Bongo 23 March 2012 04:30:16AM 5 points [-]

Harry didn't hear Hermione's testimony. Therefore, he can go back in time and change it to anything that would produce the audience reaction he saw, without causing paradox.

Comment author: Mitchell_Porter 17 March 2012 03:12:03PM 12 points [-]

I see this post is gathering downvotes (-3 so far) but no comments at all. It would be helpful if someone managed to put their reaction into words, and not just into a downvote.

Perhaps the "scenario" seems arbitrary or the purpose of the post is obscure. To some extent I was just musing aloud on the implications of a new fact. I knew intellectually that the NSA has its billion-dollar budgets and its thousands of PhD mathematicians, and the creation of AI in a secret military project is a standard fictional trope. But hearing about this specific facility concretized everything for me and stirred my imagination.

My whimsy about a clique of singularitarian Mormon computer scientists may be somewhat arbitrary. But consider this: who is more likely to create the first AGI - the Singularity Institute, or the National Security Agency? The answer to that can hardly be in doubt. The NSA's mission is to stay ahead of everyone else in matters like code-breaking and quantitative data analysis. They have to remain number one in theoretical computer science, and they have a budget of billions with which to accomplish that goal.

So if the future hinges on the value system of the first AI, then what goes on inside the NSA is far more important than what goes on at singinst.org. The Singularity Institute may have adopted a goal - design and create friendly AI - which, according to the Institute's own philosophy, means that they would determine the future of the human race; and some of the controversy about the Institute, its methods, personalities, etc, is coming about because of this. But if you accept the philosophy, then the NSA is surely the number-one candidate to actually decide the fate of the world. Outsiders will not get to decide what happens; the most we can reasonably hope to do is to make correct observations that might be noticed and taken into account by the people on the inside who will, for better or worse, make the fateful decisions.

Of course it is theoretically possible that Google, IBM, the FSB, Japan's biggest supercomputer... will instead be ground zero for the intelligence explosion. But I would think that the NSA is well ahead of all of them.

Comment author: Bongo 17 March 2012 06:16:33PM 3 points [-]

I almost downvoted this because when I clicked on it from my RSS reader, it appeared to have been posted on main LW instead of discussion (known bug). This might be the reason for a lot of mysterious downvoting, actually.

Comment author: Bongo 17 March 2012 02:28:47AM *  1 point [-]

(Bug report: I was sent to this post via this link, and I see MAIN bolded above the title instead of DISCUSSION. The URL is misleading too; shouldn't URLs of discussion posts contain "/r/discussion/" instead of "/lw"?)

(EDIT: Grognor just told me that "every discussion post has a main-style URL that bolds MAIN")

Comment author: timtyler 15 March 2012 11:25:32AM *  15 points [-]

Perhaps consider adding to the list the high fraction of revenue that ultimately goes to paying staff wages.

Oh yes, and the fact that the leader wants to SAVE THE WORLD.

Comment author: Bongo 15 March 2012 10:00:32PM *  5 points [-]

fraction of revenue that ultimately goes to paying staff wages

About a third in 2009, the last year for which we have handy data.

Comment author: Bongo 15 March 2012 02:15:18PM 1 point [-]

Snape says this in both MoR and the original book:

"I can teach you how to bottle fame, brew glory, even stopper death"

Isn't this silly? Of course you can stopper death, because duh, poisons exist.

It might be just a slip-up in the original book, but I'm hoping it will somehow make sense in MoR. My first thought was that maybe a magical death potion couldn't be counteracted by magical healing, unlike non-magical poisons.

I asked this on IRC and got some interesting ideas. feep thought it might mean that you can make a Potion of Dementor, which would fit, since dementors are avatars of death in MoR, and stoppering death would be actually impressive if it meant that. Orionstein suggested it might be a potion made from, e.g., a bullet that has killed someone, which, given what we know of how potions work from chapter 78, might also result in a potion with deathly effects above and beyond just those of a poison.

Comment author: Wei_Dai 21 February 2012 11:43:58PM 5 points [-]

I intended [...]

But some people seem to have read it and heard this instead [...]

When I write posts, I'd often be tempted to use examples from my own life, but then I'd think:

  1. Do I really just intend to use myself to illustrate some point of rationality, or do I subconsciously also want to raise my social status by pointing out my accomplishments?
  2. Regardless of what I "really intend", others will probably see those examples as boasting, and there's no excuse (e.g., that I couldn't find any better examples) I can make to prevent that.

This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished. I'm not saying that you should do the same, since you have different costs and benefits to consider (or I could well be wrong myself and shouldn't care so much about not being seen as boasting), but the fact that people read your posts, filled with personal examples and accomplishments, as arrogant shouldn't have come as a surprise.

Another point I haven't seen brought up yet is that social conventions seem to allow organizations to be more boastful than individuals. You'd often see press releases or annual reports talking up an organization's own accomplishments, while an individual doing the same thing would be considered arrogant. So an idea to consider is that when you want to boast of some accomplishment, link it to the Institute and not to an individual.

Comment author: Bongo 22 February 2012 08:15:43PM 4 points [-]

This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished.

You could just tell the story with "me" replaced by "my friend" or "someone I know" or "Bob". I'd hate to miss a W_D post because of a trivial thing like this.

Comment author: Will_Newsome 27 December 2011 10:24:15AM -4 points [-]

A really intelligent response,

Er, I thought it was overall pretty lame, e.g. the whole question-begging w.r.t. the 'prior probability of omnibenevolent omnipowerful thingy' thingy (nothing annoys me more than abuses of probability theory these days, especially abuses of algorithmic probability theory). Perhaps you are conceding too much in order to appear reasonable. Jesus wasn't very polite.

By the way, in case you're not overly familiar with the heuristics and biases literature, let me give you a hint: it sucks. At least the results that most folk around here cite have basically nothing to do with rationality. There's some quite good stuff with tons of citations, e.g. Gigerenzer's, but Eliezer barely mentioned it to Less Wrong (he endorsed it as fastandfrugal.com), and therefore, as expected, Less Wrong doesn't know about it. (Same with interpretations of quantum mechanics, as Mitchell Porter often points out. I really hope that Eliezer is pulling some elaborate prank on humanity. Maybe he's doing it unwittingly.)

Anyway the upshot is that when people tell you about 'confirmation bias' as if it existed in the sense they think it does then they probably don't know what the hell they're talking about and you should ignore them. At the very least don't believe them until you've investigated the literature yourself. I did so and was shocked at how downright anti-informative the field is, and less shocked but still shocked at how incredibly useless statistics is (both Bayesianism as a theoretical normative measure and frequentism as a practical toolset for knowledge acquisition). The opposite happened with the parapsychology literature, i.e. low prior, high posterior. Let's just say that it clearly did not confirm my preconceptions; lolol.

Lastly, towards the esoteric end: All roads lead to Rome, if you'll pardon a Catholicism. If they don't, it's not because the world is mad qua mad; it is because it is, alas, sinful. An easy way to get to hell is to fall into a fully-general-counterargument black hole, or maybe a literal black hole. Those things freak me out.

(P.S. My totally obnoxious arrogance is mostly just a passive aggressive way of trolling LW. I'm not actually a total douchebag IRL. /recursive-compulsive-self-justification)

Comment author: Bongo 27 December 2011 08:35:10PM *  3 points [-]

I ... was shocked at how downright anti-informative the field is

Explain?

shocked at how incredibly useless statistics is

Explain?

The opposite happened with the parapsychology literature

Elaborate?

Comment author: Will_Newsome 27 December 2011 01:18:03PM *  -6 points [-]

I love how Less Wrong basically thinks that all evidence that doesn't support its favored conclusion is bad because it just leads to confirmation bias. "The evidence is on your side, granted, but I have a fully general counterargument called 'confirmation bias' that explains why it's not actually evidence!" Yeah, confirmation bias, one of the many claimed cognitive biases that arguably doesn't actually exist. (Eliezer knew about the controversy, which is why his post is titled "Positive Bias", which arguably also doesn't exist, especially not in a cognitively relevant way.) Then they talk about Occam's razor while completely failing to understand what algorithmic probability is actually saying. Hint: It definitely does not say that naturalistic mechanistic universes are a priori more probable! It's like they're trolling and I'm not supposed to feed them but they look sort of like a very hungry, incredibly stupid puppy.

Comment author: Bongo 27 December 2011 08:20:25PM 1 point [-]

algorithmic probability ... does not say that naturalistic mechanistic universes are a priori more probable!

Explain?

Comment author: Bongo 27 December 2011 08:19:45PM *  2 points [-]

confirmation bias ... doesn't actually exist.

Explain?

In response to comment by [deleted] on Welcome to Less Wrong!
Comment author: XangLiu 19 December 2011 06:46:26PM 30 points [-]

The point has been missed. Deep breath, paper-machine.

Nearly any viewpoint is capable of doing, and has done, cruel things to others. There's no reason to unnecessarily highlight this fact and dramatize the Party of Suffering. This was an intro thread by a newcomer - not a reason to point to you and "your" people. They can speak for themselves.

Comment author: Bongo 19 December 2011 06:57:48PM *  5 points [-]

I wonder how this comment got 7 upvotes in 9 minutes.

EDIT: Probably the same way this comment got 7 upvotes in 6 minutes.
