
Comment author: 75th 23 March 2012 02:27:05AM *  5 points [-]

CHAPTER 80 SPOILERS BELOW

Well. We have five days to think of something. This seems to mean that Harry will think of something, and we have five days to guess what it may be. Presumably it will be something in one of the following categories:

  • Something about Lucius Malfoy
  • Something about the Wizengamot
  • Something about the laws of magical Britain
  • Anything about some person or thing within range of his vision

I propose we start by making a list of everything in the courtroom:

  • Three Aurors
    • One of whom is named Gawain Robards
  • A dementor
  • Minerva McGonagall
  • Harry Potter
    • And everything in his pouch
  • A Prophet reporter
  • Dolores Umbridge
  • Lucius Malfoy
  • Augusta Longbottom
  • Dumbledore
  • A man with a scarred face sitting next to Lucius; Fenrir Greyback?
  • Amelia Bones

What do we know about any of these people that Harry might use to sway the crowd?

Comment author: Bongo 23 March 2012 04:30:16AM 5 points [-]

Harry didn't hear Hermione's testimony. Therefore, he can go back in time and change it to anything that would produce the audience reaction he saw, without causing paradox.

Comment author: Mitchell_Porter 17 March 2012 03:12:03PM 12 points [-]

I see this post is gathering downvotes (-3 so far) but no comments at all. It would be helpful if someone managed to put their reaction into words, and not just into a downvote.

Perhaps the "scenario" seems arbitrary or the purpose of the post is obscure. To some extent I was just musing aloud on the implications of a new fact. I knew intellectually that the NSA has its billion-dollar budgets and its thousands of PhD mathematicians, and the creation of AI in a secret military project is a standard fictional trope. But to hear about this specific facility concretized everything for me, and stirred my imagination.

My whimsy about a clique of singularitarian Mormon computer scientists may be somewhat arbitrary. But consider this: who is more likely to create the first AGI - the Singularity Institute, or the National Security Agency? The answer to that can hardly be in doubt. The NSA's mission is to stay ahead of everyone else in matters like code-breaking and quantitative data analysis. They have to remain number one in theoretical computer science, and they have a budget of billions with which to accomplish that goal.

So if the future hinges on the value system of the first AI, then what goes on inside the NSA is far more important than what goes on at singinst.org. The Singularity Institute may have adopted a goal - design and create friendly AI - which, according to the Institute's own philosophy, means that they would determine the future of the human race; and some of the controversy about the Institute, its methods, its personalities, and so on, stems from this. But if you accept the philosophy, then the NSA is surely the number-one candidate to actually decide the fate of the world. Outsiders will not get to decide what happens; the most we can reasonably hope to do is to make correct observations that might be noticed and taken into account by the people on the inside who will, for better or worse, make the fateful decisions.

Of course it is theoretically possible that Google, IBM, the FSB, Japan's biggest supercomputer... will instead be ground zero for the intelligence explosion. But I would think that the NSA is well ahead of all of them.

Comment author: Bongo 17 March 2012 06:16:33PM 3 points [-]

I almost downvoted this because when I clicked on it from my RSS reader, it appeared to have been posted on main LW instead of discussion (known bug). This might be the reason for a lot of mysterious downvoting, actually.

Comment author: Bongo 17 March 2012 02:28:47AM *  1 point [-]

(Bug report: I was sent to this post via this link, and I see MAIN bolded above the title instead of DISCUSSION. The URL is misleading too; shouldn't URLs of discussion posts contain "/r/discussion/" instead of "/lw"?)

(EDIT: Grognor just told me that "every discussion post has a main-style URL that bolds MAIN")

Comment author: timtyler 15 March 2012 11:25:32AM *  15 points [-]

Perhaps consider adding to the list the high fraction of revenue that ultimately goes to paying staff wages.

Oh yes, and the fact that the leader wants to SAVE THE WORLD.

Comment author: Bongo 15 March 2012 10:00:32PM *  5 points [-]

fraction of revenue that ultimately goes to paying staff wages

About a third in 2009, the last year for which we have handy data.

Comment author: Bongo 15 March 2012 02:15:18PM 1 point [-]

Snape says this in both MoR and the original book:

"I can teach you how to bottle fame, brew glory, even stopper death"

Isn't this silly? Of course you can stopper death, because duh, poisons exist.

It might be just a slip-up in the original book, but I'm hoping it will somehow make sense in MoR. My first thought was that maybe a magical death potion couldn't be counteracted by magical healing, unlike non-magical poisons.

I asked this on IRC and got some interesting ideas. feep thought it might mean that you can make a Potion of Dementor, which would fit, since dementors are avatars of death in MoR and stoppering death would be actually impressive if it meant that. Orionstein suggested it might be a potion made from, e.g., a bullet that has killed someone, which, given what we know of how potions work from chapter 78, might also result in a potion with deathy effects above and beyond just those of poison.

Comment author: Wei_Dai 21 February 2012 11:43:58PM 5 points [-]

I intended [...]

But some people seem to have read it and heard this instead [...]

When I write posts, I'd often be tempted to use examples from my own life, but then I'd think:

  1. Do I really just intend to use myself to illustrate some point of rationality, or do I subconsciously also want to raise my social status by pointing out my accomplishments?
  2. Regardless of what I "really intend", others will probably see those examples as boasting, and there's no excuse (e.g., "I couldn't find any better examples") I can make to prevent that.

This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished. I'm not saying that you should do the same since you have different costs and benefits to consider (or I could well be wrong myself and shouldn't care so much about not being seen as boasting), but the fact that people interpret your posts filled with personal examples/accomplishments as being arrogant shouldn't have come as a surprise.

Another point I haven't seen brought up yet is that social conventions seem to allow organizations to be more boastful than individuals. You'd often see press releases or annual reports talking up an organization's own accomplishments, while an individual doing the same thing would be considered arrogant. So an idea to consider is that when you want to boast of some accomplishment, link it to the Institute and not to an individual.

Comment author: Bongo 22 February 2012 08:15:43PM 4 points [-]

This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished.

You could just tell the story with "me" replaced by "my friend" or "someone I know" or "Bob". I'd hate to miss a W_D post because of a trivial thing like this.

In response to comment by [deleted] on Welcome to Less Wrong!
Comment author: XangLiu 19 December 2011 06:46:26PM 30 points [-]

The point has been missed. Deep breath, paper-machine.

Nearly any viewpoint is capable of doing cruel things to others, and has done them. There's no reason to unnecessarily highlight this fact and dramatize the Party of Suffering. This was an intro thread by a newcomer - not a reason to point to you and "your" people. They can speak for themselves.

Comment author: Bongo 19 December 2011 06:57:48PM *  5 points [-]

I wonder how this comment got 7 upvotes in 9 minutes.

EDIT: Probably the same way this comment got 7 upvotes in 6 minutes.

Comment author: knb 02 December 2011 09:00:46PM 5 points [-]

This is a bad idea. Attempting to create personal relationships will just accelerate LW's degeneration into a typical internet hugbox. People will start supporting or opposing ideas based on whether they are "e-friends".

Comment author: Bongo 07 December 2011 08:26:11AM 1 point [-]

This could be an option.

Comment author: gwern 04 December 2011 05:56:59PM *  5 points [-]

I was musing on the old joke about anti-Occamian priors or anti-induction: 'why are they sure it's a good idea? Well, it's never worked before.' Obviously this is a bad idea for our kind of universe, but what kind of universe does it work in?

Well, in what sort of universe would every failure of X to appear in a given time interval make X that much more likely? It sounds vaguely like the hope function, but actually sounds more like an urn of balls that you sample without replacement: every ball you pull (and discard) without finding X makes you a little more confident that the next one will be X. Well, what kind of universe sees its possibilities shrinking every time?
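As a quick sanity check, here is a minimal sketch of the urn arithmetic in Python (the 10-ball urn with exactly one X ball is just an illustrative assumption, not anything from the original comment):

    from fractions import Fraction

    # Illustrative setup: an urn with n_total balls, exactly one of which is X.
    # After n_drawn non-X balls have been pulled and discarded, the chance that
    # the *next* draw is X is 1/(n_total - n_drawn) -- it rises with every
    # failure to see X, which is the anti-inductive pattern described above.
    def p_next_is_x(n_total, n_drawn):
        return Fraction(1, n_total - n_drawn)

    for k in range(9):
        print(k, p_next_is_x(10, k))  # 1/10, 1/9, 1/8, ..., 1/2

Every draw that fails to produce X raises the probability of X on the next draw, so "it's never worked before" really is evidence that it will work next time.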

For some reason, entropy came to mind. Our universe moves from low to high entropy, and we use induction. If a universe moved in the opposite direction, from high to low entropy, would its minds use anti-induction? (Minds seem like they'd be possible, if odd; our minds require local lowering of entropy to operate in an environment of increasing entropy, so why not anti-minds which require local raising of entropy to operate in an environment of decreasing entropy - somewhat analogous to irreversible computers expending energy to erase bits.)

I have no idea if this makes any sense. (To go back to the urn model, I was thinking of it as sort of a cellular automaton mental model where every turn the plane shrinks: if you are predicting a glider as opposed to a huge Turing machine, then as the turns pass and the plane shrinks, you would expect less and less to see the Turing machine survive and more and more to see a glider show up. Or if we were messing with geometry, it'd be as if we were given a heap of polygons with thousands of sides where every second a side was removed, and predicted a triangle - as the seconds pass, we don't see any triangles, but Real Soon Now... Or to put it another way, as entropy decreases, necessarily fewer and fewer arrangements show up; particular patterns get jettisoned as entropy shrinks, and so having observed a particular pattern, it's unlikely to sneak back in: if the whole universe freezes into one giant simple pattern, the anti-inductionist mind would have been quite right to expect all but one of its observations not to repeat. Unlike our universe, where there seem to be ever more arrangements as things settle into thermal noise: if an arrangement shows up, we'll be seeing a lot of it around. Hence, we start with simple low-entropy predictions and decreasing confidence.)

Boxo suggested that anti-induction might be formalizable as the opposite of Solomonoff induction, but I couldn't see how that'd work: if it simply picks the opposite of a maximizing AIXI and minimizes its score, then it's the same thing but with an inverse utility function.

The other thing was putting a different probability distribution over programs, one that increases with length. But while uniform distributions over all the infinitely many integers are forbidden, and non-uniform decreasing distributions (like the speed prior or random exponentials) are allowed, it's not at all obvious what a non-uniform increasing distribution would look like - apparently it doesn't work to say 'infinite-length programs have p=0.5, then infinity-1 have p=0.25, then infinity-2 have p=0.125... then programs of length 1/0 have p=0'.

Comment author: Bongo 04 December 2011 06:06:24PM *  4 points [-]

(An increasing probability distribution over the natural numbers is impossible. The sequence (P(1), P(2),...) would have to 1) be increasing 2) contain a nonzero element 3) sum to 1, which is impossible.)
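Spelled out a little more, as a one-line derivation in LaTeX (the names k and epsilon are just labels for the argument above):

    % If P is increasing and P(k) = \epsilon > 0 for some k, then
    % P(n) \ge \epsilon for every n \ge k, and the total mass diverges:
    \sum_{n=1}^{\infty} P(n) \;\ge\; \sum_{n=k}^{\infty} P(n)
        \;\ge\; \sum_{n=k}^{\infty} \epsilon \;=\; \infty \;\neq\; 1.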

Comment author: JoshuaZ 01 December 2011 06:39:38PM *  10 points [-]

There's a related problem; Humans have a tendency to once they have terms for something take for granted that something that at a glance seem to make rough syntactic sense actually has semantics behind it. A lot of theology and the bad ends of philosophy have this problem. Even math has run into this issue. Until limits were defined rigorously in the mid-19th century, there was disagreement over what the limit of 1 - 1 + 1 - 1 + 1 - 1 + 1 ... was. Is it 1, because one can group it as 1 + (-1 + 1) + (-1 + 1) ...? Or maybe it is zero, since one can write it as (1 - 1) + (1 - 1) + (1 - 1) ...? This did, however, lead to good math and other notions of limits, including the entire area of what would later be called Tauberian theorems.
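For reference, here are the two groupings written out in LaTeX, along with the value that the later, rigorous summation methods assign:

    % The two groupings disagree:
    (1 - 1) + (1 - 1) + (1 - 1) + \cdots = 0,
    \qquad 1 + (-1 + 1) + (-1 + 1) + \cdots = 1.
    % The partial sums are 1, 0, 1, 0, \ldots, so the series diverges in the
    % modern sense; Cesaro and Abel summation both assign it the value 1/2.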

Comment author: Bongo 03 December 2011 08:03:35AM *  2 points [-]

There's a related problem; Humans have a tendency to once they have terms for something take for granted that something that looks at a glance to make rough syntactic sense that it actually has semantics behind it.

This sentence is so convoluted that at first I thought it was some kind of meta joke.
