Comment author: gattsuru 04 March 2015 04:34:39PM 30 points [-]

Is that what we've seen presented so far?

Dumbledore won during the Battle of the Three Armies. His assault on Azkaban would have gotten him killed (and more seriously, set back his efforts by years) over a stupid communication error, were Harry not willing to risk his own life and invent new magic to save the man. Hermione outlasted several hours of the Defense Professor's most aggressive possible psychological attacks, using fairly basic deontology. His 'lesson plan' with Ma-Ha-Su in Chapter 16 was bluntly stupid, even if Harry hadn't used the easy way out. In Chapter 35, he fears that Harry has screwed over his plans because Harry voiced an obvious disagreement that he had repeatedly expressed in private before.

And that's before we get to the stupidity that was enforced by canon: testing multiple novel spells (Horcruxes, however he 'reformatted' the young Harry Potter) without sufficient and verified safeties, the highly fractious Death Eaters, the lackluster war with Dumbledore.

Quirrellmort is smart. He thinks ahead. But his fundamental philosophy is still very restricted. As much as he tries to claim otherwise, he's running on distilled Command Push -- we'll note that no Death Eater gave him advice in this chapter, nor would we expect them to. His speech in Chapter 34 follows the same philosophy.

But more importantly, he underestimates risks. He's a partially-formed rationalist, who has heard of Kolmogorov complexity but can't quite understand why he should shut-up-and-multiply yet. He leaves Harry his wand because a wanded Harry is a threat only if he has magic that is a) wordless, b) motionless, c) wanded, d) able to instantly disable Death Eaters, e) able to hit him at all, and f) a threat to an immortal. It's understandable not to think Harry is a risk. A full-grown wizard in the same environment wouldn't have been a risk -- Dumbledore or Mad-Eye Moody would have died, and died quickly. That's not as unreasonable a mistake as you'd expect.

Comment author: Eliezer_Yudkowsky 04 March 2015 05:41:19PM 12 points [-]

THANK YOU.

Comment author: ChristianKl 26 February 2015 09:24:24AM -1 points [-]

And unfortunately, in a serial you can't go back and change these things if you realize that you needed to foreshadow them a few chapters back.

I don't think that's how Eliezer treats it. The reference in the first chapter to the centaur forecasting that giving Petunia the beauty potion will end the world wasn't there at the start; it was added later.

Comment author: Eliezer_Yudkowsky 26 February 2015 11:50:58PM 12 points [-]

It was there on day 1.

Comment author: TobyBartels 24 February 2015 10:39:16PM 5 points [-]

Well, you kept it out for a long time.

Comment author: Eliezer_Yudkowsky 26 February 2015 11:43:09PM 1 point [-]

A ShoutOut is not the same as contaminating the plot.

Comment author: solipsist 30 July 2014 11:11:58AM 9 points [-]

We have a bet. It's on!

Comment author: Eliezer_Yudkowsky 24 February 2015 04:33:32PM 6 points [-]

By request, I declare solipsist to have lost this bet.

Comment author: pedanterrific 27 September 2011 08:20:40PM 50 points [-]

Random, low-confidence but possibly amusing prediction: in MoR the final obstacle of the third-floor corridor is called the Mirror of Vec, because it's inscribed Noiti lovde talopart xet nere hocru oyt ube cafru oyt on wohsi.

It's much more thematic, at least.

Comment author: Eliezer_Yudkowsky 23 February 2015 09:02:28PM 31 points [-]

Great idea! I should do that.

Comment author: somervta 30 January 2015 11:48:48PM 2 points [-]

I think you overestimate the likelihood that EY even read your comment. I doubt he reads all comments on hpmor discussion anymore.

Comment author: Eliezer_Yudkowsky 02 February 2015 07:39:49PM 12 points [-]

cough

Comment author: Eliezer_Yudkowsky 08 January 2015 11:22:51PM 61 points [-]

For what it’s worth, I endorse this aesthetic and apologize for any role I played in causing people to focus too much on the hero thing. You need a lot of nonheroes per hero and I really want to validate the nonheroes but I guess I feel like I don’t know how, or like it’s not my place to say because I didn’t make the same sacrifices… or what feels to me like it ought to be a sacrifice, only maybe it’s not.

Comment author: MuonManLaserJab 30 December 2014 02:56:13AM *  1 point [-]

Wouldn't this require one Quirrell to agree to sacrifice a part of his power before any other Quirrell does? (Assuming that all of the vow rituals taking place at the same time would require each Quirrell to take part in more than one ritual simultaneously, which doesn't seem possible.) It seems to me that a Quirrell wouldn't agree to this.

Comment author: Eliezer_Yudkowsky 30 December 2014 04:35:06AM 4 points [-]

You don't have to sacrifice your own power for that, the bonder sacrifices power. And the Unbreakable Vow could be worded to only come into force once all Vows were taken.

Comment author: jessicat 11 December 2014 10:37:03PM *  38 points [-]

Transcript:

Question: Are you as afraid of artificial intelligence as your PayPal colleague Elon Musk?

Thiel: I'm super pro-technology in all its forms. I do think that if AI happened, it would be a very strange thing. Generalized artificial intelligence. People always frame it as an economic question, it'll take people's jobs, it'll replace people's jobs, but I think it's much more of a political question. It would be like aliens landing on this planet, and the first question we ask wouldn't be what does this mean for the economy, it would be are they friendly, are they unfriendly? And so I do think the development of AI would be very strange. For a whole set of reasons, I think it's unlikely to happen any time soon, so I don't worry about it as much, but it's one of these tail risk things, and it's probably the one area of technology that I think would be worrisome, because I don't think we have a clue as to how to make it friendly or not.

Comment author: Eliezer_Yudkowsky 14 December 2014 07:00:33PM 7 points [-]

Context: Elon Musk thinks there's an issue in the 5-7 year timeframe (probably due to talking to Demis Hassabis at DeepMind, I would guess). By that standard I'm also less afraid of AI than Elon Musk, but as Rob Bensinger will shortly be fond of saying, this conflates AGI danger with AGI imminence (a very very common conflation).

Comment author: Viliam_Bur 08 December 2014 09:31:57PM 4 points [-]

As a moderator, when you look at someone else's comment, there should be an additional option between the "Permalink" and "Get notifications" buttons. (Parent, Reply, Permalink, Ban, Notifications.) If you click it, it will change to "Unban".

Comment author: Eliezer_Yudkowsky 09 December 2014 10:02:56PM 12 points [-]

Found the correct control. For mods, the link is:

And Azathoth123 is out. It's not very good, but it's the best I can do - I encourage everyone to help Viliam make the software support better.
