Canon is fairly clear that Hogwarts is the only game in Britain, which leads to exactly the glaring inconsistencies in scale you just pointed out. (Rowling originally said that Hogwarts had about 700 students, and fans started pointing out that that was wildly inconsistent with the school as she described it. And even that figure is too small to make things really work.)

But the evidence, from HP7 (page 210 of my first-run American hardback copy):

Lupin is talking to Harry and the others about Voldemort's takeover of Wizarding society.

"Attendance is now compulsory for every young witch and wizard," he replied. "That was announced yesterday. It's a change, because it was never obligatory before. Of course, nearly every witch and wizard in Britain has been educated at Hogwarts, but their parents had the right to teach them at home or send them abroad if they preferred. This way, Voldemort will have the whole Wizarding population under his eye from a young age."

"Most wizards" in Britain were educated at Hogwarts, and the exceptions were homeschooled or sent abroad. It's really hard to read that to imply that there's another British wizarding school anywhere.

There's another big pile of gold, about 7,000 tonnes, in the New York Fed--that's actually where a lot of foreign countries keep a large fraction of their gold reserves. It's open to tourists: you can walk in and look at the big stacks of gold bars. It does have fairly impressive security, but that security could plausibly be defeated by a reasonably competent wizard.

I believe this is a misreading; Winky was there, but the Dark Mark was cast by Barty Crouch Jr. From the climax of Book 4, towards the end of Chapter 35:

I wanted to attack them for their disloyalty to my master. My father had left the tent; he had gone to free the Muggles. Winky was afraid to see me so angry. She used her own brand of magic to bind me to her. She pulled me from the tent, pulled me into the forest, away from the Death Eaters. I tried to hold her back. I wanted to return to the campsite. I wanted to show those Death Eaters what loyalty to the Dark Lord meant, and to punish them for their lack of it. I used the stolen wand to cast the Dark Mark into the sky.

The claim wasn't that it happens too often to attribute to computational error, but that the types of differences seem unlikely to stem from computational error.

You're...very certain of what I understand. And of the implications of that understanding.

More generally, you're correct that people don't have a lot of direct access to their moral intuitions. But I don't actually see any evidence for the proposition that they should converge sufficiently, other than a lot of handwaving about the fundamental psychological similarity of humankind, which is more-or-less true but probably not true enough. In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these are all attributable to computational error.

I'm not disputing that we share a lot of mental circuitry, or that we can basically understand each other. But we can understand without agreeing, and be similar without being the same.

As for the last bit--I don't want to argue definitions either; it's a stupid pastime. But to the extent that Eliezer claims not to be a meta-ethical relativist, he's doing it purely through a definitional argument.

This comment may be a little scattered; I apologize. (In particular, much of this discussion is beside the point of my original claim that Eliezer really is a meta-ethical relativist; on that, see my last paragraph.)

I certainly don't think we have to escalate to violence. But I do think there are subjects on which we might never come to agreement even given arbitrary time and self-improvement and processing power. Some of these are minor judgments; some are more important. But they're very real.

In a number of places Eliezer has commented that he's not too worried about, say, two systems morality_1 and morality_2 that differ in the third decimal place. I think it's actually really interesting when they differ in the third decimal place; it's probably not important to the project of designing an AI, but I don't find that project terribly interesting, so that doesn't bother me.

But I'm also more willing to say to someone, "We have nothing to argue about [on this subject], we are only different optimization processes." With most of my friends I really do have to say this, as far as I can tell, on at least one subject.

However, I really truly don't think this is as all-or-nothing as you or Eliezer seem to paint it. First, because while morality may be a compact algorithm relative to its output, it can still be pretty big, and disagreeing seriously about one component doesn't mean you don't agree about the other several hundred. (A big sticking point between me and my friends is that I think getting angry is in general deeply morally blameworthy, whereas many of them believe that failing to get angry at outrageous things is morally blameworthy; and as far as I can tell this is more or less irreducible in the specification for all of us). But I can still talk to these people and have rewarding conversations on other subjects.

Second, because I realize there are other means of persuasion than argument. You can't argue someone into changing their terminal values, but you can often persuade them to do so through literature and emotional appeal, largely due to psychological unity. I claim that this is one of the important roles that story-telling plays: it focuses and unifies our moralities through more-or-less arational means. But this isn't an argument per se, and there's no particular reason to expect it to converge to a particular outcome--among other things, the result is highly contingent on what talented artists happen to believe. (See Rorty's Contingency, Irony, and Solidarity for discussion of this.)

Humans have a lot of psychological similarity. They also have some very interesting and deep psychological variation (see, e.g., Haidt's work on the five moral foundations). And it's actually useful to a lot of societies to have variation in moral systems--it's really useful to have some altruistic punishers, but not really for everyone to be an altruistic punisher.

But really, this is beside the point of the original question, whether Eliezer is really a meta-ethical relativist, because the limit of this sequence which he claims converges isn't what anyone else is talking about when they say "morality". Generally, "morality" is defined more or less to be a consideration that would/should be compelling to all sufficiently complex optimization processes. Eliezer clearly doesn't believe any such thing exists. And he's right.

Hm, that sounds plausible, especially your last paragraph. I think my problem is that I don't see any reason to suspect that the expanded-enlightened-mature-unfolding of our present usages will converge in the way Eliezer wants to use as a definition. See for instance the "repugnant conclusion" debate; people like Peter Singer and Robin Hanson think the repugnant conclusion actually sounds pretty awesome, while Derek Parfit thinks it's basically a reductio of aggregate utilitarianism as a philosophy. I'm pretty sure Eliezer agrees with Parfit, and has more or less explicitly identified it as a failure mode of AI development. I doubt these are beliefs that really converge with more information and reflection.

Or in steven's formulation, I suspect that relatively few agents actually have Ws in common; his definition presupposes that there's a problem structure "implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments". I'm arguing that many agents have sufficiently different implicit problem structures that, for instance, by that definition Eliezer and Robin Hanson can't really make "should" statements to each other.

I'm pretty sure Eliezer is actually wrong about whether he's a meta-ethical relativist, mainly because he's using words in a slightly different way from the way meta-ethical relativists use them. Or rather, he thinks that MER is using one specific word in a way that isn't really kosher. (A statement which I think he's basically correct about, but it's a purely semantic quibble and so a stupid thing to argue about.)

Basically, Eliezer is arguing that when he says something is "good" that's a factual claim with factual content. And he's right; he means something specific-although-hard-to-compute by that sentence. And similarly, when I say something is "good" that's another factual claim with factual content, whose truth is at least in theory computable.

But importantly, when Eliezer says something is "good" he doesn't mean quite the same thing I mean when I say something is "good." We actually speak slightly different languages in which the word "good" has slightly different meanings. Meta-ethical relativism, at least as summarized by Wikipedia, describes this fact with the sentence "terms such as 'good,' 'bad,' 'right' and 'wrong' do not stand subject to universal truth conditions at all." Eliezer doesn't like that, because within each speaker's language, terms like "good" do stand subject to universal truth conditions. But each speaker speaks a slightly different language, in which the word represented by the string "good" is subject to a slightly different set of universal truth conditions.

For an analogy: I apparently consistently define "blonde" differently from almost everyone I know. But it has an actual definition. When I call someone "blonde" I know what I mean, and people who know me well know what I mean. But it's a different thing from what almost everyone else means when they say "blonde." (I don't know why I can't fix this; I think my color perception is kinda screwed up). An MER guy would say that whether someone is "blonde" isn't objectively true or false because what it means varies from speaker to speaker. Eliezer would say that "blonde" has a meaning in my language and a different meaning in my friends' language, but in either language whether a person is "blonde" is in fact an objective fact.

And, you know, he's right. But we're not very good at discussing phenomena where two people speak the same language except that one or two words have different meanings; it's genuinely a hard thing to talk about. So in practice, "'good' doesn't have an objective definition" conveys my meaning more accurately to the average listener than "'good' has one objective meaning in my language and a different objective meaning in your language."

I was a grad student at Churchill, and we mostly ignored such things, but my girlfriend was an undergrad and felt compelled to educate me. I recall Johns being the rich kids and Peterhouse being the gay men (not sure if that's for an actual reason or just the obvious pun), plus a couple of others that I can't remember off the top of my head.

It's mentioned, just not dwelled on; it comes up once in passing in each of the first two books:

Sorcerer's Stone:

They piled so much homework on them that the Easter holidays weren't nearly as much fun as the Christmas ones.

Chamber of Secrets:

The second years were given something new to think about during the Easter holidays.

And so on. It's just that I don't think anything interesting ever happens during them.
