
Comment author: Velorien 16 August 2013 02:07:52PM 14 points [-]

Hogwarts is the entire British magical education system (with the exception of some private tutors).

Do we know this for a fact?

Objections:

  • Going to Hogwarts is prestigious, meaning there must be lower-status options available.

  • Hogwarts regularly hires apparently British replacement teachers, most of them with at least the appearance of educational experience. It is improbable that said experience comes exclusively from abroad or from being a private tutor.

  • There are too few pupils at Hogwarts to account for the entire underage wizarding population, given the size of the overall wizarding population and assuming the majority of wizards' children are also wizards (not to mention having to factor in Muggleborns).

  • It seems improbable that the booming school equipment business of Diagon Alley survives on one school's worth of customers, especially if most of them only shop once a year.

  • If most of the population of magical Britain have been through the same school, we would expect an extremely high degree of social interconnectedness, with most people knowing everyone of the same age at least by sight. There's no evidence of this.

On the other hand,

  • It is implied that letters coming on one's 11th birthday can only come from Hogwarts.

  • If one is expelled from Hogwarts, one is forbidden from practising further magic altogether.

  • No other British schools, or pupils or graduates thereof, are ever mentioned in canon that I can remember.

Comment author: Jadagul 16 August 2013 10:40:12PM 13 points [-]

Canon is fairly clear that Hogwarts is the only game in town for magical Britain. That also leads to the glaring inconsistencies in scale you just pointed out. (Rowling originally said that Hogwarts had about 700 students, and then fans started pointing out that that was wildly inconsistent with the school as she described it. And even that figure is too small to make things really work.)
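As a rough illustration of that scale problem, here is a back-of-envelope sketch. The per-year class size, lifespan, and head-count figures are assumptions (common fan estimates), not canon:

```python
# Back-of-envelope check on Hogwarts' size, using fan-estimated inputs.
# Assumption: Harry's year has ~8 named Gryffindors, suggesting ~40 students per year across 4 houses.
students_per_year = 40          # assumed from named characters in Harry's year
school_years = 7

implied_school_size = students_per_year * school_years
print(implied_school_size)      # ~280 students, well below Rowling's stated ~700

# Assumption: wizards live roughly 100 years, so ~100 year-cohorts of ~40 people each.
assumed_lifespan = 100
implied_population = students_per_year * assumed_lifespan
print(implied_population)       # ~4,000 wizards in Britain -- hard to reconcile with a Ministry,
                                # St Mungo's, professional Quidditch leagues, Diagon Alley, etc.
```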

But the evidence, from HP7 (page 210 of my first-run American hardback copy):

Lupin is talking about Voldemort's takeover of Wizarding society, to Harry and the others.

"Attendance is now compulsory for every young witch and wizard," he replied. "That was announced yesterday. It's a change, because it was never obligatory before. Of course, nearly every witch and wizard in Britain has been educated at Hogwarts, but their parents had the right to teach them at home or send them abroad if they preferred. This way, Voldemort will have the whole Wizarding population under his eye from a young age."

"Most wizards" in Britain were educated at Hogwarts, and the exceptions were homeschooled or sent abroad. It's really hard to read that to imply that there's another British wizarding school anywhere.

Comment author: DanArmak 16 August 2013 08:34:28AM *  3 points [-]

That's a good point. I'd also add that Wikipedia says:

A total of 174,100 tonnes of gold have been mined in human history, according to GFMS as of 2012.

But still, if just a few wizards stole appreciable fractions of the Muggle gold vaults, they would be individually very rich. The same 1000 tons of gold would be a 200 million Galleon fortune (ETA: fixed calculation) if owned by one wizard. Therefore, the question is how much gold is concentrated in one place (already mined) and available for stealing.

Wikipedia provides a list of officially reported gold holdings by country. The top few are: US, 8133 tons; Germany, 3391 tons; IMF, 2814 tons; Italy, 2451 tons; France, 2435 tons.
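For concreteness, here is a minimal sketch of that arithmetic, using only the figures quoted above. The exchange rate of roughly 5 grams of gold per Galleon is simply what's implied by "1000 tons ≈ 200 million Galleons", not a canon figure, and the holdings are the Wikipedia numbers cited in these comments:

```python
# Convert gold holdings into Galleons, using the exchange rate implied above:
# 1000 metric tons ~ 200 million Galleons, i.e. about 5 g of gold per Galleon.
GRAMS_PER_TON = 1_000_000
grams_per_galleon = (1000 * GRAMS_PER_TON) / 200_000_000   # = 5 g/Galleon (implied, not canon)

holdings_tons = {
    "US reserves": 8133,
    "Germany": 3391,
    "IMF": 2814,
    "Italy": 2451,
    "France": 2435,
    "All gold ever mined": 174_100,
}

for owner, tons in holdings_tons.items():
    galleons = tons * GRAMS_PER_TON / grams_per_galleon
    print(f"{owner}: ~{galleons / 1e9:.1f} billion Galleons")
# US reserves alone come out to ~1.6 billion Galleons, dwarfing any known wizarding fortune.
```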

But where is the gold physically kept? Well, Wikipedia says that Fort Knox holds 4578 tons of gold. In any case, a wizard could Apparate to people, ask them where most of the gold is (Legilimency/Veritaserum/Imperius), use a Memory Charm to erase the few minutes of the encounter, and Apparate away. If a person doesn't know where the gold is, they can tell you who does. Start with someone like a bank CEO, who is unlikely to have magical protection (unlike heads of state), and work your way up - in a day or two you'll find the gold.

How do we know this hasn't actually happened? The gold in the bank vaults may not actually be there. But the wizarding economy has no known history of occasional sudden billionaires. Lucius has probably never even heard of fortunes of more than a few million Galleons.

Comment author: Jadagul 16 August 2013 09:28:03AM *  3 points [-]

There's another big pile of gold, about 7,000 tonnes, in the New York Fed--that's actually where a lot of foreign countries keep a large fraction of their gold supply. It's open to tourists and you can walk in and look at the big stacks of gold bars. It does have fairly impressive security, but that security could plausibly be defeated by a reasonably competent wizard.

Comment author: JTHM 15 August 2013 01:17:39PM 0 points [-]

Canon contradicts you: in book four, the house-elf Winky was able to conjure the Dark Mark using a wand, despite presumably never having wielded one before.

Comment author: Jadagul 15 August 2013 02:03:16PM 11 points [-]

I believe this is a misreading; Winky was there, but the Dark Mark was cast by Barty Crouch Jr. From the climax of Book 4, towards the end of Chapter 35:

I wanted to attack them for their disloyalty to my master. My father had left the tent; he had gone to free the Muggles. Winky was afraid to see me so angry. She used her own brand of magic to bind me to her. She pulled me from the tent, pulled me into the forest, away from the Death Eaters. I tried to hold her back. I wanted to return to the campsite. I wanted to show those Death Eaters what loyalty to the Dark Lord meant, and to punish them for their lack of it. I used the stolen wand to cast the Dark Mark into the sky.

Comment author: hairyfigment 24 April 2012 05:35:21PM -1 points [-]

In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these all are attributable to computational error.

Do you know anyone who never makes computational errors? If 'mistakes' happen at all, we would expect to see them in cases involving tribal loyalties. See von Neumann and those who trusted him on hidden variables.

Comment author: Jadagul 25 April 2012 02:37:40AM 0 points [-]

The claim wasn't that it happens too often to attribute to computational error, but that the types of differences seem unlikely to stem from computational errors.

Comment author: [deleted] 22 April 2012 10:14:42PM *  2 points [-]

"We have nothing to argue about [on this subject], we are only different optimization processes."

Calling something a terminal value is the default behavior when humans look for a justification and don't find anything. This happens because we perceive little of our own mental processes and in the absence of that information we form post-hoc rationalizations. In short, we know very little about our own values. But that lack of retrieved / constructed justification doesn't mean it's impossible to unpack moral intuitions into algorithms so that we can more fully debate which factors we recognize and find relevant.

A big sticking point between me and my friends is that I think getting angry is in general deeply morally blameworthy, whereas many of them believe that failing to get angry at outrageous things is morally blameworthy

Your friends can understand why humans have positive personality descriptors for people who don't get angry in various situations: descriptors like reflective, charming, polite, solemn, respecting, humble, tranquil, agreeable, open-minded, approachable, cooperative, curious, hospitable, sensitive, sympathetic, trusting, merciful, gracious.

You can understand why we have positive personality descriptors for people who get angry in various situations: descriptors like impartial, loyal, decent, passionate, and courageous, and traits like boldness, leadership, strength, resilience, candor, vigilance, independence, reputation, and dignity.

Both you and your friends can see how either group could pattern match their behavioral bias as being friendly, supportive, mature, disciplined, or prudent.

These are not deep variations; they are differences in the relative strength of reliance on the exact same intuitions.

You can't argue someone into changing their terminal values, but you can often persuade them to do so through literature and emotional appeal, largely due to psychological unity. I claim that this is one of the important roles that story-telling plays: it focuses and unifies our moralities through more-or-less arational means. But this isn't an argument per se and has no particular reason one would expect it to converge to a particular outcome--among other things, the result is highly contingent on what talented artists happen to believe.

Stories strengthen our associations of different emotions in response to analogous situations, which doesn't have much of a converging effect (Edit: unless, you know, it's something like the Bible, which a billion people read. That certainly pushes humanity in some direction), but they can also create associations to moral evaluative machinery that previously wasn't doing its job. There's nothing arational about this: neurons firing in the inferior frontal gyrus are evidence relevant to a certain useful categorizing inference, "things which are sentient".

Because generally, "morality" is defined more or less to be a consideration that would/should be compelling to all sufficiently complex optimization processes

I'm not in a mood to argue definitions, but "optimization process" is a very new concept, so I'd lean toward "less".

In response to comment by [deleted] on Stupid Questions Open Thread Round 2
Comment author: Jadagul 22 April 2012 11:22:26PM 1 point [-]

You're...very certain of what I understand. And of the implications of that understanding.

More generally, you're correct that people don't have a lot of direct access to their moral intuitions. But I don't actually see any evidence for the proposition that they should converge sufficiently, other than a lot of handwaving about the fundamental psychological similarity of humankind, which is more-or-less true but probably not true enough. In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these all are attributable to computational error.

I'm not disputing that we share a lot of mental circuitry, or that we can basically understand each other. But we can understand without agreeing, and be similar without being the same.

As for the last bit--I don't want to argue definitions either. It's a stupid pastime. But to the extent Eliezer claims not to be a meta-ethical relativist he's doing it purely through a definitional argument.

Comment author: [deleted] 22 April 2012 01:23:32AM *  3 points [-]

Just getting citations out of the way, Eliezer talked about the repugnant conclusion here and here. He argues for shared W in Psychological Unity and Moral Disagreement. Kaj Sotala wrote a notable reply to Psychological Unity, Psychological Diversity. Finally Coherent Extrapolated Volition is all about finding a way to unfold present-explicit-moralities into that shared-should that he believes in, so I'd expect to see some arguments there.

Now, doesn't the state of the world today suggest that human explicit-moralities are close enough that we can live together in a Hubble volume without too many wars, without a thousand broken coalitions of support over sides of irreconcilable differences, without blowing ourselves up because the universe would be better with no life than with the evil monsters in that tribe on the other side of the river?

Human concepts are similar enough that we can talk to each other. Human aesthetics are similar enough that there's a billion dollar video game industry. Human emotions are similar enough that Macbeth is still being produced four hundred years later on the other side of the globe. We have the same anatomical and functional regions in our brains. Parents everywhere use baby talk. On all six populated continents there are countries in which more than half of the population identifies with the Christian religions.

For all those similarities, is humanity really going to be split over the Repugnant Conclusion? Even if the Repugnant Conclusion is more of a challenge than muscling past a few inductive biases (scope insensitivity and the attribute substitution heuristic are also universal), I think we have some decent prospect for a future in which you don't have to kill me. Whatever will help us to get to that future, that's what I'm looking for when I say "right". No matter how small our shared values are once we've felt the weight of relevant moral arguments, that's what we need to find.

In response to comment by [deleted] on Stupid Questions Open Thread Round 2
Comment author: Jadagul 22 April 2012 06:19:17AM 1 point [-]

This comment may be a little scattered; I apologize. (In particular, much of this discussion is beside the point of my original claim that Eliezer really is a meta-ethical relativist, about which see my last paragraph).

I certainly don't think we have to escalate to violence. But I do think there are subjects on which we might never come to agreement even given arbitrary time and self-improvement and processing power. Some of these are minor judgments; some are more important. But they're very real.

In a number of places Eliezer commented that he's not too worried about, say, two systems morality1 and morality2 that differ in the third decimal place. I think it's actually really interesting when they differ in the third decimal place; it's probably not important to the project of designing an AI but I don't find that project terribly interesting so that doesn't bother me.

But I'm also more willing to say to someone, "We have nothing to argue about [on this subject], we are only different optimization processes." With most of my friends I really do have to say this, as far as I can tell, on at least one subject.

However, I really truly don't think this is as all-or-nothing as you or Eliezer seem to paint it. First, because while morality may be a compact algorithm relative to its output, it can still be pretty big, and disagreeing seriously about one component doesn't mean you don't agree about the other several hundred. (A big sticking point between me and my friends is that I think getting angry is in general deeply morally blameworthy, whereas many of them believe that failing to get angry at outrageous things is morally blameworthy; and as far as I can tell this is more or less irreducible in the specification for all of us). But I can still talk to these people and have rewarding conversations on other subjects.

Second, because I realize there are other means of persuasion than argument. You can't argue someone into changing their terminal values, but you can often persuade them to do so through literature and emotional appeal, largely due to psychological unity. I claim that this is one of the important roles that story-telling plays: it focuses and unifies our moralities through more-or-less arational means. But this isn't an argument per se and has no particular reason one would expect it to converge to a particular outcome--among other things, the result is highly contingent on what talented artists happen to believe. (See Rorty's Contingency, Irony, and Solidarity for discussion of this).

Humans have a lot of psychological similarity. They also have some very interesting and deep psychological variation (see e.g. Haidt's work on the five moral systems). And it's actually useful to a lot of societies to have variation in moral systems--it's really useful to have some altruistic punishers, but not really for everyone to be an altruistic punisher.

But really, this is beside the point of the original question, whether Eliezer is really a meta-ethical relativist, because the limit of this sequence which he claims converges isn't what anyone else is talking about when they say "morality". Because generally, "morality" is defined more or less to be a consideration that would/should be compelling to all sufficiently complex optimization processes. Eliezer clearly doesn't believe any such thing exists. And he's right.

Comment author: [deleted] 21 April 2012 03:46:24PM *  2 points [-]

But importantly, when Eliezer says something is "good" he doesn't mean quite the same thing I mean when I say something is "good." We actually speak slightly different languages in which the word "good" has slightly different meaning

In http://lesswrong.com/lw/t0/abstracted_idealized_dynamics/mgr, user steven wrote (and Eliezer agreed): "When X (an agent) judges that Y (another agent) should Z (take some action, make some decision), X is judging that Z is the solution to the problem W (perhaps increasing a world's measure under some optimization criterion), where W is a rigid designator for the problem structure implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments. (Or at least X is asserting that it's shared.) Due to the nature of W, becoming informed will cause X and Y to get closer to the solution of W, but wanting-it-when-informed is not what makes that solution moral."

This means that, even though people might presently have different things in mind when they say something is "good", Eliezer does not regard their/our/his present ideas as either the meaning of their-form-of-good or his-form-of-good. The meaning of good is not "the things someone/anyone personally, presently finds morally compelling", but something like "the fixed facts that are found but not defined by clarifying the result of applying the shared human evaluative cognitive machinery to a wide variety of situations under reflectively ideal conditions of information." That is to say, Eliezer thinks, not only that moral questions are well defined, "objective", in a realist or cognitivist way, but that our present explicit-moralities all have a single, fixed, external referent which is constructively revealed via the moral computations that weigh our many criteria.

I haven't finished reading CEV, but here's a quote from Levels of Organization that seems relevant: "The target matter of Artificial Intelligence is not the surface variation that makes one human slightly smarter than another human, but rather the vast store of complexity that separates a human from an amoeba". Similarly, the target matter of inferences that figure out the content of morality is not the surface variation of moral intuitions and beliefs under partial information which result in moral disagreements, but the vast store of neural complexity that allows humans to disagree at all, rather than merely be asking different questions.

So the meaning of presently-acted-upon-and-explicitly-stated-rightness in your language, and the meaning of it in my language might be different, but one of the many points of the meta-ethics sequence is that the expanded-enlightened-mature-unfolding of those present usages gives us a single, shared, expanded-meaning in both our languages.

If you still think that moral relativism is a good way to convey that in daily language, fine. It seems the most charitable way in which he could be interpreted as a relativist is if "good" is always in quotes, to denote the present meaning a person attaches to the word. He is a "moral" relativist, and a moral realist/cognitivist/constructivist.

In response to comment by [deleted] on Stupid Questions Open Thread Round 2
Comment author: Jadagul 21 April 2012 07:27:48PM 1 point [-]

Hm, that sounds plausible, especially your last paragraph. I think my problem is that I don't see any reason to suspect that the expanded-enlightened-mature-unfolding of our present usages will converge in the way Eliezer wants to use as a definition. See for instance the "repugnant conclusion" debate: people like Peter Singer and Robin Hanson think the repugnant conclusion actually sounds pretty awesome, while Derek Parfit thinks it's basically a reductio on aggregate utilitarianism as a philosophy, and I'm pretty sure Eliezer agrees with him and has more or less explicitly identified it as a failure mode of AI development. I doubt these are beliefs that would really converge with more information and reflection.

Or in steven's formulation, I suspect that relatively few agents actually have Ws in common; his definition presupposes that there's a problem structure "implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments". I'm arguing that many agents have sufficiently different implicit problem structures that, for instance, by that definition Eliezer and Robin Hanson can't really make "should" statements to each other.

Comment author: [deleted] 21 April 2012 04:01:15AM *  3 points [-]

"I am not a moral relativist." http://lesswrong.com/lw/t9/no_license_to_be_human/

"I am not a meta-ethical relativist" http://lesswrong.com/lw/t3/the_bedrock_of_morality_arbitrary/mj4

"what is right is a huge computational property—an abstract computation—not tied to the state of anyone's brain, including your own brain." http://lesswrong.com/lw/sm/the_meaning_of_right/

In response to comment by [deleted] on Stupid Questions Open Thread Round 2
Comment author: Jadagul 21 April 2012 09:43:39AM 9 points [-]

I'm pretty sure Eliezer is actually wrong about whether he's a meta-ethical relativist, mainly because he's using words in a slightly different way from the way meta-ethical relativists use them. Or rather, he thinks that MER is using one specific word in a way that isn't really kosher. (A statement which I think he's basically correct about, but it's a purely semantic quibble and so a stupid thing to argue about.)

Basically, Eliezer is arguing that when he says something is "good" that's a factual claim with factual content. And he's right; he means something specific-although-hard-to-compute by that sentence. And similarly, when I say something is "good" that's another factual claim with factual content, whose truth is at least in theory computable.

But importantly, when Eliezer says something is "good" he doesn't mean quite the same thing I mean when I say something is "good." We actually speak slightly different languages in which the word "good" has slightly different meanings. Meta-Ethical Relativism, at least as summarized by Wikipedia, describes this fact with the sentence "terms such as "good," "bad," "right" and "wrong" do not stand subject to universal truth conditions at all." Eliezer doesn't like that because in each speaker's language, terms like "good" stand subject to universal truth conditions. But each speaker speaks a slightly different language in which the word represented by the string "good" is subject to a slightly different set of truth conditions.

For an analogy: I apparently consistently define "blonde" differently from almost everyone I know. But it has an actual definition. When I call someone "blonde" I know what I mean, and people who know me well know what I mean. But it's a different thing from what almost everyone else means when they say "blonde." (I don't know why I can't fix this; I think my color perception is kinda screwed up). An MER guy would say that whether someone is "blonde" isn't objectively true or false because what it means varies from speaker to speaker. Eliezer would say that "blonde" has a meaning in my language and a different meaning in my friends' language, but in either language whether a person is "blonde" is in fact an objective fact.

And, you know, he's right. But we're not very good at discussing phenomena where two different people speak the same language except one or two words have different meanings; it's actually a thing that's hard to talk about. So in practice, "'good' doesn't have an objective definition" conveys my meaning more accurately to the average listener than "'good' has one objective meaning in my language and a different objective meaning in your language."

Comment author: Oscar_Cunningham 25 September 2011 10:29:53PM 0 points [-]

I've done one year at Trinity as an undergraduate, and I've already heard many anti-St John's references.

Comment author: Jadagul 26 September 2011 10:59:45AM 1 point [-]

I was a grad student at Churchill, and we mostly ignored such things, but my girlfriend was an undergrad and felt compelled to educate me. I recall Johns being the rich kids, Peterhouse being the gay men (not sure if that's for an actual reason or just the obvious pun), and a couple of others that I can't remember off the top of my head.

Comment author: Xachariah 18 September 2011 04:00:38AM *  0 points [-]

In canon, there was no Easter/Spring break mentioned, merely Christmas and Summer. It's a shame, because the interaction between Harry and his parents while his parents visit Hogwarts would be amazing. I can't wait for summer vacation to come.

Edit: Apparently I stand corrected. Good to know.

Comment author: Jadagul 18 September 2011 09:00:48PM 9 points [-]

It's mentioned, just not dwelled on. It's mentioned once in passing in each of the first two books:

Sorcerer's Stone:

They piled so much homework on them that the Easter holidays weren't nearly as much fun as the Christmas ones.

Chamber of Secrets:

The second years were given something new to think about during the Easter holidays.

And so on. It's just that I don't think anything interesting ever happens during them.
