Harry doesn't exactly strike me as psychologically prepared for this particular revelation.
He's quite prepared in a Hero's Journey sense, though. In Harry's own mind, he has lost his mentor. Thus, he is now free to be a mentor. And what better way to grow, as a Hero and über-rationalist, than to teach others to do what you do?
Of course, Harry would say that he's already doing that with Draco—but in the same way that he usually holds back his near-mode instrumental-rationalist dark side, he's holding back the kind of insights that Draco would need to think the way Harry thinks; Harry is training Draco to be a scientist, but not an instrumental rationalist, and therefore, in the context of the story, not a Hero. (To put it another way: Draco will never one-box. He's a virtue-ethicist who is more concerned with "rationality" as just another virtue than with winning per se.)
Mentoring Hermione would be an entirely different matter: he would basically have to instill a dark side into her. Quirrell taught Harry how to lose -- Harry would have to teach Hermione how to win.
If Eliezer has planned MoR as a five-act heroic fantasy, it will probably go like this: in a five-act form, acts 4 and 5 usually mirror the Hero's character development from acts 2 and 3 in another character, re-examining the (developed, and now mostly stagnant) Hero's growth and revealing by juxtaposition what casting that particular character as Hero brought to the journey.
It seems more likely to be a three-act form at this point, though, with Azkaban as the central act-2 ordeal. That's not to say the story is more than half over already; Harry has just found his motivation for acting instead of reacting (to change the magical world such that Azkaban is no longer a part of it).
Very small children understand "real" to be "what's inside" -- what's hidden, essential. Sometimes literally inside: ask toddlers "If you took a dog and gave it the bones and insides of a cat, would it still be a dog?" and they say "no"; ask "If you took a dog and made it look like a cat on the outside, would it still be a dog?" and they say "yes." (I'm getting this from Paul Bloom's "How Pleasure Works.") Young children are essentialist about gender as well -- they assume more differences between the sexes than actually exist, not fewer.
What psychological evidence I've seen suggests that we're in some way wired to see categories as real. "Natural kinds." To think that there's a real difference "out there" between dog and not-dog, not just a useful bookkeeping convention. I'm inclined to believe that Anna's reasoning about "atoms are real" and Eliezer's reasoning about categories actually make more sense than essentialism -- but I suspect that this kind of question-dissolving is not the standard, evolution-provided brain pathway.
Could you give me an example of something that is real?
Whatever substrate supports the computation inscribing your consciousness would be necessarily real, under whatever sense the word "real" could possibly have useful meaning. ("I think; thinking is an algorithm; therefore something is, in order to execute that algorithm.")
Interestingly, proposing a Tegmark multiverse makes the deepest substrate of consciousness "mathematics."
We're built to play games. Until we hit the formal operational stage (at puberty), we basically have a bunch of individual, contextual constraint solvers operating mostly independently in our minds, one for each "game" we understand how to play -- these can be literal games, or things like status interactions or hunting. Basically, each one is a separately-trained decision-theoretic agent.
The formal operational psychological stage signals a shift where these agents become unified under a single, more general constraint-solving mechanism. We begin to see the meta-rules that apply across all games: things like mathematical laws, logical principles, etc. This generalized solver is expensive to build, and expensive to run (minds are almost never inside it if they can help it, rather staying inside the constraint-solving modes relevant to particular games), but rewards use, as anyone here can attest.
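Here's a toy sketch of that architecture (entirely my own gloss on the comment above; every name and heuristic in it is made up for illustration):

```python
# Toy model: a mind as a bag of independently-trained, game-specific solvers,
# later unified under one expensive general solver. All names are hypothetical.

from typing import Callable, Dict

# Each "game" gets its own cheap, specialized policy, trained in isolation.
game_solvers: Dict[str, Callable[[str], str]] = {
    "chess":   lambda state: "develop pieces",      # context-specific heuristic
    "status":  lambda state: "mirror the speaker",  # another isolated heuristic
    "hunting": lambda state: "circle downwind",
}

def general_solver(game: str, state: str) -> str:
    """The formal-operational move: one meta-level solver that reasons from
    the rules shared across games (logic, math) instead of per-game habit.
    Expensive to invoke, so the mind falls back to a cached specialist first."""
    specialist = game_solvers.get(game)
    if specialist is not None:
        return specialist(state)  # cheap, habitual path
    # Slow path: derive an answer from general principles (stubbed here).
    return f"reason from first principles about {game!r}"

print(general_solver("chess", "opening"))  # cached specialist fires
print(general_solver("poker", "river"))    # no specialist -> costly meta-reasoning
```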
When we are operating using this general solver, and we process an assertion that would suggest that we must restructure the general solver itself, we react in two ways:
Initially, we dread the idea. This is a shade of the same feeling you'd get if your significant other said, very much out of the blue and in very much the sort of tone associated with such things, "we need to talk." Your brain is negatively reinforcing, all at once, all the pathways that led you here, as far back as it can trace the causal chain. Your mind reels, thinking "oh crap, I should have studied [1 day ago], I shouldn't have gone out partying [1 week ago], I should have asked friends to form a study group [at the beginning of the semester], I never should have come to this school in the first place... why did I choose this damn major?"
Second, we alienate ourselves from the source of the assertion. We don't want to restructure; not only is it expensive, but our general solver was created as a product of the purified intersection of all experiments that led to success in all played games. That is to say, it is, without exception, the set of the most well-trusted algorithms and highly-useful abstractions in your brain. It's basically read-only. So, like an animal lashing out when something tries to touch its wounds, our minds lash out to stop the assertion from pressing too hard against something that would be both expensive and fruitless to re-evaluate. We turn down the level of identification/trust we have with whoever or whatever made the assertion, until they no longer need to be taken seriously. Serious breaches can cause us to think of the speaker as having a completely alien mental process—this is what some people say of the experience of speaking with sociopathic serial killers, for example.
Of course, the mind can only implement the second "barrier" step when the assertion is associated with something that can vary on trust, like a person or a TV program. If it comes directly as evidence from the environment, only the first reaction remains, and it intensifies as you internalize the idea that you may just have to sit down and throw out your mind.
Gender/sexuality. People really want essences here. Some people are still stuck on "male essence / female essence", some manage to get as far as "attracted to male essence / female essence". Reductionism could (dis)solve this issue.
I would say that it is not that we want essences in our sexuality, but that gender and sexuality are essentialist by nature: the sexual drive is built on top of the parts of our brains that essentialize/abstract/encapsulate, and so reducing the concept would involve modifying the human utility function to desire the parts, rather than the pretended whole.
Or, to put it another way: a heterosexual blegg is not 50% attracted to something with 50% blegg features and 50% rube features; it is attracted only to pure rubes, and the closer something comes to being a rube without exactly being one, the less attractive it is. This is basically the Uncanny Valley at work: some of our drives want discrete gestalts, and the harder they have to work to construct them, the less favorably they'll evaluate the things they're constructing them from.
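If it helps, here's the shape of that claim as a toy function (the numbers and thresholds are purely my own illustration, not anything measured):

```python
# Toy curve for the claim above: attraction to a blegg/rube mix is not linear
# in "rube-ness". Near-pure mixes that still miss the gestalt can score
# *below* obviously mixed ones -- the Uncanny Valley shape.

def linear_attraction(rubeness: float) -> float:
    """What a '50% features -> 50% attraction' model would predict."""
    return rubeness

def gestalt_attraction(rubeness: float) -> float:
    """Valley-shaped: high only when the gestalt snaps into place (~1.0),
    and penalized when it almost-but-not-quite does."""
    if rubeness >= 0.99:      # gestalt completes: full response
        return 1.0
    if rubeness > 0.8:        # almost-a-rube: the valley
        return 0.1
    return 0.3 * rubeness     # plainly mixed: mild, not aversive

for r in (0.5, 0.9, 1.0):
    print(f"rubeness={r:.2f}  linear={linear_attraction(r):.2f}  "
          f"gestalt={gestalt_attraction(r):.2f}")
```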
#1 is kind of clever, pointing out a spelling error.
You know the thing that horrified me? When I realized that my "wizened" snark was my most upvoted contribution to this site. All I did was point out the intersection of a typo and an amusing mental image!
You're totally right, though, that I should have found a politer way to do it -- focus on the mental image instead of status-seeking sarcasm. Indeed, that's probably the heart of politeness -- wording things so that they don't threaten the other person's status.
It's pretty common, though. You wanted the other people reading to think of you as clever, and considered that to be "worth" making the author feel a bit bad. This is what the proxy-value of karma, as implemented by the Reddit-codebase discussion engine of this site, reflects: the author can only downvote once (and even then they are discouraged from doing so, unlike with, say, a Whuffie system), but every amused member of the audience can add an upvote.
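To put toy numbers on that asymmetry (my own back-of-the-envelope model, not the actual scoring logic of either system):

```python
# One target can contribute at most one downvote, while every amused bystander
# can contribute an upvote -- so snark at one person's expense is usually
# net-positive under this proxy.

def net_karma(amused_readers: int, target_downvotes: int = 1) -> int:
    # Each reader votes at most once; the target is a single voter.
    return amused_readers - target_downvotes

print(net_karma(amused_readers=20))  # 19: the snark "pays"
print(net_karma(amused_readers=0))   # -1: it only costs when nobody laughs
```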
Thinking back, I've had many discussions on the Internet that devolved into arguments where, although my interlocutor was trying to convince me of something, I had given up on convincing them of anything in particular, and was instead trying to convince any third parties reading the post that the other person was not to be trusted and that their advice was dangerous -- at the expense of making myself seem even less trustworthy to the person I was nominally supposed to be convincing. This is what public fora do.
Being smart seems to make you unpopular.
I've been told by my Korean college students that in Korean high schools the students with the highest grades are usually the most popular.
Koreans have an extremely strong aversion to correcting the errors of others -- to such an extent that a Korean airliner crashed because the co-pilot, who knew that his Captain had made an error which, if uncorrected, would cause the plane to crash, did no more than hint to the Captain that he had made an error. (Source: Outliers: The Story of Success by Malcolm Gladwell.)
The errors of others, or the errors of those of superior social ranking? Do Korean teachers refrain from correcting students?
The Kleptomaniac Hero is very common in video games. If the game lets you take it, you probably should - and you can take a lot of stuff from random people's houses and such, while the people who actually own it stand there doing nothing.
This is an example of Conservation of Detail, which is just another way to say that the contrapositive of your statement is true: if you don't need to take something in a game, then the designer won't have bothered to make it take-able (or even to include it).
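Spelled out (my formalization, not the parent's), with T(x) = "the designer made x take-able" and N(x) = "taking x matters to the game", the trope and Conservation of Detail are the same implication read in both directions:

```latex
% T(x): the designer made x take-able; N(x): taking x matters to the game.
% "If the game lets you take it, you should" and "if it doesn't matter,
% it won't be take-able" are logically equivalent:
\[
  \bigl(T(x) \rightarrow N(x)\bigr)
  \;\Longleftrightarrow\;
  \bigl(\lnot N(x) \rightarrow \lnot T(x)\bigr)
\]
```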
I always assume that there's all sorts of stuff lying around in an RPG house that you can't see, because your viewpoint character doesn't bother to take notice of it. It might just be because it's irrelevant, but it might also be for ethical reasons: your viewpoint character only "reports" things to you that his system of belief allows him to act upon.
Simple solution: Build an FAI to optimize the universe to your own utility function instead of humanity's average utility function. They will be nearly the same thing anyway (remember, you were tempted to have the FAI use the average human utility function instead, so clearly, you sincerely care about other people's wishes). And in weird situations in which the two are radically different (like this one), your own utility function more closely tracks the intended purpose of an FAI.
This seems to track with Eliezer's fictional "conspiracies of knowledge": if we don't want our politicians to get their hands on our nuclear weapons (or the theory behind their operation), then why should they be allowed a say in what our FAI thinks?
I hope it plays out like this, at least in part. The bits early in the book with Harry teaching Draco were fun.
Draco may have already had the instrumental rationality part; certainly he was on a higher level instrumentally than epistemically. He had already had tutors in influencing people, he didn't have an akrasia problem, and he grew up in a culture of "find out what you want and go get it". Also, did you mean "Draco will never one-box?"
Er, yes, edited.