All of Jadagul's Comments + Replies

Jadagul

Canon is fairly clear that Hogwarts is the only game in Britain. It also leads to the glaring inconsistencies in scale you just pointed out. (Rowling originally said that Hogwarts had about 700 students, and then fans started pointing out that that was wildly inconsistent with the school as she described it. And even that's too small to make things really work.)

But the evidence, from HP7 (page 210 of my first-run American hardback copy):

Lupin is talking to Harry and the others about Voldemort's takeover of Wizarding society.

"Attendance is n

... (read more)
Jadagul

There's another big pile of gold, about 7,000 tonnes, in the New York Fed--that's actually where a lot of foreign countries keep a large fraction of their gold supply. It's open to tourists and you can walk in and look at the big stacks of gold bars. It does have fairly impressive security, but that security could plausibly be defeated by a reasonably competent wizard.

DanArmak

More to the point, whatever security Muggle vaults had 100 or 200 years ago definitely wouldn't have stood up to wizards. (Their powers wane by the year, while ours wax.) Since all the Muggle gold didn't vanish long ago, there must be a different explanation than Muggle vault security.

Jadagul

I believe this is a misreading; Winky was there, but the Dark Mark was cast by Barty Crouch Jr. From the climax of Book 4, towards the end of Chapter 35:

I wanted to attack them for their disloyalty to my master. My father had left the tent; he had gone to free the Muggles. Winky was afraid to see me so angry. She used her own brand of magic to bind me to her. She pulled me from the tent, pulled me into the forest, away from the Death Eaters. I tried to hold her back. I wanted to return to the campsite. I wanted to show those Death Eaters what loyalty to the Dark Lord meant, and to punish them for their lack of it. I used the stolen wand to cast the Dark Mark into the sky.

TobyBartels
This still shows us that people found it plausible that Winky cast a spell using a wand. (Of course, these were far from disinterested people, plus people are stupider in canon.)
JTHM
You are entirely correct. I mis-remembered the events of book four.
Jadagul

The claim wasn't that it happens too often to attribute to computation error, but that the types of differences seem unlikely to stem from computational errors.

Jadagul

You're...very certain of what I understand. And of the implications of that understanding.

More generally, you're correct that people don't have a lot of direct access to their moral intuitions. But I don't actually see any evidence for the proposition that they should converge sufficiently, other than a lot of handwaving about the fundamental psychological similarity of humankind, which is more-or-less true but probably not true enough. In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible th... (read more)

hairyfigment
Do you know anyone who never makes computational errors? If 'mistakes' happen at all, we would expect to see them in cases involving tribal loyalties. See von Neumann and those who trusted him on hidden variables.
endoself
He does intend to convey something real and nontrivial (well, some people might find it trivial, but enough people don't that it is important to be explicit) by saying that he is not a meta-ethical realist. The basic idea is that, while his brain is the causal reason for him wanting to do certain things, it is not referenced in the abstract computation that defines what is right. To use a metaphor from the meta-ethics sequence, it is a fact about a calculator that it is computing 1234 * 5678, but the fact that 1234 * 5678 = 7,006,652 is not a fact about that calculator.

This distinguishes him from some types of relativism, which I would guess to be the most common types. I am unsure whether people understand that he is trying to draw this distinction and still think that it is misleading to say that he is not a moral relativist, or whether people are confused/have a different explanation for why he does not identify as a relativist.
Jadagul

This comment may be a little scattered; I apologize. (In particular, much of this discussion is beside the point of my original claim that Eliezer really is a meta-ethical relativist, about which see my last paragraph).

I certainly don't think we have to escalate to violence. But I do think there are subjects on which we might never come to agreement even given arbitrary time and self-improvement and processing power. Some of these are minor judgments; some are more important. But they're very real.

In a number of places Eliezer commented that he's not to... (read more)

[anonymous]
Calling something a terminal value is the default behavior when humans look for a justification and don't find anything. This happens because we perceive little of our own mental processes, and in the absence of that information we form post-hoc rationalizations. In short, we know very little about our own values. But that lack of retrieved / constructed justification doesn't mean it's impossible to unpack moral intuitions into algorithms so that we can more fully debate which factors we recognize and find relevant.

Your friends can understand why humans have positive personality descriptors for people who don't get angry in various situations: descriptors like reflective, charming, polite, solemn, respecting, humble, tranquil, agreeable, open-minded, approachable, cooperative, curious, hospitable, sensitive, sympathetic, trusting, merciful, and gracious. You can understand why we have positive personality descriptors for people who do get angry in various situations: descriptors like impartial, loyal, decent, passionate, courageous, bold, strong, resilient, candid, vigilant, independent, reputable, and dignified. Both you and your friends can see how either group could pattern-match their behavioral bias as being friendly, supportive, mature, disciplined, or prudent. These are not deep variations; they are relative strengths of reliance on the exact same intuitions.

Stories strengthen our associations of different emotions in response to analogous situations, which doesn't have much of a converging effect (Edit: unless, you know, it's something like the Bible that a billion people read. That certainly pushes humanity in some direction), but they can also create associations to moral evaluative machinery that previously wasn't doing its job. There's nothing arational about this: neurons firing in the inferior frontal gyrus are evidence relevant to a certain useful categorizing inference, "things which are sentient". I'm not in a mood to argue defin... (read more)
Jadagul

Hm, that sounds plausible, especially your last paragraph. I think my problem is that I don't see any reason to suspect that the expanded-enlightened-mature-unfolding of our present usages will converge in the way Eliezer wants to use as a definition. See for instance the "repugnant conclusion" debate; people like Peter Singer and Robin Hanson think the repugnant conclusion actually sounds pretty awesome, while Derek Parfit thinks it's basically a reductio on aggregate utilitarianism as a philosophy and I'm pretty sure Eliezer agrees with him, ... (read more)

[anonymous]
Just getting citations out of the way: Eliezer talked about the repugnant conclusion here and here. He argues for shared W in Psychological Unity and Moral Disagreement. Kaj Sotala wrote a notable reply to Psychological Unity, Psychological Diversity. Finally, Coherent Extrapolated Volition is all about finding a way to unfold present-explicit-moralities into that shared-should that he believes in, so I'd expect to see some arguments there.

Now, doesn't the state of the world today suggest that human explicit-moralities are close enough that we can live together in a Hubble volume without too many wars, without a thousand broken coalitions of support over sides of irreconcilable differences, without blowing ourselves up because the universe would be better with no life than with the evil monsters in that tribe on the other side of the river? Human concepts are similar enough that we can talk to each other. Human aesthetics are similar enough that there's a billion-dollar video game industry. Human emotions are similar enough that Macbeth is still being produced four hundred years later on the other side of the globe. We have the same anatomical and functional regions in our brains. Parents everywhere use baby talk. On all six populated continents there are countries in which more than half of the population identifies with the Christian religions.

For all those similarities, is humanity really going to be split over the Repugnant Conclusion? Even if the Repugnant Conclusion is more of a challenge than muscling past a few inductive biases (scope insensitivity and the attribute substitution heuristic are also universal), I think we have some decent prospect for a future in which you don't have to kill me. Whatever will help us to get to that future, that's what I'm looking for when I say "right". No matter how small our shared values are once we've felt the weight of relevant moral arguments, that's what we need to find.
Jadagul

I'm pretty sure Eliezer is actually wrong about whether he's a meta-ethical relativist, mainly because he's using words in a slightly different way from the way meta-ethical relativists use them. Or rather, he thinks that meta-ethical relativism is using one specific word in a way that isn't really kosher. (A statement which I think he's basically correct about, but it's a purely semantic quibble and so a stupid thing to argue about.)

Basically, Eliezer is arguing that when he says something is "good," that's a factual claim with factual content. And he's right; he means something spe... (read more)

[anonymous]
In http://lesswrong.com/lw/t0/abstracted_idealized_dynamics/mgr, user steven wrote:

"When X (an agent) judges that Y (another agent) should Z (take some action, make some decision), X is judging that Z is the solution to the problem W (perhaps increasing a world's measure under some optimization criterion), where W is a rigid designator for the problem structure implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments. (Or at least X is asserting that it's shared.) Due to the nature of W, becoming informed will cause X and Y to get closer to the solution of W, but wanting-it-when-informed is not what makes that solution moral."

with which Eliezer agreed.

This means that, even though people might presently have different things in mind when they say something is "good", Eliezer does not regard their/our/his present ideas as either the meaning of their-form-of-good or his-form-of-good. The meaning of good is not "the things someone/anyone personally, presently finds morally compelling", but something like "the fixed facts that are found but not defined by clarifying the result of applying the shared human evaluative cognitive machinery to a wide variety of situations under reflectively ideal conditions of information." That is to say, Eliezer thinks, not only that moral questions are well defined, "objective", in a realist or cognitivist way, but that our present explicit-moralities all have a single, fixed, external referent which is constructively revealed via the moral computations that weigh our many criteria.

I haven't finished reading CEV, but here's a quote from Levels of Organization that seems relevant: "The target matter of Artificial Intelligence is not the surface variation that makes one human slightly smarter than another human, but rather the vast store of complexity that separates a human from an amoeba". Similarly, the target matter of inferences that figure out the content of morality is not the surface... (read more)
Jadagul

I was a grad student at Churchill, and we mostly ignored such things, but my girlfriend was an undergrad and felt compelled to educate me. I recall John's being the rich kids, Peterhouse being the gay men (not sure if that's for an actual reason or just the obvious pun), and a couple of others that I can't remember off the top of my head.

Sarokrae
I thought Homerton was the obvious gay pun? And one thing that IS reasonably accurate: New Hall is a female version of Hufflepuff. It is most of the time filled up by the "leftovers" (pooled there)...
Jadagul

It's mentioned, just not dwelled on. It's mentioned once in passing in each of the first two books:

Sorcerer's Stone:

They piled so much homework on them that the Easter holidays weren't nearly as much fun as the Christmas ones.

Chamber of Secrets:

The second years were given something new to think about during the Easter holidays.

And so on. It's just that I don't think anything interesting ever happens during them.

Jadagul

It occurred to me at some point that Fun Theory isn't just the correct reply to theodicy; it's also a critical component of any religious theodicy program. And one of the few ways I could conceive of someone providing major evidence of God's existence.

That is, I'm fairly confident that there is no god. But if I worked out a fairly complete version of Fun Theory, and it turned out that this really was the best of all possible worlds, I might have to change my mind.

Jadagul

I would agree with Karo, I think. I'm actually surprised by how accurate this list of predictions is; it's not at 50% but I'm not sure why we would expect it to be with predictions this specific. (I'm not saying he was epistemically justified, just that he's more accurate than I would have expected).

Following up on Eliezer's point, it seems like the core of his claims is: 1) computers will become smaller and people will have access to them basically 24/7. If you remember that even my cell phone, which is a total piece of crap and cost $10, would look l... (read more)

Jadagul

Shane, the problem is that there are (for all practical purposes) infinitely many categories the Bayesian superintelligence could consider. They all "identify significant regularities in the environment" that "could potentially become useful." The problem is that we as the programmers don't know whether the category we're conditioning the superintelligence to care about is the category we want it to care about; this is especially true with messily-defined categories like "good" or "happy." What if we train it to d... (read more)

Jadagul

This talk about metaethics is trying to justify building castles in the clouds by declaring the foundation to be supported by the roof. It doesn't deal with the fundamental problem at all - it makes it worse.

Caledonian, I don't want to speak for Eliezer. But my contention, at least, is that the fundamental problem is insoluble. I claim, not that this particular castle has a solid foundation, but that there exist no solid foundations, and that anywhere you think you've found solid earth there's actually a cloud somewhere beneath it. The fact that you're ... (read more)

Jadagul

Eliezer: Good post, as always. I'll repeat that I think you're closer to me in moral philosophy than anyone else I've talked to, with the probable exception of Richard Rorty, from whom I got many of my current views. (You might want to read Contingency, Irony, and Solidarity; it's short, and it talks about a lot of the stuff you deal with here.) That said, I disagree with you in two places. Reading your stuff and the other comments has helped me refine what I think; I'll try to state it here as clearly as possible.

1) I think that, as most people use the wo... (read more)

Jadagul

Ah, thanks Eliezer, that comment explains a lot. I think I mostly agree with you, then. I suspect (on little evidence) that each one of us would, extrapolated, wind up at his own attractor (or at least at a sparsely populated one). But I have no real evidence for this, and I can't imagine off the top of my head how I would find it (nor how I would find contradictory evidence), and since I'm not trying to build fAI I don't need to care. But what you've just sketched out is basically the reason I think we can still have coherent moral arguments; our attr... (read more)

Jadagul

Especially given that exposure to different fact patterns could push you in different directions. E.g., suppose right now I try to do what is right_1 (subscripts on everything, to avoid the appearance of a claim to universality). Now, suppose that if I experience fact pattern facts_1, I conclude that it is right_1 to modify my 'moral theory' to right_2, but if I experience fact pattern facts_2, I conclude that it is right_1 to modify to right_3.

Now, that's all well and good. Eliezer would have no problem with that, as long as the diagram commutes: that is, if it... (read more)
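
A minimal way to state the commutativity condition (my notation, not from the truncated comment): write U(r, f) for the moral theory you land on after starting from theory r and updating on fact pattern f. The diagram commutes when the endpoint doesn't depend on the order in which the fact patterns arrive:

```latex
% Hypothetical update operator U(r, f): theory r revised in light of facts f.
% "The diagram commutes" = the endpoint is order-independent:
\[
U\bigl(U(\mathrm{right}_1, \mathrm{facts}_1), \mathrm{facts}_2\bigr)
\;=\;
U\bigl(U(\mathrm{right}_1, \mathrm{facts}_2), \mathrm{facts}_1\bigr)
\]
```

If the two sides can differ, then which morality you end up endorsing depends on the accident of which experiences came first.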

Jadagul

But Mario, why not? In J-morality it's wrong to hurt people, both because I have empathy towards people and so I like them, and because people tend to create net positive externalities. But that's a value judgment. I can't come up with any argument that would convince a sociopath that he "oughtn't" kill people when he can get away with it. Even in theory.

There was nothing wrong with Raskolnikov's moral theory. He just didn't realize that he wasn't a Napoleon.

Jadagul

Eliezer, I think you come closer to sharing my understanding of morality than anyone else I've ever met. Places where I disagree with you:

First, as a purely communicative matter, I think you'd be clearer if you replaced all instances of "right" and "good" with "E-right" and "E-good."

Second, as I commented a couple threads back, I think you grossly overestimate the psychological unity of humankind. Thus I think that, say, E-right is not at all the same as J-right (although they're much more similar than either is to... (read more)

Jadagul

Caledonian and Tim Tyler: there are lots of coherent defenses of Christianity. It's just that many of them rest on statements like, "if Occam's Razor comes into conflict with Revealed Truth, we must privilege the latter over the former." This isn't incoherent; it's just wrong. At least from our perspective. Which is the point I've been trying to make. They'd say the same thing about us.

Roko: I sent you an email.

Jadagul

Doug raises another good point. Related to what I said earlier, I think people really do functionally have prior probability=1 on some propositions. Or act as if they do. If "The Bible is the inerrant word of God" is a core part of your worldview, it is literally impossible for me to convince you this is false, because you use this belief to interpret any facts I present to you. Eliezer has commented before that you can rationalize just about anything; if "God exists" or "The Flying Spaghetti Monster exists" or "reincarnation exists" is part of the machinery you use to interpret your experience, in a deep enough way, your experiences can't disprove it.
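
A worked Bayes step makes the "literally impossible to convince" point precise (the formalization is mine, not from the comment): if your prior on a hypothesis H is exactly 1, no evidence E can move it, since

```latex
\[
P(H \mid E)
= \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}
= \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \lnot H) \cdot 0}
= 1
\]
```

for any evidence with P(E | H) > 0. A probability-1 belief is unfalsifiable from the inside; it can only be dislodged by something other than evidence-based updating.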

Jadagul

Eliezer: for 'better' vs 'frooter,' of course you're right. I just would have phrased it differently; I've been known to claim that the word 'better' is completely meaningless unless you (are able to) follow it with "better at or for something." So of course, Jadagul_real would say that his worldview is better for fulfilling his values. And Jadagul_hypothetical would say that his worldview is better for achieving his values. And both would (potentially) be correct. (or potentially wrong. I never claimed to be infallible, either in reality o... (read more)

Jadagul

Steven: quite possibly related. I don't think they're exactly the same (the classic comic book/high fantasy "I'm evil and I know it" villain fits A2, but I'd describe him as amoral), but it's an interesting parallel.

Eliezer: I'm coming more and more to the conclusion that our main area of disagreement is our willingness to believe that someone who disagrees with us really "embodies a different optimization process." There are infinitely many self-consistent belief systems and infinitely many internally consistent optimization processe... (read more)

Jadagul

Paul Crowley: remember that US markets are much larger than, say, the US economy. From the article:

It depends on the comparison. U.S. GDP is $12 trillion, the total value of traded securities (debt and equity) denominated in U.S. dollars is estimated to be more than $50 trillion, and the global value of traded securities is about $165 trillion.

And $10 trillion isn't where they are now; it's where they will be in four years or so. So while it's a bloody large amount of money, it's unlikely to be more than, say, 5% of the traded securities on the market. And that doesn't include stuff like currency holdings.
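
For concreteness, the implied arithmetic (mine, using the article's figures):

```latex
\[
\frac{\$10\ \text{trillion}}{\$165\ \text{trillion}} \approx 6.1\%
\]
```

and since the $10 trillion is a projection four years out while the $165 trillion base will also have grown, a share in the neighborhood of 5% is plausible.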

Jadagul

Eliezer: I'm finding this one hard, because I'm not sure what it would mean for you to convince me that nothing was right. Since my current ethics system goes something like, "All morality is arbitrary, there's nothing that's right-in-the-abstract or wrong-in-the-abstract, so I might as well try to make myself as happy as possible," I'm not sure what you're convincing me of--that there's no particular reason to believe that I should make myself happy? But I already believe that. I've chosen to try to be happy, but I don't think there's a good ... (read more)

Jadagul

Joseph: I don't think I added more constraints, though it's a possibility. What extra constraints do you think I added?

As for not salvaging it, I can see why you would say that, but what word should be used to take its place? Mises commented somewhere in Human Action that we can be philosophical monists and practical dualists; I believe that everything is ultimately reducible to (quasi?-)deterministic quantum physics, but that doesn't mean that's the most efficient way to analyze most situations. When I'm trying to catch a ball I don't try to model t... (read more)

Jadagul

Joseph Knecht: I think you're missing the point of Eliezer's argument. In your hypothetical, to the extent Eliezer-as-a-person exists as a coherent concept, yes he chose to do those things. Your hypothetical is, from what I can tell, basically, "If technology allows me to destroy Eliezer-the-person without destroying the outer, apparent shell of Eliezer's body, then Eliezer is no longer capable of choosing." Which is of course true, because he no longer exists. Once you realize that "the state of Eliezer's brain" and "Eliezer's... (read more)

Jadagul

Eliezer: I'll second Hopefully Anonymous; this is almost exactly what I believe about the whole determinism/free-will debate, but it's devilishly hard to describe in English because our vocabulary isn't constructed to make these distinctions very clearly. (Which is why it took a 2700-word blog post.) Roland and Andy Wood address one of the most common and silliest arguments against determinism: "If determinism is true, why are you arguing with me? I'll believe whatever I'll believe." The fact that what you'll believe is deterministically fixed doesn't affect the fact that this argument is part of what fixes it.

Jadagul

Interestingly (at least, I think it's interesting), I'd always felt that way about time, before I learned about quantum mechanics. That's what a four-dimensional spacetime means, isn't it? And so science fiction stories that involve, say, changing the past have never made any sense to me. You can't change the past; it is. And no one can come from the future to change now, because the future is as well. Although now that I think about it more, I realize how this makes slightly more sense in this version of many-worlds than it does in a collapse theory.

chaosmosis
It's nice to know that someone else thought of this stuff as well. Here's what led me to the same conclusion without reading any hard science. I got really obsessed with Zeno's paradox a few months ago and managed to figure all of this out independently, using similar arguments to come to the same conclusion. Time is just change over space. There are lots of parallels between the arguments made here and what Zeno said. It's not identical, but thinking of Zeno led me to tangents that led me to think of this article.

I also read some quote by Einstein in a letter to a friend after the death of a loved one, saying that the death/life distinction is weird because there are space configurations in which people who have already died still exist. That helped too. Some of the stuff on this site also influenced my thought process: www.scottaaronson.com/writings/ (Pancake is the best one.)

And lastly there's a thought experiment meant to "prove" that time exists independently of change, which failed miserably once I thought about it, so it influenced me to move in the opposite direction. You have three universes, galaxies, planets, rooms, whatever, labelled A, B, and C. All motion in room A is set to stop every two years, and once it's stopped it stays stopped for a year. All motion in room B is set to stop every three years, and once it's stopped it stays stopped for a year. All motion in room C is set to stop every six years, and once it's stopped it stays stopped for a year. Then, supposedly, when they all finish the sixth year and move on to the seventh year, they would all "wake up" at the same time and be able to tell that time passed because their cycles relative to each other would have stopped.

My response was to say that it seemed like all time everywhere would stop if they all coincided (assuming that A, B, and C contained everything in all the universes), but also that the premises were flawed (assuming that A, B, and C did not contain everything) because a change... (read more)

Belatedest answer ever: don't think of it as changing the past, think of it as establishing a causal link to an alternate version of the past that had you appear in a time machine (and obeys other constraints, depending on the time travel rules of the story).

Jadagul

Eliezer: why uncountably infinite? I find it totally plausible that you need an infinite-dimensional space to represent all of configuration space, but needing uncountability seems, at least initially, to be unlikely.

Of course, it would be the mathematician who asks this question...

Jadagul

Sean: why is that "what utils do"? To the extent that we view utils as the semi-scientific concept from economics, they don't "just sum linearly." To economists utils don't sum at all; you can't make interpersonal comparisons of utility. So if you claim that utils sum linearly, you're making a claim of moral philosophy, and haven't argued for it terribly strongly.

Jadagul

Eliezer: after wrestling with this for a while, I think I've identified at least one of the reasons for all the fighting. First of all, I agree with you that the people who say, "3^^^3 isn't large enough" are off-base. If there's some N that justifies the tradeoff, 3^^^3 is almost certainly big enough; and even if it isn't, we can change the number to 4^^^4, or 3^^^^3, or Busy Beaver (Busy Beaver (3^^^3)), or something, and we're back to the original problem.
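
For readers unfamiliar with the notation, here is a standard unpacking of Knuth's up-arrows (mine, not part of the original comment); each extra arrow iterates the operation below it, so even the second rung is already enormous:

```latex
\[
3\uparrow 3 = 3^{3} = 27, \qquad
3\uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987,
\]
\[
3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow\bigl(3\uparrow\uparrow 3\bigr)
= \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\text{ threes}}
\]
```

which is why, if any N at all justifies the tradeoff, quibbling over whether 3^^^3 in particular is big enough misses the point.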

For me, at least, the problem comes down to what 'preference' means. I don't think I h... (read more)

Jadagul

Eliezer: have you really never heard the "10% of the brain" myth? Here's a link. You can get more by googling the phrase "ten percent brain."

Lots of people who believe in psychic phenomena will make arguments like, "studies show we only use ten percent of our brains. People with psychic powers are probably the ones who've figured out how to use more," or something like that.

And I agree that I've never heard the word 'science' used as a curiosity stopper. It doesn't make sense in context (as opposed to something like "... (read more)

bigjeff5
The recent movie "Inception" includes the 10% of the brain myth. I cringed, since it has been so soundly busted. If you could ignore that particular flaw, it was a really good movie. Unfortunately the myth is its foundational premise, so if you couldn't ignore it, chances are you'd hate the movie.
Jadagul

Eliezer: I think another factor is that different kinds of answers are differently useful. If you cast your spell on the train, I might come over and ask you how you did it. I can guarantee that "science" or "technology" wouldn't satisfy my curiosity (partly, I'm sure, because I'm a nerd and enjoy technology). But if you said, "It's this cool device I ordered from Sharper Image for $10,000," that would probably satisfy me, because it answers the relevant question. I can come up with mechanisms by which you could do things... (read more)

Jadagul

Eliezer: Here's another example, similar to ones other people have raised: a story I heard once that might explain why I think it's an important and useful concept.

Supposedly, in the early nineties when the Russians were trying to transition to a capitalist economy, a delegation from the economic ministry went to visit England, to see how a properly market-based economy would work. The British took them on a tour, among other things, of an open-air fresh foods market. The Russians were shown around the market, and were appropriately impressed. Afterward... (read more)

Jadagul

Eliezer: I generally like your posts, but I disagree with you here. I think that there's at least one really useful definition of the word emergence (and possibly several useless ones).

It's true, of course (at least to a materialist like me), that every phenomenon emerges from subatomic physics, and so can be called 'emergent' in that sense. But if I ask you why you made this post, your answer isn't going to be, "That's how the quarks interacted!" Our causal models of the world have many layers between subatomic particles and perceived phenome... (read more)

bigjeff5
I've actually seen a study on these types of jams, though I cannot remember the source. The results were pretty simple and surprising.

The researchers discovered they could create a massive traffic jam on a full but still flowing highway by simply having a single car brake for longer than necessary. The first person would brake for too long, causing the person behind him to brake for slightly longer (he isn't likely to brake for less time than the person ahead of him, lest he risk an accident), which continued down the line: a chain reaction. Drivers in the lanes on either side of the initial brake chain would also begin braking as they saw people in the central lane brake, being sensibly cautious during rush hour, and the slowdown would spread outward from their positions. Eventually traffic would halt, as the people ahead would have to stop completely before being able to move again.

I'm sure there was some kind of cutoff threshold for how long over the necessary length of time the first person has to brake, but it wasn't very long; a second or two would do it during a non-jammed rush hour. It also explains why, once a jam occurs for any reason, it is extremely slow to clear up even after the cause of the jam is long since removed.

Pretty shocking really, and certainly not an "emergent phenomenon". That's why EY is against using emergence for everything - there absolutely must be a reason, and that reason cannot be "lots of stuff interacts and now we get a traffic jam!" Using emergence as an explanation encourages you to stop thinking about the problem, rather than dig in and figure out why what happened happened. You have unexplained traffic jams - do you call it emergence or try to explain them? The rational thing to do is to try to explain them in a way that allows you to have expectations about future observations. In other words, "Emergence" is an answer looking for a problem.
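
The mechanism bigjeff5 describes is easy to reproduce in a toy model. Below is a minimal sketch (my construction, not the study's): the standard rule-184 traffic cellular automaton on a circular road, with illustrative parameters. Each car advances one cell per step when the cell ahead is free, and one driver is forced to hold the brake for a while; the jam that forms outlives the brake and drifts backward along the road.

```python
# Toy demonstration: one over-long brake creates a jam that persists after
# the cause is removed. Rule-184 traffic CA; all parameters are illustrative.

ROAD = 120           # cells on a circular road
BRAKE_CELL = 60      # where one driver brakes for longer than necessary
BRAKE_START = 20     # step at which the brake begins
BRAKE_STEPS = 30     # how long the brake is held
TOTAL_STEPS = 300

def step(road, blocked=None):
    """One synchronous update: a car advances iff the cell ahead is empty.
    Whatever car sits in `blocked` is forced to stay put (the over-long brake)."""
    n = len(road)
    new = [0] * n
    for i in range(n):
        if road[i]:
            ahead = (i + 1) % n
            if road[ahead] == 0 and i != blocked:
                new[ahead] = 1   # free road ahead: move forward one cell
            else:
                new[i] = 1       # braking, or stuck behind someone
    return new

# Free-flowing start: every car begins with an empty cell ahead of it.
road = [1 if i % 5 in (0, 2) else 0 for i in range(ROAD)]
cars = sum(road)

jam_cleared_at = None
for t in range(TOTAL_STEPS):
    blocked = BRAKE_CELL if BRAKE_START <= t < BRAKE_START + BRAKE_STEPS else None
    nxt = step(road, blocked)
    moving = sum(1 for i in range(ROAD) if road[i] and not nxt[i])
    if t in (10, 35, 60, 90):
        print(f"t={t:3d}: {moving}/{cars} cars moving")
    if blocked is None and jam_cleared_at is None and moving == cars:
        if t > BRAKE_START:      # ignore the free-flowing warm-up
            jam_cleared_at = t
    road = nxt

print(f"brake released at t={BRAKE_START + BRAKE_STEPS}, "
      f"full flow restored at t={jam_cleared_at}")
```

Running it shows the moving-car count at full strength before the brake, depressed during it, and still depressed well after release; in this model the stopped region also migrates backward through the cells, which is the "jam long outlives its cause" behavior described above.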