All of Mass_Driver's Comments + Replies

I just came here to point out that even nuclear weapons were a slow takeoff in terms of their impact on geopolitics and specific wars. American nuclear attacks on Hiroshima and Nagasaki were useful but not necessarily decisive in ending the war with Japan; some historians argue that the Russian invasion of Japanese-occupied Manchuria, the firebombing of Japanese cities with massive conventional bombers, and the ongoing starvation of the Japanese population due to an increasingly successful blockade were at least as influential in the Japanese decision to sur... (read more)

I suspect we're talking about two different things. 

If you just naively program a super-intelligent AI to satisfice a goal, then, sure, most of the candidate pathways to satisfice will involve accruing a lot of some type of power, because power is useful for achieving goals. That's a valid point, and it's important to understand that merely switching from optimizers to satisficers won't adequately protect us against overly ambitious AIs.

However, that doesn't mean that it's futile to explicitly penalize most (but not literally all) of the paths that th... (read more)

Sure, the metaphor is strained because natural selection doesn't have feelings, so it's never going to feel satisfied, because it's never going to feel anything. For whatever it's worth, I didn't pick that metaphor; Eliezer mentions contraception in his original post.

As I understand it, the point of bringing up contraception is to show that when you move from one level of intelligence to another, much higher level of intelligence, then the more intelligent agent can wind up optimizing for values that would be anathema to the less intelligent agents, even i... (read more)

One of my assumptions is that it's possible to design a "satisficing" engine -- an algorithm that generates candidate proposals for a fixed number of cycles, and then, assuming at least one proposal with estimated utility greater than X has been generated within that amount of time, selects one of the qualifying proposals at random. If there are no qualifying candidates, the AI takes no action.
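For concreteness, here is a minimal sketch of that kind of satisficing loop. The generate_proposal and estimate_utility functions are hypothetical stand-ins for whatever proposal generator and utility estimator the AI would actually use; nothing here is from the original discussion.

```python
import random

def satisfice(generate_proposal, estimate_utility, threshold, n_cycles, rng=random):
    """Generate candidate proposals for a fixed budget of cycles, then pick one
    qualifying proposal at random; return None (take no action) if none qualify."""
    qualifying = []
    for _ in range(n_cycles):
        proposal = generate_proposal()
        if estimate_utility(proposal) > threshold:
            qualifying.append(proposal)
    return rng.choice(qualifying) if qualifying else None
```

The key design choice is the final random selection: because the engine never ranks the qualifying proposals against each other, it has no systematic pull toward the highest-scoring (and potentially most extreme) plan.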

If you have a straightforward optimizer that always returns the action with the highest expected utility, then, yeah, you only have to miss one "cheat" that improves... (read more)

7TurnTrout
There is a special reason, and it's called "instrumental convergence." Satisficers tend to seek power.
2Chris_Leong
You mean quantilization? Oh yeah, I forgot about that. Good point.

Sure, I agree! If we miss even one such action, we're screwed. My point is that if people put enough skill and effort into trying to catch all such actions, then there is a significant chance that they'll catch literally all the actions that (1) are world-ending and (2) the AI actually wants to try.

There's also a significant chance we won't, which is quite bad and very alarming, hence people should work on AI safety.

3Chris_Leong
Hmm... It seems much, much harder to catch every single one than to catch 99%.

Right, I'm not claiming that AGI will do anything like straightforwardly maximize human utility. I'm claiming that if we work hard enough at teaching it to avoid disaster, it has a significant chance of avoiding disaster.

The fact that nobody is artificially mass-producing their genes is not a disaster from Darwin's point of view; Darwin is vaguely satisfied that instead of a million humans there are now 7 billion humans. If the population stabilizes at 11 billion, that is also not a Darwinian disaster. If the population spreads across the galaxy, mostly in... (read more)

3Rob Bensinger
As stated, I think Eliezer and I, and nearly everyone else, would agree with this.

?? Why would human natural selection be satisfied with 7 billion but not satisfied with a million? Seems like you could equally say 'natural selection is satisfied with a million, since at least a million is higher than a thousand'. Or 'natural selection is satisfied with a hundred, since at least a hundred is higher than fifty'. I understand the idea of extracting from a population's process of natural selection a pseudo-goal, 'maximize inclusive genetic fitness'; I don't understand the idea of adding that natural selection has some threshold where it 'feels' 'satisfied'.

I think we're doing a little better than I predicted. Rationalists seem to be somewhat better able than their peers to sift through controversial public health advice, to switch careers (or retire early) when that makes sense, to donate strategically, and to set up physical environments that meet their needs (homes, offices, etc.) even when those environments are a bit unusual. Enough rationalists got into cryptocurrency early enough and heavily enough for that to feel more like successful foresight than a lucky bet. We're doing something at least partly rig... (read more)

I mostly agree with the reasoning here; thank you to Eliezer for posting it and explaining it clearly. It's good to have all these reasons here in one place.

The one area I partly disagree with is Section B.1. As I understand it, the main point of B.1 is that we can't guard against all of the problems that will crop up as AI grows more intelligent, because we can't foresee all of those problems, because most of them will be "out-of-distribution," i.e., not the kinds of problems where we have reasonable training data. A superintelligent AI will do strange t... (read more)

5Chris_Leong
I had the exact same thought. My guess would be that Eliezer might say that, since the AI is maximising, if the generalisation function misses even one action of this sort as something that we should exclude, then we're screwed.

If natural selection had feelings, it might not be maximally happy with the way humans are behaving in the wake of Cro-Magnon optimization...but it probably wouldn't call it a disaster, either.

Out of a population of 8 billion humans, in a world that has known about Darwin for generations, very nearly zero are trying to directly manufacture large numbers of copies of their genomes -- there is almost no creative generalization towards 'make more copies of my genome' as a goal in its own right.

Meanwhile, there is some creativity going into the proxy goal 'hav... (read more)

I appreciate how much detail you've used to lay out why you think a lack of human agency is a problem -- compared to our earlier conversations, I now have a better sense of what concrete problem you're trying to solve and why that problem might be important. I can imagine that, e.g., it's quite difficult to tell how well you've fit a curve if the context in which you're supposed to fit that curve is vulnerable to being changed in ways whose goodness or badness is difficult to specify. I look forward to reading the later posts in this sequence so that I can... (read more)

6Charlie Steiner
Thanks for the comment :) I don't agree it's true that we have a coherent set of preferences for each environment. I'm sure we can agree that humans don't have their utility function written down in FORTRAN on the inside of our skulls. Nor does our brain store a real number associated with each possible state of the universe (and even if we did, by what lights would we call that number a utility function?).

So when we talk about a human's preferences in some environment, we're not talking about opening them up and looking at their brain, we're talking about how humans have this propensity to take reasonable actions that make sense in terms of preferences. Example: You say "would you like doritos or an apple?" and I say "apple," and then you use this behavior to update your model of my preferences. But this action-propensity that humans have is sometimes irrational (bold claim, I know) and not so easily modeled as a utility function, even within a single environment.

The scheme you talk about for building up human values seems to have a recursive character to it: you get the bigger, broader human utility function by building it out of smaller, more local human utility functions, and so on, until at some base level of recursion there are utility functions that get directly inferred from facts about the human. But unless there's some level of human action where we act like rational utility maximizers, this base level already contains the problems I'm talking about, and since it's the base level those problems can't be resolved or explained by recourse to a yet-baser level.

Different people have different responses to this problem, and I think it's legitimate to say "well, just get better at inferring utility functions" (though this requires some actual work at specifying a "better"). But I'm going to end up arguing that we should just get better at dealing with models of preferences that aren't utility functions.

Hmm. Nobody's ever asked me to try to teach them that before, but here's my advice:

  1. Think about what dimensions or components success at the task will include. E.g., if you're trying to play a song on the guitar, you might decide that a well-played song will have the correct chords played with the correct fingering and the correct rhythm.
  2. Think about what steps are involved in each of the components of success, with an eye toward ordering those steps in terms of which steps are easiest to learn and which steps are logical prerequisites for the others. E.g.,
... (read more)

I'm curious about the source of your intuition that we are obligated to make an optimal selection. You mention that the utility difference between two plausibly best meals could be large, which is true, especially when we drop the metaphor and reflect on the utility difference between two plausibly best FAI value schemes. And I suppose that, taken literally, the utilitarian code urges us to maximize utility, so leaving any utility on the table would technically violate utilitarianism.

On a practical level, though, I'm usually not in the habit of nitpicking ... (read more)

2Charlie Steiner
Yes, I agree with everything you said... until the last sentence ;) In the parable, your mom is mostly just there as moral support. Neither of you is doing cognitive work the other couldn't. But an aligned AI might have to do a lot of hard work figuring out what some candidate options for the universe even are, and if we want to check its work it will probably have to break big complicated visions of the future into human-comprehensible mouthfuls. So we really will need to extend trust to it - not as in trusting that whatever it picks is the only right thing, but trusting it to make decent decisions even in domains that are too complicated for human oversight.

Thank you for sharing this; there are several useful conceptual tools in here. I like the way you've found crisply different adjectives to describe different kinds of freedom, and I like the way you're thinking about the computational costs of surplus choices. 

Building on that last point a bit, I might say that a savvy agent who has already evaluated N choices could try to keep a running estimate of their expected gains from choosing the best option available after considering X more choices and then compare that gain to their cost of computing the op... (read more)
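A rough sketch of that stopping rule, under the simplifying (and entirely assumed) model that option values are independent draws from a known distribution and each additional evaluation has a fixed cost. All names here are illustrative, not from the original comment.

```python
import numpy as np

def expected_gain(best_so_far, sample_values, extra_choices, n_sims=10_000, rng=None):
    """Monte Carlo estimate of how much the best-known option is expected to
    improve if we evaluate `extra_choices` more options."""
    rng = rng or np.random.default_rng()
    draws = sample_values(size=(n_sims, extra_choices), rng=rng)
    best_after = np.maximum(draws.max(axis=1), best_so_far)
    return best_after.mean() - best_so_far

def keep_searching(best_so_far, sample_values, extra_choices, cost_per_option):
    """Search further only if the expected improvement beats the evaluation cost."""
    gain = expected_gain(best_so_far, sample_values, extra_choices)
    return gain > cost_per_option * extra_choices

# Illustrative usage: option values drawn from a standard normal,
# each additional evaluation costs 0.01 "utils".
sample_values = lambda size, rng: rng.normal(0.0, 1.0, size)
print(keep_searching(best_so_far=1.5, sample_values=sample_values,
                     extra_choices=10, cost_per_option=0.01))
```

The point of the sketch is just that "when to stop considering more options" can itself be treated as a small expected-value calculation rather than a vague feeling of choice fatigue.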

I agree with this post. I'd add that from what I've seen of medical school (and other high-status vocational programs like law school, business school, etc.), there is still a disproportionate emphasis on talking about the theory of the subject matter vs. building skill at the ultimate task. Is it helpful to memorize the names of thousands of arteries and syndromes and drugs in order to be a doctor? Of course. Is that *more* helpful than doing mock patient interviews and mock chart reviews and live exercises where you try to diagnose a tumor or a... (read more)

I like the style of your analysis. I think your conclusion is wrong because of wonky details about World War 2. 4 years of technical progress at anything important, delivered for free on a silver platter, would have flipped the outcome of the war. 4 years of progress in fighter airplanes means you have total air superiority and can use enemy tanks for target practice. 4 years of progress in tanks means your tanks are effectively invulnerable against their opponents, and slice through enemy divisions with ease. 4 years of progress in manufacturing means you... (read more)

1) I agree with the very high-level point that there are lots of rationalist group houses with flat / egalitarian structures, and so it might make sense to try one that's more authoritarian to see how that works. Sincere kudos to you for forming a concrete experimental plan and discussing it in public.

2) I don't think I've met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having peopl... (read more)

I don't think I've met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having people look up to you as their leader, feeling like an alpha male, etc.

As someone who knows Duncan moderately well in person and has been under his leadership in a few contexts (CFAR instructor training and the recent Dragon Army experiment), I can confirm that this is nowhere close to true. What Duncan is... (read more)

9robot-dreams
I agree that 4 is a concern. I disagree about 2. After having (a) participated in the weekend experiment and (b) done some "back-channel" references on Duncan, my impression is that he hates the fact that leadership will isolate him from the group he really wants to be a part of. I expect that if the experiment is successful, Duncan will eagerly set aside leadership and integrate himself with the group.

1) Thanks.

2) Nope, you're just way off (though I appreciate the candor). I thought about coming up with some sort of epistemically humble "maybe" or "I can see where you got that impression," but it seems more advisable to simply be direct, and to sound as confident as I am. I've been a leader, and I've been a follower, and I've transitioned in both directions within the same contexts, and there's no special draw there along any of the lines you laid out. In particular, I think the statement "this needs to happen, and no one e... (read more)

And if you think you can explain the concept of "systematically underestimated inferential distances" briefly, in just a few words, I've got some sad news for you...

"I know [evolution] sounds crazy -- it didn't make sense to me at first either. I can explain how it works if you're curious, but it will take me a long time, because it's a complicated idea with lots of moving parts that you probably haven't seen before. Sometimes even simple questions like 'where did the first humans come from?' turn out to have complicated answers."

0snewmark
Of course it's not actually a simple question; it's really a broad inquiry. In fact, it doesn't even need to have an answer, and even when it does, it usually alters the question slightly... the hard part is asking the right questions, not finding the answer. (It just dawned on me that this was the whole point of The Question in The Hitchhiker's Guide to the Galaxy, thanks for that.)

I am always trying to cultivate a little more sympathy for people who work hard and have good intentions! CFAR staff definitely fit in that basket. If your heart's calling is reducing AI risk, then work on that! Despite my disappointment, I would not urge anyone who's longing to work on reducing AI risk to put that dream aside and teach general-purpose rationality classes.

That said, I honestly believe that there is an anti-synergy between (a) cultivating rationality and (b) teaching AI researchers. I think each of those worthy goals is best pursued separately.

0Qiaochu_Yuan
That seems fine to me. At some point someone might be sufficiently worried about the lack of a cause-neutral rationality organization to start a new one themselves, and that would be probably fine; CFAR would probably try to help them out. (I don't have a good sense of CFAR's internal position on whether they should themselves spin off such an organization.)

Yeah, that pretty much sums it up: do you think it's more important for rationalists to focus even more heavily on AI research so that their example will sway others to prioritize FAI, or do you think it's more important for rationalists to broaden their network so that rationalists have more examples to learn from?

Shockingly, as a lawyer who's working on homelessness and donating to universal income experiments, I prefer a more general focus. Just as shockingly, the mathematicians and engineers who have been focusing on AI for the last several years prefe... (read more)

6Qiaochu_Yuan
I think this question implicitly assumes as a premise that CFAR is the main vehicle by which the rationality community grows. That may be more or less true now, plausibly it can become less true in the future, but most interestingly it suggests that you already understand the value of CFAR as a coordination point (for rationality in general). That's the kind of value I think CFAR is trying to generate in the future as a coordination point for AI safety in particular, because it might in fact turn out to be that important. I sympathize with your concerns - I would love for the rationality community to be more diverse along all sorts of axes - but I worry they're predicated on a perspective on existential risk-like topics as these luxuries that maybe we should devote a little time to but that aren't particularly urgent, and that if you had a stronger sense of urgency around them as a group (not necessarily around any of them individually) you might be able to have more sympathy for people (such as the CFAR staff) who really, really just want to focus on them, even though they're highly uncertain and even though there are no obvious feedback loops, because they're important enough to work on anyway.

I think a lot of this is fair concern (I care about AI but am currently neutral/undecided on whether this change was a good one)

But I also note that "a couple research institutions" is sweeping a lot of work into deliberately innocuous sounding words.

First - we have lots of startups that aren't AI related that I think were in some fashion facilitated by the overall rationality community project (With CFAR playing a major role in pushing that project forward).

We also have Effective Altruism Global, and many wings of the EA community that have bene... (read more)

Well, like I said, AI risk is a very important cause, and working on a specific problem can help focus the mind, so running a series of AI-researcher-specific rationality seminars would offer the benefit of (a) reducing AI risk, (b) improving morale, and (c) encouraging rationality researchers to test their theories using a real-world example. That's why I think it's a good idea for CFAR to run a series of AI-specific seminars.

What is the marginal benefit gained by moving further along the road to specialization, from "roughly half our efforts these d... (read more)

5Qiaochu_Yuan
Yes, I agree that this is the important question. I think there are benefits around stronger coordination among 1) CFAR staff, 2) CFAR supporters, and 3) CFAR participants around AI safety that are not captured by a quantitative increase in the number of seminars being run or whatever. In the ideal situation, you can try to create a group of people who have common knowledge that everyone else in the group is actually dedicated to AI safety, and it allows them to coordinate better because it allows them to act and make plans under the assumption that everyone else is dedicated to AI safety, at every level of meta (e.g. when you make plans which are contingent on someone else's plans). If CFAR instead continues to publicly present as approximately cause-neutral, these assumptions shatter and people can't rely on each other and coordinate as well. I think it would be pretty difficult to attempt to quantify the benefit of doing this but I'd be skeptical of any confident and low upper bounds. There are also benefits from CFAR signaling that it cares enough about AI safety in particular to drop cause neutrality; that could encourage some people who otherwise might not have to take the cause more seriously.

I dislike CFAR's new focus, and I will probably stop my modest annual donations as a result.

In my opinion, the most important benefit of cause-neutrality is that it safeguards the integrity of the young and still-evolving methods of rationality. If it is official CFAR policy that reducing AI risk is the most important cause, and CFAR staff do almost all of their work with people who are actively involved with AI risk, and then go and do almost all of their socializing with rationalists (most of whom also place a high value on reducing AI risk), then ther... (read more)

7Qiaochu_Yuan
I see here a description of several potential costs of the new focus but no attempt to weigh those costs against the potential benefit.

Does anyone know what happened to TC Chamberlin's proposal? In other words, shortly after 1897, did he in fact manage to spread better intellectual habits to other people? Why or why not?

Thank you! I see that some people voted you down without explaining why. If you don't like someone's blurb, please either contribute a better one or leave a comment to specifically explain how the blurb could be improved.

Again, fair point -- if you are reading this, and you have experience designing websites, and you are willing to donate a couple of hours to build a very basic website, let us know!

Sounds good to me. I'll keep an eye out for public domain images of the Earth exploding. If the starry background takes up enough of the image, then the overall effect will probably still hit the right balance between alarm and calm.

A really fun graphic would be an asteroid bouncing off a shield and not hitting Earth, but that might be too specific.

2CCC
Yes, that would work as well, as long as it's clear to the viewer what is going on.

Great! Pick one and get started, please. If you can't decide which one to do, please do asteroids.

0EngineerofScience
Also, can I write in my asteroid essay about the potential helpfulness of asteroids? We believe that one asteroid (just one!) could be worth $1,000,000,000,000. In other words, catching one asteroid could be worth one trillion dollars. Could I mention that in my hundred-word blurb?
4EngineerofScience
I will do asteroids.

It would go to the best available charity that is working to fight that particular existential risk. For example, the 'donate' button for hostile AI might go to MIRI. The donate button for pandemics might go to the Centers for Disease Control, and the donate button for nuclear holocaust might go to the Global Threat Reduction Initiative. If we can't agree on which agency is best for a particular risk, we can pick one at random from the front-runners.

If you have ideas for which charities are the best for a particular risk, please share them here! That is part of the work that needs to get done.

Hi Dorikka,

Yes, I am also concerned that the banner is too visually complicated -- it's supposed to be a scene of a flooded garage workshop, suggesting both major problems and a potential ability to fix them, but the graphic is not at all iconic. If you have another idea for the banner (or can recommend a particular font that would work better), please chime in.

I am not convinced that www.existential-risk.org is a good casual landing page, because (a) most of the content is in the form of an academic CV, (b) there is no easy-to-read summary telling the reader about existential risks, and (c) there is no donate button.

4CCC
I agree with Dorikka - that banner image is, well, not the best. I did not even notice that the workshop was flooded until I saw you point it out in this post; I thought it merely had a shiny floor and a low workbench (and took no particular notice of either detail). If I may make a recommendation, I would suggest a mostly-black banner, with a few stars (i.e. a view of space) with, on the far right, a picture of Earth blowing up (something along the lines of this image - though, of course, not exactly that image because of copyright, but along those lines). Have the text white, in one image, with a transparent background, left-aligned; and the space/Earth image as a different image behind it, right-aligned; then your banner will still look good on any screen resolution. I think that would make a good, attention-grabbing banner.
8Dorikka
Do you know anyone who has done website design, like as an actual job? May want to ask them. I can really just say whether something does or doesn't look right to me - honestly wouldn't know where to start recommending fonts and stuff.

It's probably "Song of Light," or if you want a more literal translation, "Hymn to Light."

You might be wrestling with a hard trade-off between wanting to do as much good as possible and wanting to fit in well with a respected peer group. Those are both good things to want, and it's not obvious to me that you can maximize both of them at the same time.

I have some thoughts on your concepts of "special snowflake" and "advice that doesn't generalize." I agree that you are not a special snowflake in the sense of being noticeably smarter, more virtuous, more disciplined, whatever than the other nurses on your shift. I'll concede t... (read more)

OK, but why is "chair" shorter than "furniture"? Why is "blue" shorter than "color"? Furniture and color don't strike me as words that are so abstract as to rarely see use in everyday conversation.

0Nornagest
We're venturing into wild speculation territory here, but I suspect that there's a sort of sweet spot of specificity, between adding extraneous details and talking in terms so general that they're only useful for accounting headers or philosophy papers, and that the shortest nouns will fall into the center of it. "We need seventy pieces of furniture for the banquet" is a sentence I'd expect to come up less often than "we need sixty chairs and ten tables". "Furniture" and "color" do show up in everyday conversation, but often in contexts like "what furniture needs repairs?" or "what color did you paint the kitchen?"

I'm confused. What makes "chair" the basic category? I mean, obviously more basic categories will have shorter words -- but who decided that "solid object taking up roughly a cubic meter designed to support the weight of a single sitting human" was a basic category?

3Nornagest
It probably comes down to frequency of use, as Eliezer alludes in the next couple of sentences. Shorter words are easier to use and likely to be preferred, but independently of that, the need to refer to an object sized and designed for one person to comfortably sit on it will likely come up more often than the need for a word for "personal recuperation armature, padded interface surfaces, overstuffed, with position-activated leg supports". There's no Platonic ideal of chairness of which reclinerness is a subclass, but there are facts of the social and physical environment in which language evolves. The category breakdown is arbitrary at some level, but the tendency to prefer more general to more specific categories is real, and so is the association with length. Japanese aoi covers more ground than English blue, but both languages have analogs of "sky blue" -- and they're both longer than the base word.

That's an important warning, and I'm glad you linked me to the post on ethical inhibitions. It's easy to be mistaken about when you're causing harm, and so allowing a buffer in honor of the precautionary principle makes sense. That's part of why I never mention the names of any of my clients in public and never post any information about any specific client on any public forums -- I expect that most of the time, doing so would cause no harm, but it's important to be careful.

Still, I had the sense when I first read your comment six weeks ago that it's not a... (read more)

It seems to me that educated people should know something about the 13-billion-year prehistory of our species and the basic laws governing the physical and living world, including our bodies and brains. They should grasp the timeline of human history from the dawn of agriculture to the present. They should be exposed to the diversity of human cultures, and the major systems of belief and value with which they have made sense of their lives. They should know about the formative events in human history, including the blunders we can hope not to repeat. They

... (read more)
3Shmi
The rest of the article is also well worth the read.

You're...welcome? For what it's worth, mainstream American legal ethics try to strike a balance between candor and advocacy. It's actually not OK for lawyers to provide unabashed advocacy; lawyers are expected to also pay some regard to epistemic accuracy. We're not just hired mercenaries; we're also officers of the court.

In a world that was full of Bayesian Conspiracies, where people routinely teased out obscure scraps of information in the service of high-stakes, well-concealed plots, I would share your horror at what you describe as "disclosing pe... (read more)

4wedrifid
Your ethical intent sounds fine but that is of limited use without competence. The sort of casual disclosure described in the ancestor anecdote would make me slightly downgrade my evaluation of the trustworthiness and social competence of any professional that works with sensitive information. Much like those observed casually gossiping about other people at inappropriate times will be silently downgraded as potential confidants. The overwhelming majority of minor ethical transgressions that we make will "do no harm". Some do. If the consequences were that easy to predict we wouldn't need ethical inhibitions in the first place.

Is that revelation grounds for a lawsuit, a criminal offense or merely grounds for disbarment?

None of the above, really, unless you have so few murder cases that someone could plausibly guess which one you were referring to. I work with about 100 different plaintiffs right now, and my firm usually accepts any client with a halfway decent case who isn't an obvious liar. Under those conditions, it'd be alarming if I told you that 100 out of 100 were telling the truth -- someone's bound to be at least partly faking their injury. I don't think it undermines... (read more)

6wedrifid
I will take you at your word that you could get away with making such disclosures. You are the lawyer and so the expert at judging what ethical violations people can technically get away with. I have to thank you for allowing me to update my expectations regarding the ethical standards I can expect from an average legal representative. I now know I will need to filter more aggressively myself and not rely on the system to provide what I would otherwise have taken to be the most rudimentary standards of integrity I need from someone in that role. (That's a sincere thank you, not snide pettiness. I really was confused about what social rules the legal subculture would at least enforce lip-service adherence to.)

'Abstract' does not mean what you think it means. You are revealing concrete information that is slightly vague. You believe this is OK and as such can be trusted much less with private information. I still may (hypothetically) recommend someone use the services of someone with your beliefs about what constitutes acceptable disclosure of confidential information, but only if their fees are sufficiently low relative to their other competencies as to offset this liability.

It wouldn't be alarming at all. It would sound exactly equivalent to "No comment". It'd sound like you were doing your job (albeit more awkwardly than if you had just shut your mouth and signaled tact). If you choose to speak about the guilt of your clients and choose to reveal anything less than the token "My clients are Resistance, not Spies" then you are disclosing personal information. Because mathematics.

I usually abhor bullshit (advocacy with casual indifference to epistemic accuracy). Lawyers represent a notable exception, where unabashed advocacy for each side is the least bad option I know of for minimising injustice.

I'm confused about how this works.

Suppose the standard were to use 80% confidence. Would it still be surprising to see 60 of 60 studies agree that A and B were not linked? Suppose the standard were to use 99% confidence. Would it still be surprising to see 60 of 60 studies agree that A and B were not linked?

Also, doesn't the prior plausibility of the connection being tested matter for attempts to detect experimenter bias this way? E.g., for any given convention about confidence intervals, shouldn't we be quicker to infer experimenter bias when a set of stu... (read more)

9PhilGoetz
"95% confidence" means "I am testing whether X is linked to Y. I know that the data might randomly conspire against me to make it look as if X is linked to Y. I'm going to look for an effect so large that, if there is no link between X and Y, the data will conspire against me only 5% of the time to look as if there is. If I don't see an effect at least that large, I'll say that I failed to show a link between X and Y." If you went for 80% confidence instead, you'd be looking for an effect that wasn't quite as big. You'd be able to detect smaller clinical effects--for instance, a drug that has a small but reliable effect--but if there were no effect, you'd be fooled by the data 20% of the time into thinking that there was. It would if the papers claimed to find a connection. When they claim not to find a connection, I think not. Suppose people decided to test the hypothesis that stock market crashes are caused by the Earth's distance from Mars. They would gather data on Earth's distance from Mars, and on movements in the stock market, and look for a correlation. If there is no relationship, there should be zero correlation, on average. That (approximately) means that half of all studies will show a negative correlation, and half will have positive correlation. They need to pick a number, and say that if they find a positive correlation above that number, they've proven that Mars causes stock market crashes. And they pick that number by finding the correlation just exactly large enough that, if there is no relationship, it happens 5% of the time by chance. If the proposition is very very unlikely, somebody might insist on a 99% confidence interval instead of a 95% confidence interval. That's how prior plausibility would affect it. Adopting a standard of 95% confidence is really a way of saying we agree not to haggle over priors.

Yes, Voldemort could probably teach DaDA without suffering from the curse, and a full-strength Voldemort with a Hogwarts Professorship could probably steal the stone.

I'm not sure either of those explains how Voldemort got back to full-strength in the first place, though. Did Voldemort fake the charred hulk of his body? And Harry forgot that apparent charred bodies aren't perfectly reliable evidence of a dead enemy because his books have maxims like "don't believe your enemy is dead until you see the body?" But then what was Voldemort doing between 1975 and 1990? He was winning the war until he tackled Harry; why would he suddenly decide to stop?

2Aureateflux
I've been leaning away from the idea of Quirrel being Voldemort because there are so many differences between him and canon!Quirrel... They don't appear to be the same person and the details of Quirrel's affliction are different. At the very least, the possession is different, either for a fundamental reason or because HPMOR!Quirrel is more capable of resisting Voldemort. This leads to a few hypotheses:

  1. Quirrel is not possessed at all and suffers from some unrelated affliction, such as the side effects of a dark ritual. (Doesn't discount the possibility of Quirrel actually BEING Voldemort, no need for possession, depending on circumstances of his 'death'.)
  2. Quirrel is possessed by Voldemort, but is able to resist in a way that causes or exacerbates the zombie state.
     2a. Quirrel is slowly losing against Voldemort (explanation for increasing frequency of zombie state).
     2b. Quirrel actually overpowered Voldemort after he was possessed and counter-possessed Voldemort, thereby gaining Voldemort's various resources (Voldemort rallying might explain increased frequency of zombie state).
  3. The method of possession is somehow different, causing different symptoms.

Keep in mind that the only actual evidence for HPMOR!Quirrel being Voldemort is the proximity-based sense-of-doom and the problems with casting spells on each other. This is actually quite different from what happens in canon, where the issue is with the wands, not their persons. Also, the clash between the Patronus and the Killing Curse didn't cause the Priori thing to happen. So the doom feeling could have a number of different explanations while the spell-casting issue doesn't seem to be the same as that of canon (and even if it were, that's only evidence of Quirrel using Voldemort's wand, not actually of BEING Voldemort... And wasn't the location of Voldemort's wand what Bellatrix was trying to tell Harry during the escape?).

It seems to me that if Voldemort isn't actually the referent of the Prophecy (

Puzzle:

Who is ultimately in control of the person who calls himself Quirrell?

  • Voldemort

If Voldemort is possessing the-person-pretending-to-be-Quirrell using the path Dumbledore & co. are familiar with, or for that matter by drinking unicorn blood, then why isn't Voldy's magic noticeably weaker than before? Quirrell seems like he could at least hold his own against Dumbledore, and possibly defeat him.

If Voldemort took control of the-person-pretending-to-be-Quirrell's body outright using incredibly Dark magic, then why would Quirrell openly suggest th... (read more)

3DanielH
It is possible, though unlikely given his increasing zombieness, that "Quirrell" has found a way around Voldemort's curse. The one that comes to mind is that Voldemort cursed the Defense against the Dark Arts position. Quirrell is teaching Battle Magic, not Defense against the Dark Arts, so he may be immune. Similarly, if Quirrell is Voldemort, he may be able to counter his own curse (or have put a check for himself or a loophole on the curse); if Canon!Voldemort had thought of that, he may have been able to successfully steal the Stone.

Is there more to the Soylent thing than mixing off-the-shelf protein shake powder, olive oil, multivitamin pills, and mineral supplement pills and then eating it?

3RomeoStevens
Not really. In fact, I'm beginning to think that the Soylent guy is obfuscating his source of supplies in order to obfuscate how simple it is. I found a powder that is 100% of everything for $1 a scoop at Costco.

Isn't there a very wide middle ground between (1) assigning 100% of your mental probability to a single model, like a normal curve, and (2) assigning your mental probability proportionately across every conceivable model à la Solomonoff?

I mean the whole approach here sounds more philosophical than practical. If you have any kind of constraint on your computing power, and you are trying to identify a model that most fully and simply explains a set of observed data, then it seems like the obvious way to use your computing power is to put about a quarter of yo... (read more)
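One way to picture that middle ground is ordinary Bayesian model averaging over a handful of candidate model families, rather than over a single model or over all computable ones. The model set and data below are purely illustrative, not anything proposed in the original comment:

```python
import numpy as np
from scipy import stats

# Illustrative data: 50 observations from some unknown process.
data = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=50)

# A few hand-picked candidate models, each given an equal share of prior mass.
models = {
    "normal(0,1)":  stats.norm(0, 1),
    "normal(1,2)":  stats.norm(1, 2),
    "laplace(0,2)": stats.laplace(0, 2),
    "cauchy(0,1)":  stats.cauchy(0, 1),
}
prior = {name: 1 / len(models) for name in models}

# Posterior over models: prior times likelihood, renormalized.
log_post = {name: np.log(prior[name]) + dist.logpdf(data).sum()
            for name, dist in models.items()}
z = max(log_post.values())
weights = {name: np.exp(lp - z) for name, lp in log_post.items()}
total = sum(weights.values())
for name in models:
    print(f"{name}: posterior weight {weights[name] / total:.3f}")
```

The computation stays cheap because the hypothesis space is small and chosen by human judgement, which is exactly where the Gelman/Shalizi-style objection about how new models get proposed comes in.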

6Richard_Kennaway
Yes, the question is what that middle ground looks like -- how you actually come up with new models. Gelman and Shalizi say it's a non-Bayesian process depending on human judgement. The behaviour that you rightly say is absurd, of the Bayesian Flying Dutchman, is indeed Shalizi's reductio ad absurdum of universal Bayesianism. I'm not sure what gwern has just been arguing, but it looks like doing whatever gets results through the week while going to the church of Solomonoff on Sundays. An algorithmic method of finding new hypotheses that works better than people is equivalent to AGI, so this is not an issue I expect to see solved any time soon.

What's the percent chance that I'm doing it wrong?

The whole quote:

If you're not making quantitative predictions, you're probably doing it wrong, or you're probably not doing it as well as you can. That's sort of become kind of critical to how we operate. You have to predict in advance. Anybody can explain anything after the fact, and it has to be quantitative or you're not being serious about how you're approaching the problem.

The problems you face might not require a serious approach; without more information, I can't say.

5DanArmak
78.544%.

I once heard a story about the original writer of the Superman Radio Series. He wanted a pay rise, his employers didn't want to give him one. He decided to end the series with Superman trapped at the bottom of a well, tied down with kryptonite and surrounded by a hundred thousand tanks (or something along these lines). It was a cliffhanger. He then made his salary demands. His employers refused and went round every writer in America, but nobody could work out how the original writer was planning to have Superman escape. Eventually the radio guys had to go

... (read more)
9Fronken
Story ... too awesome ... not to upvote ... not sure why it's rational, though.
2Richard_Kennaway
I think this is an updating of the cliché from serial adventure stories for boys, where an instalment would end with a cliffhanger, the hero facing certain death. The following instalment would resolve the matter by saying "With one bound, Jack was free." Whether those exact words were ever written is unclear from Google, but it's a well-known form of lazy plotting. If it isn't already on TVTropes, now's your chance.
7CronoDAS
Speaking of writing yourself into a corner... According to TV Tropes, there was one show, "Sledge Hammer", which ended its first season with the main character setting off a nuclear bomb while trying to defuse it. They didn't expect to be renewed for a second season, so when they were, they had a problem. This is what they did: Previously on Sledge Hammer: [scene of nuclear explosion] Tonight's episode takes place five years before that fateful explosion.
2Eliezer Yudkowsky
There's so many different ways that story couldn't possibly be true... (EDIT: Ooh, turns out that the Superman Radio program was the one that pulled off the "Clan of the Fiery Cross" punch against the KKK.)

Ironically, this is my most-upvoted comment in several months.

OK, so how else might we get people to gate-check the troublesome, philosophical, misleading parts of their moral intuitions that would have fewer undesirable side effects? I tend to agree with you that it's good when people pause to reflect on consequences -- but then when they evaluate those consequences I want them to just consult their gut feeling, as it were. Sooner or later the train of conscious reasoning had better dead-end in an intuitively held preference, or it's spectacularly unlikely to fulfill anyone's intuitively held preferences. (I, of cou... (read more)

0deathpigeon
Am I to understand that you're suggesting that we apply awesomeness to the consequences, and not the actions? Because that would be different from what I thought was being implied by saying "'Awesome' is implicitly consequentialist." What I took that to mean is that, when one looks at an action, and decides whether or not it is awesome, the person is determining whether or not the consequences are something that they find desirable. That is distinct from looking at consequences and determining whether or not the consequences are awesome. That requires one to ALREADY be looking at things consequentially. I think that, after thinking of things, when people use the term "awesome" they use it differently depending on how they view the world. If someone is already a consequentialist, that person will look at things consequentially when using the word awesome. If someone is already a deontologist, that person will look at the fulfillment of duties when using the word awesome. This is just a hypothesis, and I'm not very certain that it's true, at the moment. I'm not entirely sure how to prompt that sort of behavior, to be honest.

Given at least moderate quality, upvotes correlate much more tightly with accessibility / scope of audience than with quality of writing. Remember, the article score isn't an average of hundreds of scalar ratings -- it's the sum of thousands of ratings of [-1, 0, +1] -- and the default rating of anyone who doesn't see, doesn't care about, or doesn't understand the thrust of a post is 0. If you get a high score, that says more about how many people bothered to process your post than about how many people thought it was the best post ever.

4Mass_Driver
Ironically, this is my most-upvoted comment in several months.
6khafra
Yes, to counter this effect I tend to upvote the math-heavy decision theory posts and comment chains if I have even the slightest idea what's going on, and the Vladimirs seem to think it's not stupid.

OK, let's say you're right, and people say "awesome" without thinking at all. I imagine Nyan_Sandwich would view that as a feature of the word, rather than as a bug. The point of using "awesome" in moral discourse is precisely to bypass conscious thought (which a quick review of formal philosophy suggests is highly misleading) and access common-sense intuitions.

I think it's fair to be concerned that people are mistaken about what is awesome, in the sense that (a) they can't accurately predict ex ante what states of the world they will w... (read more)

2deathpigeon
Those are both good points. I view it as a bug because I feel like too much ethical thought bypasses conscious thought to ill effect. This can range from people not thinking about the ethics of homosexuality because their pastor tells them it's a sin to not thinking about the ethics of invading a country because people believe they are responsible for an attack of some kind, whether they are or not. However, Nyan_Sandwich's ethics of awesome does appear to bypass such problems, to an extent. It's hardly s, but it appears like it would do its job better than many other ethical systems in place today. I should note that it wasn't ever intended to be a very strong objection. As a matter of fact, the original objection wasn't to the conclusions made, but to the path taken to get to them. If an argument for a conclusion I agree with is faulty, I usually attempt to point out the faults in the argument so that the argument can be better. Also, I apologize for taking so long to respond. Life (and Minecraft playing) interfered with me checking LessWrong, and I'm not yet used to checking it regularly as I'm new here.

To say that something's 'consequentialist' doesn't have to mean that it's literally forward-looking about each item under consideration. Like any other ethical theory, consequentialism can look back at an event and determine whether it was good/awesome. If your going white-water rafting was a good/awesome consequence, then your decision to go white-water rafting and the conditions of the universe that let you do so were good/awesome.

0deathpigeon
That misses my point. When people say awesome, they don't think back at the consequences or look forward for consequences. People say awesome without thinking about it AT ALL.

Also, this book was a horrible agglomeration of irrelevant and un-analyzed factoids. If you've already read any two Malcolm Gladwell books or Freakonomics, it'd be considerably more educational to skip this book and just read the cards in a Trivial Pursuit box.

The undergrad majors at Yale University typically follow lukeprog's suggestion -- there will be 20 classes on stuff that is thought to constitute cutting-edge, useful "political science" or "history" or "biology," and then 1 or 2 classes per major on "history of political science" or "history of history" or "history of biology." I think that's a good system. It's very important not to confuse a catalog of previous mistakes with a recipe for future progress, but for the same reasons that general hi... (read more)
