
What are your contrarian views?

10 Post author: Metus 15 September 2014 09:17AM

As per a recent comment, this thread is meant to voice contrarian opinions, that is, anything this community tends not to agree with. Thus I ask you to post your contrarian views and to upvote anything you do not agree with based on personal beliefs. Spam and trolling still need to be downvoted.

Comments (806)

Comment author: pinkocrat 12 October 2014 04:06:06PM *  1 point [-]

[Please read the OP before voting. Special voting rules apply.]

As long as you get the gist (think in probability instead of certainty, update incrementally when new evidence comes along), there's no additional benefit to learning Bayes' Theorem.
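For reference, the incremental updating the comment alludes to is just repeated application of Bayes' Theorem; a minimal sketch in Python (the test numbers below are purely illustrative):

```python
# Incremental Bayesian updating: multiply the prior by the likelihood
# of each new piece of evidence, then renormalize.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given P(hypothesis) = prior."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Example: a test with a 90% true-positive and 20% false-positive rate,
# applied twice to a hypothesis with a 1% prior.
p = 0.01
p = update(p, 0.9, 0.2)   # after the first positive result
p = update(p, 0.9, 0.2)   # after the second positive result
print(round(p, 3))        # prints 0.17
```

Note how two pieces of moderately strong evidence move a 1% prior to about 17%, not to certainty: the "update incrementally" gist and the theorem itself give the same answer here.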

Comment author: hikari 09 October 2014 07:34:37PM 1 point [-]

[Read the OP before voting for special voting rules.]

The many worlds interpretation of quantum mechanics is categorically confused nonsense. Its origins lie in a map/territory confusion, and in the mind projection fallacy. Configuration space is a map, not territory—it is an abstraction used for describing the way that things are laid out in physical space. The density matrix (or in the special case of pure states, the state vector, or the wave function) is a subjective calculational tool used for finding probabilities. It's something that exists in the mind. Any 'interpretation' of quantum mechanics which claims that any of these things exists in reality (e.g. MWI) therefore commits the mind projection fallacy.

Comment author: [deleted] 26 September 2014 02:06:37AM 1 point [-]

While I am not pro-wireheading and I expect this to be only a semi-contrarian position here...

Happiness is actually far more important than people give it credit for, as a component of a reflectively coherent human utility function. About two thirds of all statements made of the form, "$HIGHERORDERVALUE is more important than being happy!" are reflectively incoherent and/or pure status-signaling. The basic problem that needs addressing is of distinction between simplistic pleasures and a genuinely happy life full of variety, complexity, and subtlety, but the signaling games keep this otherwise obvious distinction from entering the conversation simply because happiness of all kinds is signaled to be low-status.

Comment author: Eitan_Zohar 22 September 2014 09:11:55PM *  1 point [-]

History isn't over in any Fukuyamian sense; in fact the turmoil of the twenty-first century will dwarf the twentieth. A US-centered empire will likely take shape by century's end.

I will elaborate if requested.

Comment author: FiftyTwo 21 September 2014 01:08:39AM 9 points [-]

[Contrarian thread special voting rules]

I bite the bullet on the repugnant conclusion.

Comment author: FiftyTwo 21 September 2014 01:08:09AM 9 points [-]

[Contrarian thread special voting rules]

I would not want to be cryonically frozen and resurrected, as my sense of who I am is tied to social factors that would be lost.

Comment author: jaime2000 21 September 2014 02:05:27AM *  4 points [-]

Would you be willing to freeze if your family did? Your friends and family? Your whole country? Or even if everyone in the world were preserved, would you expect the structure of society post-resurrection to be different enough that you would refuse preservation?

Comment author: FiftyTwo 21 September 2014 10:51:54AM 2 points [-]

I'm not sure about the friends and family examples; it would depend on what I thought that future society would be like. If cryonics were the norm I probably wouldn't opt out of it, because if resurrection were successful I would have a reasonable expectation of there being other people in the same situation, so there would be infrastructure to support us.

The social factors I'm thinking of include the skills, qualifications and experience that I have developed in my life, which would likely be irrelevant in a world that can resurrect me. At best I would be a historical curiosity with nothing to contribute.

Comment author: blacktrance 21 September 2014 02:48:51AM 6 points [-]

[Please read the OP before voting. Special voting rules apply.]

Moral realism is true.

Comment author: [deleted] 21 September 2014 02:59:19AM *  5 points [-]

[Please read the OP before voting. Special voting rules apply.]

The necessary components of AGI are quite simple, and have already been worked out in most cases. All that is required is a small amount of integrative work to build the first UFAI.

Comment author: Azathoth123 21 September 2014 08:35:03PM 1 point [-]

What do you mean by that? Technically, all that is required is the proper arrangement of transistors.

Comment author: [deleted] 21 September 2014 10:44:32PM 1 point [-]

I mean that the component pieces such as planning algorithms, logic engines, pattern extractors, evolutionary search, etc. have already been worked out, and that there exist implementable designs for combining these pieces together into an AGI. There aren't any significant known unknowns left to be resolved.

Comment author: Azathoth123 23 September 2014 01:47:58AM 3 points [-]

Then where's the AI?

Comment author: [deleted] 23 September 2014 01:59:15AM *  3 points [-]

All the pieces for bitcoin were known and available in 1999. Why did it take 10 years to emerge?

Comment author: FiftyTwo 21 September 2014 12:47:55AM 5 points [-]

[Contrarian thread, special voting rules apply]

Engaging in political processes (and learning how to do so) is a useful thing, and is consistently underrated by the LW consensus.

Comment author: shminux 23 September 2014 08:44:37PM 4 points [-]

Just a reminder, the local meme "politics is the mind killer" is an injunction not against discussing politics, but against using political examples in a non-political argument.

Comment author: FiftyTwo 23 September 2014 10:27:08PM 3 points [-]

Agreed. But there is also a generally negative attitude towards politics.

Comment author: TheAncientGeek 20 September 2014 02:44:53PM *  3 points [-]

[ Please read the OP before voting. Special voting rules apply.]

MWI is wrong, and relational QM is right.

Physicalism is wrong, because of the mind-body problem and other considerations, and dual-aspect neutral monism is right.

STEM types are too quick to reject ethical objectivism. Moreover, moral subjectivism is horribly wrong. I don't know what the right answer is, but it could be some kind of Kantianism or contractarianism.

Arguing to win is good, or to be precise, it largely coincides with truth-seeking.

There is no kind of smart that makes you uniformly good at everything.

Even though philosophy has no established body of facts, it is possible to be bad at philosophy and make mistakes in it. Scientists who try to solve longstanding philosophical problems in their lunch breaks end up making fools of themselves. Philosophy is not broken science.

A physicalistically respectable form of free will is defensible.

Bayes is oversold: quantifying what you haven't first understood is pointless. Being a good rationalist at the day-to-day level has more to do with noticing your own biases and with emotional maturity than with mental arithmetic.

MIRI hasn't made a strong case for AI dangers.

The standard theism/atheism debate is stale, broken, and pointless: people who can't understand metaphysics arguing with people who believe it but can't articulate it.

All epistemological positions boil down to fundamental, unprovable intuitions. Empiricism doesn't escape, because it is based on the intuition that if you can see something, it is really there. STEM types have an overly optimistic view of their epistemology, because they are accelerated past worrying about fundamental issues.

Rationality is more than one thing.

Comment author: ChristianKl 20 September 2014 04:58:11PM 2 points [-]

Too many statements in a single post.

Comment author: polymathwannabe 20 September 2014 05:59:55PM 1 point [-]

There are so many problems with this post I wish I could vote several times.

One example: how can you claim both "A physicalistically respectable form of free will is defensible" and "Physicalism is wrong?"

Comment author: Kaninchen 19 September 2014 04:13:34PM 17 points [-]

[Please read the OP before voting. Special voting rules apply.]

It would be of significant advantage to the world if most people started living on houseboats.

Comment author: DanielLC 20 September 2014 09:11:27PM 1 point [-]

Is there even enough coast for that?

If people didn't live in cities, they'd have to commute more. There would be a large increase in transportation costs.

Comment author: Kaninchen 19 September 2014 04:11:15PM 15 points [-]

[Please read the OP before voting. Special voting rules apply.]

There probably exists - or has existed at some time in the past - at least one entity best described as a deity.

Comment author: FiftyTwo 21 September 2014 12:53:50AM 1 point [-]

Define deity?

Comment author: Risto_Saarelma 19 September 2014 05:05:25AM *  6 points [-]

[Please read the OP before voting. Special voting rules apply.]

You can expect to have about as much success effectively and systematically teaching rationality as you could in effectively and systematically teaching wisdom. Attempts at a systematic rationality curriculum will end up as cargo cultism and hollow ingroup signaling at worst and heuristics and biases research literature scholarship at best. Once you know someone's SAT score, knowing whether they participated in rationality training will give very little additional predictive power on whether they will win at life.

Comment author: John_Maxwell_IV 19 September 2014 08:00:39AM 3 points [-]

I'd like to hear a more substantive argument if you've got one. Do you think there are few general-purpose life skills (e.g. those purportedly taught in Getting Things Done, How to Win Friends and Influence People, etc.)? What's your best evidence for this?

Comment author: Risto_Saarelma 19 September 2014 03:15:49PM 1 point [-]

I think that there is a huge unseen component in life skills where in addition to knowing about a skill, you need to recognize a situation where the skill might apply, remember about the skill, figure out if the skill is really appropriate given what's going on, know exactly how you should apply the skill in that given situation and so on. There isn't really an algorithm you can follow without also constantly reflecting on what is actually going on, and I think that in what basically looks like another instance of Moravec's paradox, the big difficult part is actually in the unconscious situation awareness and the things you can write in a book like GTD and give to people are a tiny offshoot on that.

No solid evidence for this except for the observation that there don't seem to be self-helpy systems for general awesomeness that actually do consistently make people who stick with them more awesome.

Comment author: John_Maxwell_IV 20 September 2014 03:51:24AM 1 point [-]

recognize a situation where the skill might apply, remember about the skill

OK, what if you were to, say, at the end of each day brainstorm situations during the day when skill X could have been useful in order to get better at recognizing them?

There isn't really an algorithm you can follow without also constantly reflecting on what is actually going on

Could meditation be useful for this?

Comment author: [deleted] 19 September 2014 12:28:03PM 1 point [-]

[Please read the OP before voting. Special voting rules apply.]

Homeownership is not a good idea for most people.

Comment author: Ronak 18 September 2014 09:42:29PM 6 points [-]

[Please read the OP before voting. Special voting rules apply.]

The humanities are not only a useful method of knowing about the world; properly interfaced, they ought to be able to significantly speed up science.

(I have a large interval for how controversial this is, so pardon me if you think it's not.)

Comment author: Azathoth123 18 September 2014 11:18:29PM 5 points [-]

Do you mean humanities in the abstract or the people currently occupying humanities departments?

Comment author: Princess_Stargirl 18 September 2014 08:25:17PM 2 points [-]

The United States prison system is a tragedy on par with, or exceeding, the horror of the Soviet gulags. In my opinion the only legitimate reason for incarcerating people is to prevent crime. The USA currently has 7 times the OECD average number of prisoners and crime rates similar to the OECD average. 6/7 of the US penal system population is a little over 2 million people. If we are unnecessarily incarcerating anywhere close to 2 million people right now, then the USA is a morally hellish country.

Note: less than half of the inmates in the USA are there for drug-related charges. It is very close to 50% federally but less at the state level. Immediately pardoning all drug criminals only gets us to 3.5 times the OECD average.

Comment author: Azathoth123 18 September 2014 11:33:08PM 2 points [-]

Is your claim that they're in prison for crimes they didn't commit, or that we should let more crimes go unpunished?

Comment author: Lumifer 19 September 2014 02:17:47AM 5 points [-]

I'm not the OP, but I'll throw a quote into this thread:

There's no way to rule innocent men. The only power any government has is the power to crack down on criminals. Well, when there aren't enough criminals, one makes them. One declares so many things to be a crime that it becomes impossible for men to live without breaking laws.

Comment author: Azathoth123 19 September 2014 05:58:35AM *  2 points [-]

So which crimes would you take off the books and what percent of prisoners would that remove?

Comment author: Lumifer 19 September 2014 02:34:33PM 5 points [-]

We can start with the drug war, things like civil forfeiture, and go on from there. You might be interested in this book.

The problems with the US criminal justice system go much deeper than just the abundance of laws, of course.

Comment author: Azathoth123 19 September 2014 11:20:31PM 2 points [-]

things like civil forfeiture

Civil forfeiture doesn't fill prisons.

You might be interested in this book.

The problem with having too many felonies is not that prisons get filled with people being punished for silly things, it's that the people who do get punished for silly things tend to correlate with the people actively opposing the current administration.

Comment author: Lumifer 20 September 2014 12:41:58AM 1 point [-]

There are a LOT of problems with having too many felonies, but that's a large discussion not quite in the LW bailiwick...

Comment author: Azathoth123 20 September 2014 01:57:00AM 1 point [-]

Agreed, but the discussion was about there supposedly being too many people in prison.

Comment author: shminux 18 September 2014 11:24:46PM 2 points [-]

This seems close to the (liberal) mainstream. Why do you think it is contrarian on LW?

Comment author: Princess_Stargirl 19 September 2014 01:28:19AM 4 points [-]

I do not think most people consider this a problem on the par of the Soviet Gulag. Though possibly I am wrong.

Comment author: Lumifer 19 September 2014 02:14:08AM 3 points [-]

The problem with the Soviet Gulag wasn't so much its size, but rather the whole system it was part of and things which got you sent to it.

Comment author: AABoyles 17 September 2014 04:04:14PM 11 points [-]

The universe we perceive is probably a simulation of a more complex Universe. In a departure from the standard simulation hypothesis, however, the simulation is not one originated by humans. Instead, our existence is simply an emergent property of the physics (and stochasticity) of the simulation.

Comment author: Ronak 18 September 2014 09:50:32PM 1 point [-]

Why? This looks as if you're taking a hammer to Ockham's razor.

Comment author: AABoyles 19 September 2014 01:40:44PM 1 point [-]

In the strictest sense, yes I am. I design, build and test social models for a living (so this may simply be a case of me holding Maslow's Hammer). The universe exhibits a number of physical properties which resemble modeling assumptions. For example, speed is absolutely bounded at c. If I were designing an actual universe (not a model), I wouldn't enforce upper bounds--what purpose would they serve? If I were designing a model, however, boundaries of this sort would be critical to reducing the complexity of the model universe to the realm of tractable computability.

On any given day, I'll instantiate thousands of models. Having many models running in parallel is useful! We observe one universe, but if there's a non-zero probability that the universe is a model of something else (a possibility which Ockham's Razor certainly doesn't refute), the fact that I generate so many models is indicative of the possibility that a super-universal process or entity may be doing the same thing, of which our universe is one instance.

Comment author: btrettel 25 September 2014 05:07:57PM *  1 point [-]

I do think it's useful to use what we know about simulations to inform whether or not we live in one. As I said in my other comment, I don't think a finite speed of light, etc., says much either way, but I do want to note a few things that I think would be suggestive.

If time was discrete and the time step appeared to be a function of known time step limits (e.g., the CFL condition), I would consider that to be good evidence in favor of the simulation hypothesis.

The jury is still out on whether time is discrete, so we can't evaluate the second necessary condition. If time were discrete, this would be interesting and could be evidence for the simulation hypothesis, but it'd be pretty weak. You'd need something further that indicates something about how the algorithm works, like the time step limit, to draw a stronger conclusion.

Another possibility is if some conservation principle were violated in a way that would reduce computational complexity. In the water sprinkler simulations I've run, droplets are removed from the simulation when their size drops below a certain (arbitrary) limit as these droplets have little impact on the physics, and mostly serve to slow down the computation. Strictly speaking, this violates conservation of mass. I haven't seen anything like this in physics, but its existence could be evidence for the simulation hypothesis.
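For readers unfamiliar with it, the CFL condition mentioned above caps the explicit time step by the grid spacing and wave speed; a minimal sketch (the function name and numbers are illustrative, not from any particular code):

```python
def cfl_time_step(dx, wave_speed, courant=0.5):
    """Largest stable explicit time step under the CFL condition:
    dt <= C * dx / u, where C is the Courant number (here 0.5)."""
    return courant * dx / wave_speed

# Halving the grid spacing halves the allowable time step, which is
# one reason finely resolved simulations are expensive.
dt_coarse = cfl_time_step(dx=0.02, wave_speed=340.0)  # ~speed of sound in air, m/s
dt_fine = cfl_time_step(dx=0.01, wave_speed=340.0)
assert dt_fine < dt_coarse
```

The suggestion above is that a universe whose time step tracked a relation like this would look computed rather than fundamental.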

Comment author: btrettel 23 September 2014 08:40:34PM *  2 points [-]

For example, speed is absolutely bounded at c. If I were designing an actual universe (not a model), I wouldn't enforce upper bounds--what purpose would they serve? If I were designing a model, however, boundaries of this sort would be critical to reducing the complexity of the model universe to the realm of tractable computability.

This is not true in general. I've considered a similar idea before, but as a reason to believe we don't live in a simulation (not that I think this is a very convincing argument). I work in computational fluid dynamics. "Low-Mach"/incompressible fluid simulations where the speed of sound is assumed infinite are much more easily tractable than the same situation run on a "high Mach" code, even if the actual fluid speeds are very subsonic. The difference of running time is at least an order of magnitude.

To be fair, it can go either way. The speed of the fluid is not "absolutely bounded" in these simulations. These simulations are not relativistic, and treating them as relativistic would make things more complicated. The speed of acoustic waves, however, is treated as infinite in the low Mach limit. I imagine there are situations in other branches of mathematical physics where treating a speed as infinite (as in the case of acoustic waves) or zero (as in the non-relativistic case) simplifies certain situations. In the end, it seems like a wash to me, and this offers little evidence in favor of or against the simulation hypothesis.

Comment author: AABoyles 24 September 2014 02:03:32PM 1 point [-]

Huh. It never occurred to me that imposing finite bounds might increase the complexity of a simulation, but I can see how that could be true for physical models. Is the assumption you're making in the Low Mach/incompressible fluid models that the speed of sound is explicitly infinite, or is it that the speed of sound lacks an upper bound? (i.e., is there a point in the code where you have to declare something like "sound.speed = infinity"?)

Anyway, I've certainly never encountered any such situation in models of social systems. I'll keep an eye out for it now. Thanks for sharing!

Comment author: Lumifer 24 September 2014 02:50:33PM 2 points [-]

It never occurred to me that imposing finite bounds might increase the complexity of a simulation

As a trivial point, imposing finite bounds means that you can't use the normal distribution, for example :-)
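To illustrate that trivial point: once a quantity has hard bounds, the normal distribution has to be swapped for something like a truncated normal. A toy rejection-sampling sketch (purely illustrative; a real model would use a library routine):

```python
import random

def truncated_gauss(mu, sigma, lo, hi, rng=random.Random(0)):
    """Sample a normal restricted to [lo, hi] by simple rejection.
    Inefficient if the bounds exclude most of the mass, but correct.
    The seeded rng default just makes the example deterministic."""
    while True:
        x = rng.gauss(mu, sigma)
        if lo <= x <= hi:
            return x

samples = [truncated_gauss(0.0, 1.0, -2.0, 2.0) for _ in range(1000)]
assert all(-2.0 <= s <= 2.0 for s in samples)
```

The bounded version is no longer the normal distribution: its tails are gone and its variance shrinks, which is exactly the kind of extra bookkeeping finite bounds impose on a modeler.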

Comment author: btrettel 24 September 2014 02:45:09PM 2 points [-]

Glad you found my post interesting. I found yours interesting as well, as I thought I was the only one who made any argument along those lines.

There's no explicit step where you say the speed of sound is infinite. That's just the net effect of how you model the pressure field. In reality, the pressure comes from thermodynamics at some level. In the low-Mach/incompressible model, the pressure only exists to enforce mass conservation, and in some sense is "junk" (though it still compares favorably against exact solutions). Basically, you do some math to decouple the thermodynamic and "fluctuating" pressure (this is really the only change; the remainder are implications of the change). You end up with a Poisson equation for ("fluctuating") pressure, and this equation lacks the ability to take into account finite pressure/acoustic wave speeds. The wave speed is effectively infinite.

To be honest, I need to read papers like this to gain a fuller appreciation of all the implications of this approximation. But what I describe is accurate if lacking in some of the details.

In some ways, this does make things more complicated (pressure boundary conditions being one area). But in terms of speed, it's a huge benefit.

Here's another example from my field: thermal radiation modeling. If you use ray tracing (like 3D rendering) then it's often practical to assume that the speed of light is infinite, because it basically is relative to the other processes you are looking at. The "speed" of heat conduction, for example, is much slower. If you used a finite wave speed for the rays then things would be much slower.
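The "effectively infinite wave speed" of a Poisson equation can be seen in a toy 1D solve: an elliptic equation couples every grid point to every other, so a local source shows up everywhere after a single solve. A schematic sketch, not taken from any real CFD code:

```python
def solve_poisson_1d(rhs, n_iter=20000):
    """Jacobi iteration for u'' = rhs on a unit interval with u = 0 at
    both ends. Because the equation is elliptic, the solution at every
    point depends on the source everywhere at once -- the 'infinite
    wave speed' of the low-Mach pressure equation."""
    n = len(rhs)
    h = 1.0 / (n + 1)
    u = [0.0] * n
    for _ in range(n_iter):
        # u[i] = (u[i-1] + u[i+1] - h^2 * rhs[i]) / 2, zero at the walls
        u = [0.5 * ((u[i - 1] if i > 0 else 0.0) +
                    (u[i + 1] if i < n - 1 else 0.0) -
                    h * h * rhs[i]) for i in range(n)]
    return u

# A point source in the middle produces a nonzero response at every node:
# no finite propagation speed, unlike a wave (hyperbolic) equation.
rhs = [0.0] * 31
rhs[15] = -1.0
u = solve_poisson_1d(rhs)
assert all(v > 0 for v in u)
```

A hyperbolic (finite-wave-speed) discretization would instead need the CFL-limited time stepping discussed elsewhere in this thread, which is the speed penalty the low-Mach approximation avoids.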

Comment author: [deleted] 17 September 2014 04:19:15PM *  3 points [-]

I'm upvoting top-level comments which I think are in the spirit of this post but I personally disagree with (in the case of comments with several sentences, if I disagree with their conjunction), downvoting ones I don't think are in the spirit of this post (e.g. spam, trolling, views which are clearly not contrarian either on LW or in the mainstream), and leaving alone ones which are in the spirit of this post but I already agree with. Is that right?

What about comments I'm undecided about? I'm upvoting them if I consider them less likely than my model of the average LWer does and leaving them alone otherwise. Is that right?

Comment author: shminux 17 September 2014 09:56:29PM 3 points [-]

I interpret the intention as "upvote serious ones you disagree with, downvote trolls, ignore those you agree with". In other words, you are not judging what you think LW finds contrarian; you are reporting whether you agree with the views posters perceive as contrarian, not penalizing people for misjudging what is contrarian.

Hopefully this thread is a useful tool for figuring out which views are the most out of the LW mainstream, but still are taken seriously by the community. 10+ upvotes would probably be in the ballpark.

Comment author: scientism 16 September 2014 08:35:45PM 32 points [-]

[Please read the OP before voting. Special voting rules apply.]

Superintelligence is an incoherent concept. Intelligence explosion isn't possible.

Comment author: D_Malik 16 September 2014 09:16:11PM 9 points [-]

How smart does a mind have to be to qualify as a "superintelligence"? It's pretty clear that intelligence can go a lot higher than current levels.

What do you predict would happen if we uploaded Von Neumann's brain onto an extremely fast, planet-sized supercomputer? What do you predict would happen if we selectively bred humans for intelligence for a couple million years? "Impractical" would be understandable, but I don't see how you can believe superintelligence is "incoherent".

As for "Intelligence explosion isn't possible", that's a lot more reasonable, e.g. see the entire AI foom debate.

Comment author: Lalartu 20 September 2014 02:58:44PM 1 point [-]

Well, I will predict this

would happen if we uploaded Von Neumann's brain onto an extremely fast, planet-sized supercomputer

Very bored Von Neumann.

if we selectively bred humans for intelligence for a couple million years

People that are very good at solving tests which you use to measure intelligence.

Comment author: spxtr 16 September 2014 04:46:44PM 34 points [-]

[Please read the OP before voting. Special voting rules apply.]

Feminism is a good thing. Privilege is real. Scott Alexander is extremely uncharitable towards feminism over at SSC.

Comment author: Larks 19 September 2014 12:36:57AM 4 points [-]

According to the 2013 LW survey, when asked their opinion of feminism on a scale from 1 (low) to 5 (high), the mean response was 3.8, and social justice got a 3.6. So it seems that "feminism is a good thing" is actually not a contrarian view.

If I might speculate for a moment, it might be that LW is less feminist than most places, while still having an overall pro-feminist bias.

Comment author: epursimuove 26 September 2014 03:00:24AM 1 point [-]

If by most places you're talking about the world (or Western/American world) in general, that's pretty clearly false. The considerable majority of Americans reject the feminist label, for example. If you're talking about internet communities with well-educated members, then it probably is true.

Comment author: Ronak 17 September 2014 02:36:52AM *  9 points [-]

Do you mind telling me how you think he's being uncharitable? I agree mostly with your first two statements. (If you don't want to put it on this public forum because hot debated topic etc I'd appreciate it if you could PM; I won't take you down the 'let's argue feminism' rabbit-hole.)

(I've always wondered if there was a way to rebut him, but I don't know enough of the relevant sciences to try and construct an argument myself, except in syllogistic form. And even then, it seems his statements on feminists are correct.)

Comment author: gattsuru 19 September 2014 04:40:17PM 3 points [-]

Do you mind telling me how you think he's being uncharitable?

For a very quick example, see this Tumblr post. Mr. Alexander finds an example of a neoreactionary leader trying to be mean to a transgender woman inside the NRx sphere, and then shows the vast majority response of (non-vile) neoreactionaries to at least be less exclusionary than that, even though they have ideological issues with the diagnosis or treatment of gender dysphoria. Then he describes a feminist tumblr which develops increasingly misgendering and rude ways to describe disagreeing transgender men.

I don't know that this is actually /wrong/. All the actual facts are true, and if anything understate their relevant aspects -- if anything, I expect Ozy's understated the level of anti-transmale bigotry floating around the 'enlightened' side of Tumblr. I don't find NRx very persuasive, but there are certainly worse things that could be done than using it as a blunt "you must behave at least this well to ride" test. I don't know that feminism really needs external heroes: it's certainly a large enough group that it should be able to present internal speakers with strong and well-grounded beliefs. And I can certainly empathize with holding feminists to a higher standard than neoreactionaries hold themselves.

The problem is that it's not very charitable. Scott's the person that's /come up/ with the term "Lizardman's Constant" to describe how a certain percentage of any population will give terrible answers to really obvious questions. He's a strong advocate of steelmanning opposing viewpoints, and he's written an article about the dangers of only looking at the worst representatives of a group.

But he's looking at a viewpoint shown primarily in the <5% margin feminist tumblr, and comparing them to a circle of the more polite neoreactionaries (damning with faint praise as that might be, still significant), and, uh, I'm not sure that we should be surprised if the worst of the best said meaner things than the best of the worst.

I'm not sure he /needs/ to be charitable, again -- feminism should have its own internal speakers, I think mainstream modern feminism could use better critics than whoever's on Fox News next, so on -- but it's an understandable criticism.

((Upvoting the thread starter, but more because one and two are mu statements; either closed questions or not meaningful. Weakly agree on third.))

Comment author: Jiro 21 September 2014 05:16:39AM 1 point [-]

Being 5% of the group doesn't mean they are 5% of the influence. The loudest 5% may get to set the agenda of the remaining 95% if the remaining ones are willing to go along with things they don't particularly care about, but don't oppose enough to make these things deal-breakers either.

Comment author: Azathoth123 21 September 2014 08:43:06PM 2 points [-]

It also helps if the 5% have arguments for their positions.

Comment author: spxtr 17 September 2014 02:53:45AM 5 points [-]

Fortunately, LW is not an appropriate forum for argument on this subject, but for an example of an uncharitable post, see Social Justice and Words, Words, Words.

Comment author: Azathoth123 17 September 2014 01:43:52AM *  7 points [-]

How would you define "privilege"?

Comment author: IlyaShpitser 17 September 2014 01:51:16AM *  11 points [-]

Easier difficulty setting for your life in some context through no fault or merit of your own.

Comment author: TheAncientGeek 17 September 2014 03:50:49PM 1 point [-]

You really need riders to the effect that privilege of an objectionable kind is unrelated to achievement or intrinsic abilities.

Comment author: Azathoth123 18 September 2014 12:43:03AM 3 points [-]

The problem is that most of the examples SJWs object to are in fact related to achievement or intrinsic abilities.

Comment author: Azathoth123 17 September 2014 02:01:33AM 9 points [-]

So would you describe someone tall as having "height privilege" because they're better at basketball?

Comment author: Prismattic 17 September 2014 05:38:40AM *  25 points [-]

I'd argue that height privilege (up to a point, typically around 6'6") is a real thing, having nothing to do with being good at sports. There is a noted experiment, which my google-fu is currently failing to turn up, in which participants were shown a video of an interview between a man and a woman. In one group, the man was standing on a footstool behind his podium, so that he appeared markedly taller than the woman. In the other group, the man was standing in a depression behind his podium, so that he appeared shorter. The content of the interview was identical.

Participants rated the man in the "taller" condition as more intelligent and more mature than the same man in the "shorter" condition. That's height privilege.

Comment author: jkaufman 17 September 2014 11:50:20PM 6 points [-]

There's also a large established correlation between height and income, though not enough to completely rule out a potential common cause like "good genes" or childhood nutrition.

Comment author: spxtr 17 September 2014 02:41:36AM 3 points [-]

This is a good definition. In particular, "Anti-oppressionists use "privilege" to describe a set of advantages (or lack of disadvantages) enjoyed by a majority group, who are usually unaware of the privilege they possess. ... A privileged person is not necessarily prejudiced (sexist, racist, etc) as an individual, but may be part of a broader pattern of *-ism even though unaware of it."

No, this is not a motte.

Comment author: ChristianKl 19 September 2014 09:03:35PM 4 points [-]

Why focus only on specific majority groups and thereby ignore things like men in domestic violence issues getting a lot less help from society than women?

Nearly everyone has some advantages and disadvantages. It's often not helpful to conflate that huge bag of advantages and disadvantages into a single variable.

Comment author: shminux 17 September 2014 05:14:27PM *  8 points [-]

Why the "majority group" qualifier? Privilege has been historically associated with minorities, like aristocracy.

Comment author: Azathoth123 17 September 2014 03:02:05AM 8 points [-]

Anti-oppressionists use "privilege" to describe a set of advantages (or lack of disadvantages) enjoyed by a majority group

Does it have to be a majority group? For example, does this compared with this count as an example of "black privilege"? Would you describe the fact that some people are smarter (or stronger) than others as "intelligence privilege" (or "strength privilege")?

Comment author: Prismattic 17 September 2014 05:33:32AM 4 points [-]

That's in the bailey, because of "enjoyed by a majority group."

Comment author: VAuroch 17 September 2014 04:14:11AM *  3 points [-]

Like a few others, I agree with the first two but emphatically disagree with the last. And if you were right about it, I'd expect Ozy to have taken Scott to task about it, and him to have admitted to being somewhat wrong and updated on it.

EDIT: This has, in fact, happened.

Comment author: whales 17 September 2014 09:20:52AM *  5 points [-]

See this tumblr post for an example of Ozy expressing dissatisfaction with Scott's lack of charity in his analysis of SJ (specifically in the "Words, Words, Words" post). My impression is that this is a fairly regular occurrence.

You might be right about him not having updated. If anything it seems that his updates on the earlier superweapons discussion have been reverted. I'm not sure I've seen anything comparably charitable from him on the subject since. I don't follow his thoughts on feminism particularly closely, so I could easily be wrong (and would be glad to find I'm wrong here).

Comment author: VAuroch 17 September 2014 08:37:59PM 6 points [-]

OK, those things have indeed happened, to some degree. Above comment corrected.

I still don't understand what is uncharitable about the Wordsx3 post specifically. It accurately describes the behavior of a number of people I know (as in, have met, in person, and interacted with socially, in several cases extensively in a friendly manner), and I have no reason to consider them weak examples of feminist advocacy and every reason to consider them typical (their demographics match the stereotype). I have carefully avoided ending up on the receiving end of it, because friends of mine have honestly challenged aspects of this kind of thing and been ostracized for their trouble.

Comment author: [deleted] 17 September 2014 06:14:22PM *  2 points [-]

There's something wrong with the first link (I guess you typed the URL on a smartphone autocorrecting keyboard or similar).

EDIT: I think this is the correct link.

Comment author: whales 17 September 2014 06:27:23PM 2 points [-]

Yeah, that happened when I edited a different part from my phone. Thanks, fixed.

Comment author: shminux 16 September 2014 08:26:42PM *  11 points [-]

Yes, Yes, No. Still upvoting, because "Scott Alexander" and "uncharitable" in the same sentence does not compute.

Comment author: spxtr 16 September 2014 10:09:45PM 7 points [-]

I consider him a modern G.K. Chesterton. He's eloquent, intelligent, and wrong.

Comment author: Prismattic 17 September 2014 01:31:02AM 2 points [-]

I agree with claim 1 for some definitions of feminism and not for others. I agree with claim 2. I think that Scott would agree with claim 1 (for some definitions) and with claim 2 as well, so I disagree with claim 3.

Comment author: Jiro 16 September 2014 06:45:33PM 2 points [-]

Can you defend these statements?

Comment author: spxtr 16 September 2014 08:15:27PM 4 points [-]

I can, but I don't want to fall into that inferential canyon.

Comment author: Coscott 20 September 2014 05:35:14PM 1 point [-]

I think that if you actually can defend them, it might be worth it to go through the canyon. Inferential canyons are a lot easier to cross when your targets are aware of their existence and are willing and able to discuss responsibly.

("worth it" is of course relative to other ways you discuss with strangers on the internet}

Comment author: moridinamael 16 September 2014 04:43:53PM 29 points [-]

[Please read the OP before voting. Special voting rules apply.]

Buying a lottery ticket every now and then is not irrational. Unless you have thoroughly optimized the conversion of every dollar you own into utility-yielding investments and expenses, the exposure to large positive tail risk netted by spending a few dollars on lottery tickets can still be rational.

Phrased another way, when you buy a lottery ticket you aren't buying an investment, you're buying a possibility that is not available otherwise.

Comment author: DanielLC 18 September 2014 02:49:46AM 2 points [-]

If one lottery ticket is worthwhile, why not two? Are you assigning a nonlinear value to the probability of winning the lottery? That causes a number of problems.

Comment author: moridinamael 18 September 2014 01:42:00PM *  3 points [-]

At the risk of looking even more like an idiot: Buying one $1 lottery ticket earns you a tiny chance - 1 in 175,000,000 for the Powerball - of becoming absurdly wealthy. The Powerball gets as high as $590,500,000 pretax. NOT buying that one ticket gives you a chance of zero. So buying one ticket is "infinitely" better than buying no tickets. Buying more than one ticket, by comparison, doesn't make a difference.

I like to play with the following scenario. A LessWrong reader buys a lottery ticket. They almost certainly don't win. They have one dollar less to donate to MIRI and because they're not wealthy they may not have enough wealth to psychologically justify donating anything to MIRI anyway. However, in at least one worldline, somewhere, they win a half a billion dollars and maybe donate $100,000,000 to MIRI. So from a global humanity perspective, buying that lottery ticket made the difference between getting FAI built and not getting it built. The one dollar spent on the ticket, in comparison, would have had a totally negligible impact.

I fully realize that the number of universes (or whatever) where the LessWrong reader wins the lottery is so small that they would be "better off" keeping their dollar according to basic economics, but the marginal utility of one extra dollar is basically zero.

edit: Digging myself in even deeper, let me attempt to simplify the argument.

You want to buy a Widget. The difference in net utility, to you, between owning a Widget and not owning a Widget is 3^3^3^3 utilons. Widgets cost $100,000,000. You have no realistic means of getting $100,000,000 through your own efforts because you are stuck in a corporate drone job and you have lots of bills and a family relying on you. So the only way you have of ever getting a Widget is by spending negligible amounts of money buying "bad" investments like lottery tickets. It is trivial to show that buying a lottery ticket is rational in this scenario: (Tiny chance) x (Absurdly, unquantifiably vast utility) > (Certain chance) x ($1).

Replace Widget with FAI and the argument may feel more plausible.

Comment author: warbo 22 September 2014 11:57:14AM 2 points [-]

Buying one $1 lottery ticket earns you a tiny chance - 1 in 175,000,000 for the Powerball - of becoming absurdly wealthy. NOT buying that one ticket gives you a chance of zero.

There are ways to win a lottery without buying a ticket. For example, someone may buy you a ticket as a present, without your knowledge, which then wins.

So buying one ticket is "infinitely" better than buying no tickets.

No, it is much more likely that you'll win the lottery by buying tickets than by not buying tickets (assuming you're unlikely to be gifted a ticket), but the cost of being gifted a ticket is zero, which makes not buying tickets an "infinitely" better return on investment.

Comment author: DanielLC 18 September 2014 04:34:09PM 5 points [-]

So buying one ticket is "infinitely" better than buying no tickets.

So your utility function is nonlinear with respect to probability. You don't use expected utility. It results in certain inconsistencies. This is discussed in the article on the Allais paradox, but I'll give a lottery example here.

Suppose I offer you a choice between paying one dollar and getting a one in a million chance of winning $500,000, and paying two dollars and getting a one in one million chance of winning $500,000 and a one in two million chance of winning $500,001. You figure that what's basically a 0.00015% chance of winning vs. a 0.0001% chance isn't worth paying another dollar for, so you just pay the one dollar.

On the other hand, suppose I only offer you the first option, but, once you see if you've won, you get another chance. If you win, you don't really want another lottery ticket, since it's not a big deal anymore. So you buy a ticket, and if you lose, you buy another ticket. This results in a 0.0001% chance of ending up with $499,999, a 0.00005% chance of ending up with $499,998, and a 99.99985% chance of ending up with -$2. This is exactly the same set of probabilities as you had for the second option before.

The one dollar spent on the ticket, in comparison, would have had a totally negligible impact.

No it would not. Or at least, it's highly unlikely for you to know that.

Suppose MIRI has their probability of success increased by 50 percentage points if they get a 100 million dollar donation. This means that, if 100 million people all donate a dollar, their probability of success goes up by 50 percentage points. Each successive one will change the probability by a different amount, but on average, each donation will increase the chance of success by one in 200 million. Furthermore, it's expected that the earlier donations would make a bigger difference, due to the law of diminishing returns. This means that donating one dollar improves MIRI's probability of success by more than one in 200 million, and is therefore better than getting a one in 100 million chance of donating 100 million dollars.
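That marginal-impact comparison can be sketched numerically. The 50-percentage-point and $100M figures are the comment's hypotheticals, not real estimates, and the 1-in-175M odds are the Powerball figure cited earlier in the thread:

```python
# Hypothetical: a $100M donation raises the probability of success by 50 points.
total_donation = 100_000_000
total_effect = 0.50

# Average effect of one dollar donated directly; with diminishing returns,
# the first dollars are worth at least this much.
direct = total_effect / total_donation        # 1 in 200 million

# Expected effect of gambling the dollar at Powerball odds and donating
# the (assumed) $100M prize if it hits.
lottery = (1 / 175_000_000) * total_effect    # roughly 1 in 350 million

print(direct > lottery)  # True: the direct donation wins in expectation
```

At the actual Powerball odds the direct donation beats the gamble even before invoking diminishing returns; at the comment's 1-in-100-million odds the two are exactly tied on average, and diminishing returns breaks the tie in favor of donating.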

Even if MIRI does end up needing a minimum amount of money or something and becomes an exception to the law of diminishing returns, they know more about their financial situation, and since they're dealing with large amounts of money all at once, they can be more efficient about it. They can make a bet precisely tailored to their interests and with odds that are more fair.

Comment author: Lumifer 18 September 2014 03:24:56PM 4 points [-]

So buying one ticket is "infinitely" better than buying no tickets.

You're looking at the (potential) benefits and ignoring the costs. The costs are not negligible: "Thirteen percent of US citizens play the lottery every week. The average household spends around $540 annually on lotteries and poor households spend considerably more than the average." (source).

Buying more than one ticket, comparably, doesn't make a difference.

Buying a second ticket doubles your chances, obviously.

A LessWrong reader buys a lottery ticket ... in at least one worldline, somewhere, they win a half a billion dollars

For each timeline where you buy a lottery ticket there is one where you don't. Under MWI you don't make any choices -- you choose everything, always.

the marginal utility of one extra dollar is basically zero

You've never been poor, have you? :-/

It is trivial to show that buying a lottery ticket is rational in this scenario

It is just as trivial to show that you should spend all your disposable income and maybe more on lottery tickets in this scenario.

Comment author: moridinamael 18 September 2014 04:09:33PM 1 point [-]

You're looking at the (potential) benefits and ignoring the costs. The costs are not negligible: "Thirteen percent of US citizens play the lottery every week. The average household spends around $540 annually on lotteries and poor households spend considerably more than the average." (source).

I'm only commenting to the rationality of one individual buying one ticket, not the ethics of the existence of lotteries.

Buying a second ticket doubles your chances, obviously.

Buying one ticket takes you from zero to one, buying two tickets takes you from one to two. 1/0 = infinity, 2/1 = 2. Buying anything more than 1 ticket has sharply diminishing utility. I realize this is a somewhat silly line of argument, so I'm not going to sink any more energy into defending it.

For each timeline where you buy a lottery ticket there is one where you don't. Under MWI you don't make any choices -- you choose everything, always.

I don't think we understand each other on this point. I was referring not to choosing, just winning. And the measure of the winning universes is a tiny fraction of all universes. But that doesn't matter when the utility of winning is sufficiently large. And the chance of a given individual buying a ticket isn't 50% in any meaningful quantum-mechanical sense, so "For each timeline where you buy a lottery ticket there is one where you don't" isn't true.

You've never been poor, have you? :-/

No, and I wouldn't recommend that a poor person buy lottery tickets. My original claim was that buying lottery tickets can be rational, not that it is rational in the general case.

It is just as trivial to show that you should spend all your disposable income and maybe more on lottery tickets in this scenario.

That's true. People also say that you should donate all your disposable income to MIRI, or to efficient charities, for exactly the same reasons, and I don't do those things for the same reason that I don't spend all my money on lottery tickets - I'm a human. My line of argument only applies when you want a Widget and have no other way of affording it.

I don't really feel strongly enough about this to continue defending it, it's just that I'm quite sure I'm right in the details of my argument and would welcome an argument that actually changes my mind / convinces me I'm wrong.

Comment author: Lumifer 18 September 2014 04:47:01PM 4 points [-]

<shrug>

I treat buying lottery tickets as buying a license to daydream. Once you realize you don't need a license for that... :-)

Comment author: Elo 16 September 2014 11:09:46PM 2 points [-]

Disagree, because the cost of the possibility is too high.

Comment author: Prismattic 17 September 2014 01:37:00AM *  1 point [-]

I agree with the first sentence, but I'm not sure if our reasoning is the same. Here's mine: If humans were perfectly rational overall, buying a lottery ticket would never make sense. But we aren't. I think it's rational to buy a lottery ticket say, every six months, and then not check if it's a winner for the six months. Just as humans seem to enjoy the anticipation of an upcoming vacation more than the actual vacation, the human brain can get utility from the hope that the ticket might be a winner, and 6 months of an (irrational, but so what?) hope far outweigh the one day of disappointment and one dollar lost when you check the ticket and it hasn't won.

Comment author: blacktrance 16 September 2014 10:13:18PM *  7 points [-]

[Please read the OP before voting. Special voting rules apply.]

There is nothing morally wrong about eating meat, and vegetarianism/veganism aren't morally superior to meat-eating.

Comment author: Lumifer 16 September 2014 11:45:57PM 6 points [-]

That looks like a mainstream position, not contrarian.

Comment author: blacktrance 17 September 2014 12:07:17AM *  3 points [-]

It's contrarian among LWers, which is what the OP asked for.

Comment author: Lumifer 17 September 2014 04:03:36AM 7 points [-]

Is that so? I know there are some vocal vegetarians on LW, I am not sure that makes them the local mainstream.

Comment author: Prismattic 17 September 2014 05:20:15AM 7 points [-]

I think there are more LW members who are meat-eating and feel hypocritical/guilty about it than there are actual vegetarians.

Comment author: Lumifer 17 September 2014 02:44:47PM 7 points [-]

Looking at the 2013 poll:

VEGETARIAN:
No: 1201, 73.4%
Yes: 213, 13.0%
Did not answer: 223, 13.6%

I can't speak to the feeling of guilt, but vegetarians are a small minority here.

Comment author: Elo 16 September 2014 11:07:46PM 1 point [-]

Agree (mostly; not vegetarian). Would you prefer to eat a bacterially-produced meat product, assuming it could be made to taste the same?

Comment author: blacktrance 16 September 2014 11:38:44PM 4 points [-]

If its price was less than or equal to the price of normal meat, I'd buy it, otherwise, I'd stick with normal meat.

Comment author: Elo 17 September 2014 06:22:30AM 1 point [-]

I suspect it will end up being cheaper, because producing it would be faster than raising an animal through an entire life-cycle...

Comment author: moridinamael 16 September 2014 04:47:13PM 16 points [-]

[Please read the OP before voting. Special voting rules apply.]

Fossil fuels will remain the dominant source of energy until we build something much smarter than ourselves. Efforts spent on alternative energy sources are enormously inefficient and mostly pointless.

Related claim: the average STEM-type person has no gut-level grasp of the quantity of energy consumed by the economy and this leads to popular utopian claims about alternative energy.

Comment author: RomeoStevens 16 September 2014 06:47:30PM 4 points [-]

It isn't very hard to do a little digging here. http://en.wikipedia.org/wiki/Electricity_generation#mediaviewer/File:Annual_electricity_net_generation_in_the_world.svg

China's aggressive nuclear strategy seems reasonable.

Comment author: moridinamael 16 September 2014 08:20:47PM 6 points [-]

Not exactly sure what you mean by "digging." I already comprehend the quantities of energy being consumed because of my education and experience in related fields, it's the average person who I think does not, since I hear them saying things about how a small increase in solar panel efficiency is going to completely and rapidly "cure us of our fossil fuel addiction."

Also, your figure only reflects electricity generation, not total energy consumption, which is a much higher figure. Currently, non-hydrocarbon fuel sources for transportation are very fringe.

The truth is that the price of fossil fuels has always fluctuated and will continue to fluctuate in accord with simple supply-demand economics for a long time to come; the cheaper it gets to make energy via alternative methods, the cheaper fossil fuels will become to undercut those alternative sources.

Comment author: ChristianKl 19 September 2014 08:03:22PM 1 point [-]

I hear them saying things about how a small increase in solar panel efficiency is going to completely and rapidly "cure us of our fossil fuel addiction."

We have seen solar panel efficiency roughly double every 7 years. That's not what I would call a "small increase".

Comment author: moridinamael 19 September 2014 08:13:25PM 2 points [-]

Even if solar panels were 100% efficient it would not change the overall picture very much. Solar panels are expensive and do not use space efficiently.

Comment author: ChristianKl 20 September 2014 01:31:57AM 1 point [-]

By "efficiency" I meant how much energy you get per dollar: the amount you pay per kilowatt-hour has halved roughly every 7 years over the last two decades.

Space on top of most buildings is unused and there are huge deserts that aren't used.
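For scale, the claimed trend compounds quickly. A minimal sketch, taking the 7-year doubling in energy-per-dollar (equivalently, halving in cost per kilowatt-hour) at face value:

```python
# Compound effect of cost-per-kWh halving every 7 years (the comment's claim).
halving_period = 7   # years per halving
horizon = 20         # "over the last two decades"

factor = 2 ** (horizon / halving_period)  # total cost-reduction factor
print(round(factor, 1))  # 7.2 -- roughly 7x cheaper over two decades
```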

Comment author: Azathoth123 20 September 2014 01:58:30AM 1 point [-]

Does that include the subsidies many governments have been providing to solar?

Comment author: RomeoStevens 17 September 2014 12:55:38AM 8 points [-]

I looked through the numbers and the trend line. I updated in your direction. Even nuclear can't make a big dent without true mass production of reactors, which almost certainly will not happen.

Comment author: VAuroch 17 September 2014 05:13:52AM 1 point [-]

Provably-secure computing is undervalued as a mechanism for guaranteeing Friendliness from an AI.

Comment author: somnicule 17 September 2014 11:42:33AM 5 points [-]

I'm not sure what you mean by provably-secure, care to elaborate?

It sounds like it might possibly be required and is certainly not sufficient.

Comment author: TheMajor 16 September 2014 06:56:03PM 5 points [-]

[Please read the OP before voting. Special voting rules apply.]

Somewhere between 1950 and 1970 too many people started studying physics, and now the community of physicists has entered a self-sustaining state where writing about other people's work is valued much, much more than forming ideas. Many modern theories (string theory, AdS/CFT correspondence, renormalisation of QFT) are hard to explain because they do not consist of an idea backed by a mathematical framework but solely of this mathematical framework.

Comment author: lmm 17 September 2014 12:35:16PM 1 point [-]

Agree with the first half, disagree with the second

Comment author: Ixiel 16 September 2014 12:19:38PM *  11 points [-]

English has a pronoun that can be used for either gender and, as an accident of history not some hidden agenda, said pronoun in English is "he/him/&c."

Edited: VAuroch is the best kind of correct on "neuter" pronouns. Changed, though that might make a view less controversial than I thought (all but 2 readers agree, really?) even less so :)

Comment author: VAuroch 17 September 2014 04:45:24AM 3 points [-]

I consider this an incoherent claim. "A neuter pronoun", inherently, is one that can be applied to individuals regardless of gender (actual or grammatical). That's what people want when they wish English had a neuter pronoun. 'He/him/his' is not such a pronoun. "They/them/their" is.

Comment author: Ixiel 17 September 2014 11:05:58AM 6 points [-]

Nope. "Of all the men and women here, one will prove his worth" is grammatical and does not imply a man IMO. I'm not defining myself right of course, just clarifying why my contrarian claim is coherent.

Comment author: VAuroch 18 September 2014 03:48:31AM 2 points [-]

That was historically true, but many women and nonbinary people disagree with the statement that it is still true. And it was never neuter; it used to be the case that using male pronouns for an unspecified person was grammatically valid.

Comment author: Ixiel 18 September 2014 10:55:35AM 1 point [-]

You are exactly right on technical use of "neuter." Fixed, and thank you.

What is a nonbinary person in the sense you are using it, apart from a subset of non-women? I can't get use from context. Just for curiosity, and probably off topic so pm if exactly one person cares.

If I thought everybody agreed it wouldn't be contrarian now would it?

Comment author: VAuroch 19 September 2014 05:03:47AM *  1 point [-]

Nonbinary people consider themselves neither male nor female, both male and female, male and female individually but at different times, or any other vector combination of genders besides {1,0} and {0,1}; naturally, they are all transgender. They're fairly uncommon, largely because the idea of identifying as nonbinary is not available to the vast majority of people and would be stigmatized if they did choose to adopt it.

Comment author: Ixiel 19 September 2014 11:38:58AM 1 point [-]

Huh, interesting. I had never heard of that, thank you.

Comment author: buybuydandavis 16 September 2014 07:51:43AM *  22 points [-]

[Please read the OP before voting. Special voting rules apply.]

Utilitarianism is a moral abomination.

Comment author: polymathwannabe 16 September 2014 02:49:31PM *  1 point [-]

I am very interested in this.

  • Exactly what is repugnant about utilitarianism? (Moi, I find that it leads to favoring torture over 3^^^3 specks, which is beyond facepalming; I'd like to hear your view.)

  • I guess the moral assumptions based on which you condemn utilitarianism are the same ones you would propose instead. What moral theory do you espouse?

Comment author: buybuydandavis 19 September 2014 08:19:34AM *  13 points [-]

Exactly what is repugnant about utilitarianism?

It's inhuman, totalitarian slavery.

Islam and Christianity are big on slavery, but it's mainly a finite list of do's and don'ts from a Celestial Psychopath. Obey those, and you can go to a movie. Take a nap. The subjugation is grotesque, but it has an end, at least in this life.

Not so with utilitarianism. The world is a big machine that produces utility, and your job is to be a cog in that machine. Your utility is 1 seven billionth of the equation - which rounds to zero. It is your duty in life to chug and chug and chug like a good little cog without any preferential treatment from you, for you or anyone else you actually care about, all through your days without let.

And that's only if you don't better serve the Great Utilonizer ground into a human paste to fuel the machine.

A cog, or fuel. Toil without relent, or harvest my organs? Which is less of a horror?

Of course, some others don't get much better consideration. They, too, are potential inputs to the great utility machine. Chew up this guy here, spit out 3 utilons. A net increase in utilons! Fire up the woodchipper!

But at least one can argue that there is a net increase of utilons. Somebody benefited. And whatever your revulsion at torture to avoid dust specks, hey, the utilon calculator says it's a net plus, summed over the people involved.

No, what I object to is having a believer who reduces himself to less than a slave, to raw materials for an industrial process, held up as a moral ideal. It strikes me as even more grotesque and more totalitarian than the slavery lauded by the monotheisms.

Comment author: gjm 19 September 2014 05:10:13PM 5 points [-]

I disagree, but my reasons are a little intricate. I apologize, therefore, for the length of what follows.

There are at least three sorts of questions you might want to use a moral system to answer. (1) "Which possible world is better?", (2) "Which possible action is better?", (3) "Which kind of person is better?". Many moral systems take one of these as fundamental (#1 for consequentialist systems, #2 for deontological systems, #3 for virtue ethics) but in practice you are going to be interested in answers to all of them, and the actual choices you need to make are between actions, not between possible worlds or characters.

Suppose you have a system for answering question 1, and on a given occasion you need to decide what to do. One way to do this is by choosing the action that produces the best possible world (making whatever assumptions about the future you need to), but it isn't the only way. There is no inconsistency in saying "Doing X will lead to a better world, but I care about my own happiness as well as about optimizing the world so I'm going to do Y instead"; that just means that you care about other things besides morality. Which pretty much everyone does.

(The same actually applies to systems that handle question 2 more directly. There is no inconsistency in saying "The gods have commanded that we do X, but I am going to do Y instead because it's easier". Though there might be danger in it, if the gods are real.)

Many moral systems have the property that if you follow them and care about nothing but morality then your life ends up entirely governed by that system, and your own welfare ends up getting (by everyday standards) badly neglected. If this is a problem, it is a problem with caring about nothing but morality, not a problem with utilitarianism or (some sorts of) divine command theory or whatever.

A moral system can explicitly allow for this; e.g., a rule-based system that tells you what you may and must do can simply leave a lot of actions neither forbidden nor compulsory, or can command you to take some care of your own welfare. A consequentialist system can't do this directly -- what sort of world is better shouldn't depend on who's asking, so if you decide your actions solely by asking "what world is best?" you can't make special allowances for your own interest -- but so what? You can take utilitarianism as your source of answers to moral questions, and then explicitly trade off moral considerations against your own interests in whatever way you please. (And utilitarianism doesn't tell you you mustn't. It only tells you that if you do that you will end up with a less-than-optimal world, but you knew that already.)

A utilitarian doesn't have to see their job as being a cog in the Great Utility Machine of the world. They can see their job however they please. All that being a utilitarian means is that when they come to ask a moral question, looking at the consequences and comparing utility is how they do it. Whether they then go ahead and maximize utility is a separate matter.

So, how should a utilitarian look at someone who cares about nothing but (utilitarian) morality -- as a "moral ideal" or a grotesquely subjugated slave or what? That's up to them, and utilitarianism doesn't answer the question. (In particular, I'm not aware of any reason to think that considering such a person a "moral ideal" is a necessary part of maximizing utility.) It might, I suppose, be nice to have a moral system with the property that a life that's best-according-to-that-system is attractive and nice to think about; but it would also be nice to have a physical theory with the property that if it's true then we all get to live happily for ever, and a metaphysics with the property that it confirms all our intuitions about the universe; and, in each case, adopting a theory on those grounds probably won't work out well. Likewise, I suggest, for morality.

As for your rhetoric about machines and industrial processes: I don't think "large-scale" is at all the same thing as "industrial". Imagine, if you will, someone who would by admired by the Buddhist or Christian moral traditions, who is filled with love and compassion for everyone s/he sees and works hard to make their lives better even at great personal cost. Now expand this person's awareness and compassion to encompass everyone in the world. What you get is pretty close to the "grotesquely subjugated" utilitarian saint, but there's nothing machine-like or industrial about them: they do what they do out of an intensely personal awareness of everyone's welfare or suffering. Their life might still be subjugated or grotesque, but that has nothing to do with industrial machinery.

You might want to protest that I'm cheating: that it's wrong to call someone a utilitarian if they consider anything other than utility when making decisions. I think this would be a bit like some theists' insistence that no one can properly be called an "atheist" if they admit that slightest smidgeon of doubt about the existence of deities. And I respond in roughly the same way in this case as in the other: You may use the words however you please, but if you restrict the word "utilitarian" to those who are completely singleminded about morality, you end up with hardly anyone coming under that description, and for consistency you should do the same for every other moral system out there, and you end up having a single big bucket of not-completely-singleminded people into which just about everyone goes. Isn't it better to classify people in a way that better matches the actual distribution of beliefs and attitudes, and say that someone is a utilitarian if they answer "what's morally better?" questions by some kind of consideration of overall utility?

Comment author: buybuydandavis 20 September 2014 08:52:15AM 4 points [-]

Lots to comment on here. That last paragraph certainly merits some comment.

Yes, most people are almost entirely inconsistent about the morality they profess to believe. At least in the "civilized world". I get the impression of more widespread fervent and sincere beliefs in the less civilized world.

Do Christians in the US really believe all their rather wacky professions of faith? Or even the most tame, basic professions of faith? Very very few, I think. There are Christians who really believe, and I tend to like them, despite the wackiness. Honest, consistent, earnest people appeal to me.

For the great mass, I increasingly think they just make talking noises appropriate to their tribe. It's not that they lie, it's more that correspondence to reality is so far down the list of motivations, or even evaluations, that it's not relevant to the noises that come from their mouths.

It's the great mass of people who seem to instinctively say whatever is socially advantageous in their tribe that give me the heebie jeebies. They are completely alien - which, given the relative numbers, means I am totally alien. A stranger in a strange land.

Isn't it better to classify people in a way that better matches the actual distribution of beliefs and attitudes

Yes.

and say that someone is a utilitarian if they answer "what's morally better?" questions by some kind of consideration of overall utility?

That's what the tribesman do, for the purposes of tribesman.

For the purposes of judging an ideology, which is what I had been doing, my judgment is based on what it would mean for people to actually adhere to the ideology, and not just make noises that they believe it.

For a number of purposes, knowing who has allegiance to what tribe matters. I don't find the utilitarian tribe here morally abominable, but I do think preaching the faith they do is harmful, and I wish they'd knock it off, as I wish people in general would stop preaching all the various obscenities that they preach.

Then again, what does a Martian know about what is harmful for Earthlings?

Other issues.

"Doing X will lead to a better world, but I care about my own happiness as well as about optimizing the world so I'm going to do Y instead"

Not utilitarianism. In utilitarianism, your happiness and welfare counts for one seven-billionth - that's not even a rounding error, it's undetectable.

Imagine, if you will, someone who would be admired by the Buddhist or Christian moral traditions, who is filled with love and compassion for everyone s/he sees and works hard to make their lives better even at great personal cost.

I've always found statements like this tremendously contradictory.

If he's really so filled with love for other people, why is helping them "a great personal cost", and not a great personal benefit? Me, I enjoy being useful, particularly to people I care about. Helping them is an opportunity, not a cost.

There is no inconsistency in saying "The gods have commanded that we do X, but I am going to do Y instead because it's easier".

What is there, for a supposed believer, is disobedience and sin. You seem tremendously cavalier about violating your professed moral code. Which, given your code, is probably a good thing, though my preference is for people to profess a decent faith that they actually follow, rather than an abomination that they don't.

Comment author: gjm 20 September 2014 11:09:39PM 2 points [-]

Not utilitarianism.

I'm repeating myself here, but: I think you are mixing up two things: utilitarianism versus other systems, and singleminded caring about nothing but morality versus not. It is the latter that generates attitudes and behaviour and outcomes that you find so horrible, not the former.

You are of course at liberty to say that the term "utilitarian" should only be applied to a person who not only holds that the way to answer moral questions is by something like comparison of net utility, but also acts consistently and singlemindedly to maximize net utility as they conceive it. The consequence, of course, will be that in your view there are no utilitarians and that anyone who identifies as a utilitarian is a hypocrite. Personally, I find that just as unhelpful a use of language as some theists' insistence that "atheist" can only mean someone who is absolutely 100% certain, without the tiniest room for doubt, that there is no god. It feels like a tactical definition whose main purpose is to put other people in the wrong even before any substantive discussion of their opinions and actions begins.

why is helping them "a great personal cost", and not a great personal benefit?

It's both. (Just as a literal purchase may be both at great cost, and of great benefit.) Which is one reason why, if this person -- or someone who feels and acts similarly on the basis of utilitarian rather than religious ethics -- acts in this way because they genuinely think it's the best thing to do, then I don't think it's appropriate to complain about how grotesquely subjugated they are.

given your code

What do you believe my code to be, and why?

Comment author: polymathwannabe 19 September 2014 12:53:40PM 1 point [-]

That was beautiful.

Comment author: pianoforte611 18 September 2014 09:38:00PM *  5 points [-]

Under utilitarianism, human farming for research purposes and organ harvesting would be justified if it benefited enough future persons.

Under utilitarianism the ideal life is one spent barely subsisting while giving away all material wealth to effective altruism/charity. (reason being - unless you are barely subsisting, there is someone who would benefit from your wealth more than you).

Also there is no way to compare interpersonal utility. There is a sense in which I might prefer A to B, but there is no sense in which I can prefer A more than you prefer B. We could vote, or bid money but neither of these results in a satisfactory ethical theory.

Comment author: MaximumLiberty 16 September 2014 02:38:58AM 37 points [-]

[Please read the OP before voting. Special voting rules apply.]

As a first approximation, people get what they deserve in life. Then add the random effects of luck.

Max L.

Comment author: DanielLC 16 September 2014 04:48:18AM *  25 points [-]

Why do Africans deserve so much less than Americans? Why did people in the past deserve so much less than current people? Why do people with poor parents deserve less than people with rich parents?

Comment author: MaximumLiberty 17 September 2014 05:30:20PM 5 points [-]

I count "the circumstances into which you are born" as luck. I'd guess it is the biggest component of luck, along with being struck by a disabling genetic condition or exposed to a pandemic. Thus the first observation has more salience among similar groups of people. For example, the group of people that I hang out with or work with is roughly similar enough for desert to have more salience than luck.

But perhaps that means that birth-luck should be the first approximation, then desert, then additional luck.

Max L.

Comment author: DanielLC 17 September 2014 06:18:15PM 5 points [-]

Can you give me an example of something that is neither desert nor luck?

Comment author: MaximumLiberty 18 September 2014 02:48:20AM 3 points [-]

Very nice question; better, in fact, than the statement to which you responded. Examples I have in mind:

- Personal-level injustice.
- Social injustice.
- How other people treat you.

But my primary point was about whether the things for which we are personally responsible are a bigger or lesser influence than luck. That is, if I am guessing with little knowledge, I am going to guess desert before luck for most groups with which I'd be interacting.

(Also, I am thinking that variation in luck, when the fact of variation is predictable and bad luck can be insured against or mitigated, is desert, not luck.)

Particular applications might make it more clear. If you don't have a job in America, and you appear physically able to work, my first guess is that you are the biggest contributor to your unemployment. If you are unhealthy in America, and weren't born with it, my first approximation will be that you contributed mightily to your poor health. And so on.

Max L.

Comment author: DanielLC 18 September 2014 03:01:42AM 1 point [-]

If you fail to buy car insurance, you deserve the expected cost?

I was thinking deserving something bad meant you did something bad, not that you did something stupid.

When you say "deserve," do you mean to imply that it is terminally better for people who deserve more to get more, and people who deserve less to get less?

Comment author: MaximumLiberty 18 September 2014 05:12:56AM 4 points [-]

If you fail to buy auto liability insurance and cause an accident (which is entirely predictable over long periods), then my first guess is that you deserve the impoverishment that comes from the situation.

If you fail to buy uninsured motorist insurance and are in an accident that you don't cause (which is entirely predictable) and the at-fault driver has no insurance and can't pay (which is also entirely predictable), then my first approximation is still pretty good. It is a little off because you could be beset with a string of bad luck.

I think of it the other way around. If I see someone happy and reasonably well off, I am first going to say that they had a hand in it. If I see someone continually unhappy or impoverished (setting aside birth luck), my first guess is also going to be that they are mainly responsible for their own outcomes. Turning it round, they are usually getting what they deserve.

Whether that is better or not depends on more than individual morality, so no, I'm not saying it is better.

Also, the examples seem to have focused on material outcomes, since they are easier to talk about, but I'm also thinking of non-material things. Relationships, self-esteem, etc.

Max L.

Comment author: polymathwannabe 16 September 2014 03:14:15AM 3 points [-]

What ethical theory are you using for your definition of "deserve"?

Comment author: MaximumLiberty 16 September 2014 04:05:05AM 3 points [-]

It is a fine question, since the word "deserve" is the link between an observation and a judgment about the person. I don't think I need an answer to it to make the observation that most people here don't hold that view. Which is a good thing, because I don't think I have a satisfactory answer beyond rough moral intuition.

Max L.

Comment author: blacktrance 15 September 2014 07:20:31PM *  44 points [-]

[Please read the OP before voting. Special voting rules apply.]

Human value is not complex, wireheading is the optimal state, and Fun Theory is mostly wrong.

Comment author: VAuroch 17 September 2014 07:44:16PM 1 point [-]

What would you have to see to convince you otherwise?

Comment author: lmm 15 September 2014 09:16:54PM 25 points [-]

[Please read the OP before voting. Special voting rules apply.]

The dangers of UFAI are minimal.

Comment author: [deleted] 17 September 2014 04:27:33PM 2 points [-]

“Dangers” being defined as probability times disutility, right?

Comment author: lmm 17 September 2014 11:27:23PM 4 points [-]

With the caveat that I'm treating unbounded negative utility as invalid, sure.

Comment author: DanielLC 15 September 2014 11:16:43PM 5 points [-]

Do you think that it is unlikely for a UFAI to be created, that if a UFAI is created it will not be dangerous, or both?

Comment author: lmm 16 September 2014 12:11:35PM 2 points [-]

I think humans will become sufficiently powerful before creating UFAI that it will not represent a threat to them.

Comment author: jsteinhardt 15 September 2014 05:36:36PM 46 points [-]

[Please read the OP before voting. Special voting rules apply.]

The replication initiative (the push to replicate the majority of scientific studies) is reasonably likely to do more harm than good. Most of the points raised by Jason Mitchell in The Emptiness of Failed Replications are correct.

Comment author: Osuniev 21 September 2014 11:05:54AM 6 points [-]

I read this trying to keep as open a mind as possible, and I think there is SOME value to SOME of what he said (i.e., no two experiments are totally the same, and replicators often are motivated to prove the first study wrong)... But one thing that really set me off is that he genuinely considers a study that doesn't confirm its hypothesis a failure, not even acknowledging that IN PRINCIPLE, this study has provided evidence against the hypothesis, which is valuable knowledge all the same.

Which is so jarring with what I consider the very basis of science that I find difficult to take Mitchell seriously.

Comment author: cousin_it 19 September 2014 10:17:25AM *  12 points [-]

Imagine a physicist arguing that replication has no place in physics, because it can damage the careers of physicists whose experiments failed to replicate! Yet that's precisely the argument that the article makes about social psychology.

Comment author: MaximumLiberty 16 September 2014 02:39:23AM 8 points [-]

[Please read the OP before voting. Special voting rules apply.]

The SF Bay Area is a lousy place to live.

Max L.

Comment author: John_Maxwell_IV 16 September 2014 03:50:53AM *  6 points [-]

This seems pretty similar to the irrationality game. That's not necessarily a bad thing, but personally I would try the following formula next time (perhaps this should be a regular thread?):

  • Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone's position flatly stated doesn't usually improve the accuracy of my beliefs.

  • Ask people to avoid upvoting views they already agree with. This is to prevent the thread from becoming an echo chamber of edgy "contrarian" views that are in fact pretty widespread already.

  • Ask people to vote up only those comments that cause them to update or change their mind on some topic. Increased belief accuracy is what we want; let's reward that.

  • Ask people to downvote spam and trolling only. Through this restriction on the use of downvotes, we lessen the anticipated social punishment for sharing an unpopular view that turns out to be incorrect (which is important counterfactually).

  • Encourage people to make contrarian factual statements rather than contrarian value statements. If we believe different things about the world, we have a better chance of having a productive discussion than if we value different things in the world.

Not sure if these rules should apply to top-level comments only or every comment in the thread. Another interesting question: should playing devil's advocate be allowed, i.e. presenting novel arguments for unpopular positions you don't actually agree with, and under what circumstances (are disclaimers required, etc.)?

You could think of my proposed rules as being about halfway between irrationality game and a normal LW open thread. Perhaps by doing binary search, we can figure out what the optimal degree to facilitate contrarianism is, and even make every Nth open thread a "contrarian open thread" that operates under those rules.

Another interesting way to do contrarian threads might be to pick particular views that seem popular on Less Wrong and try to think of the best arguments we can for why they might be incorrect. Kind of like a collective hypothetical apostasy. The advantage of this is that we generate potentially valuable contrarian positions no one is holding yet.

Comment author: Azathoth123 16 September 2014 04:40:54AM 4 points [-]

Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone's position flatly stated doesn't usually improve the accuracy of my beliefs.

This has the problem that beliefs with a large inferential distance won't get stated.

The rest of your points seem to boil down to the old irrationality game rule of downvote if you agree, upvote if you disagree.

Comment author: John_Maxwell_IV 16 September 2014 05:06:46AM *  2 points [-]

This has the problem that beliefs with a large inferential distance won't get stated.

Is it useful to have beliefs with a large inferential distance stated without supporting evidence? Given that the inferential distance is large, I'm not going to be able to figure it out on my own, am I? At least having a sketch of an argument would be useful. The more you fill in the argument, the more minds you change and the more upvotes you get.

The rest of your points seem to boil down to the old irrationality game rule of downvote if you agree, upvote if you disagree.

"Upvote if the comment caused you to change your mind" is not the same thing as "upvote if you disagree".

Another idea, which kinda seems to be getting adopted in this thread already: have a short note at the bottom of every comment right above the vote buttons reminding people of the voting behavior for the thread, to counteract instinctive voting.

Comment author: fubarobfusco 16 September 2014 07:20:39AM *  3 points [-]

[Please read the OP before voting. Special voting rules apply.]

Improving the typical human's emotional state — e.g. increasing compassion and reducing anxiety — is at least as significant to mitigating existential risks as improving the typical human's rationality.

The same is true for unusually intelligent and capable humans.

For that matter, unusually intelligent and capable humans who hate or fear most of humanity, or simply don't care about others, are unusually likely to break the world.

(Of course, there are cases where failures of rationality and failures of compassion coincide — the fundamental attribution error, for instance. It seems to me that attacking these problems from both System 1 and System 2 will be more effective than either approach alone.)

Comment author: shminux 15 September 2014 03:44:01PM 47 points [-]

There is no territory, it's maps all the way down.

Comment author: D_Malik 20 September 2014 04:33:23AM 3 points [-]

Can you unpack this? At the moment it seems nonsensical, in a "throwing together random words and hoping people read profound insights into it" way.

Comment author: shminux 20 September 2014 07:24:56AM 4 points [-]

Sure. Have you actually seen "the territory"? Of course not. There are plenty of unexplained observations out there. We assume that these come from some underlying "reality" which generates them. And it's a fair assumption. It works well in many cases. But it is still an assumption, a model. To quote Brienne Strohl on noticing:

You're unlikely to generate alternative hypotheses when the confirming observation and the favored hypothesis are one and the same in your experience of experience.

To most people the map/territory observation is such a "one and the same". I'm suggesting that it's only a hypothesis. It gives way when making a map changes the territory (hello, QM). It is also unnecessary, because the useful essence of the map/territory model is that "the future is partially predictable", in the sense that it is possible to take our past experiences, meditate on them for a while, figure out what to expect in the future, and see our expectations at least partially confirmed. There is no need to attach the notion of some objective reality causing this predictability, though admittedly it does feel good to pretend that we stand on solid ground, and not on some nebulous figment of imagination.

If you extract this essence, that future experiences are predictable from the past ones, and that we can shape our future experiences based on the knowledge of the past, it is enough to do science (which is, unsurprisingly, designing, testing and refining models). There is no indication that this model building will one day be exhausted. In fact, there is plenty of evidence to the contrary. It has happened many times throughout human history that we thought that our knowledge was nearly complete, there was nothing more to discover, except for one or two small things here and there. And then those small things became gateways to more surprising observations.

Yet we persist in thinking that there are ultimate laws of the universe, and that some day we might discover them all. I posit that there are no such laws, and we will continue digging deeper and deeper, without ever reaching the bottom... because there is no bottom.

Comment author: D_Malik 20 September 2014 10:33:16PM 1 point [-]

Thanks for explaining, upvoted. But I still don't see how this could possibly make sense.

There is no indication that this model building will one day be exhausted. In fact, there is plenty of evidence to the contrary. It has happened many times throughout human history that we thought that our knowledge was nearly complete, there was nothing more to discover, except for one or two small things here and there.

But our models have become more accurate over time. We've become, if you will, "less wrong". If there's no territory, what have we been converging to?

Have you actually seen "the territory"? Of course not.

...Yes? I see it all the time.

There are plenty of unexplained observations out there. We assume that these come from some underlying "reality" which generates them. And it's a fair assumption.

I seem to recall someone (EY?) defining "reality" as "that which generates our observations". Which seems like a fairly natural definition to me. If it's just maps generating our observations, I'd call the maps part of the territory. (Like a map with a picture of the map itself on the territory. Except, in your world, I guess, there's no territory to chart so the map is a map of itself.) This feels like arguing about definitions.

I see how this might sorta make sense if we postulate that the Simulator Gods are trying really hard to fuck with us. Though still, in that case, I think the simulating world can be called a territory.

Comment author: shminux 21 September 2014 05:29:37AM *  1 point [-]

But our models have become more accurate over time.

Indeed they have. We can predict the outcome of future experiments better and better.

We've become, if you will, "less wrong".

Yep.

If there's no territory, what have we been converging to?

Why do you think we have been converging to something? Every new model generates more questions than it answers. Sure, we know now why emitted light is quantized, but we have no idea how to deal, for example, with the predicted infinite vacuum energy.

...Yes? I see it all the time.

No, you really don't. What you think you see is a result of multiple layers of processing. What you get is observations, not the unfettered access to this territory thing.

I seem to recall someone (EY?) defining "reality" as "that which generates our observations". Which seems like a fairly natural definition to me.

It is not a definition, it's a hypothesis. At least in the way Eliezer uses it. I make no assumptions about the source of observations, if any.

If it's just maps generating our observations, I'd call the maps part of the territory.

First, I made no claims that maps generate anything. Maps are what we use to make sense of observations. Second, if you define the territory the usual way, as "reality", then of course maps are part of the territory; everything is.

in your world, I guess, there's no territory to chart so the map is a map of itself.)

Not quite. You construct progressively more accurate models to explain past and predict future inputs. In the process, you gain access to new and more elaborate inputs. This does not have to end.

This feels like arguing about definitions.

I realize that is how you feel. The difference is that the assumption of the territory implies that we have a chance to learn everything there is to learn some day, and construct the absolutely accurate map of the territory (possibly at the price of duplicating the territory and calling it a map). I am not convinced that it is a good assumption. Quite the opposite: our experience shows that it is a bad one; it has been falsified time and again. And bad models should be discarded, no matter how comforting they may be.

Comment author: TheAncientGeek 22 September 2014 01:24:20PM 1 point [-]

Why do you think we have been converging to something? 

What is the point of science, otherwise? Better prediction of observations? But you can't explain what an observation is.

If the territory theory is able to explain the purpose of science, and the no-territory theory is not , the territory theory is better.

What you think you see is a result of multiple layers of processing. What you get is observations, not the unfettered access to this territory thing.

..according to a map which has "inputs from the territory" marked on it.

I seem to recall someone (EY?) defining "reality" as "that which generates our observations". Which seems like a fairly natural definition to me.

It is not a definition, it's a hypothesis. At least in the way Eliezer uses it. I make no assumptions about the source of observations, if any.

Well, you need to. If the territory theory can explain the very existence of observations, and the no-territory theory cannot, the territory theory is better.

You construct progressively more accurate models to explain past and predict future inputs. In the process, you gain access to new and more elaborate inputs.

Inputs from where?

The difference is that [if] the assumption of the territory implies that we have a chance to learn everything there is to learn some day, construct the absolutely accurate map of the territory

No it doesn't. "The territory exists, but is not perfectly mappable" is a coherent assumption, particularly in view of the definition of the territory as the source of observations.

Comment author: hyporational 17 September 2014 04:45:05PM 8 points [-]

There are no maps, it's reality all the way up.

Comment author: shminux 17 September 2014 04:55:59PM 2 points [-]

You might be facetious, but I suspect that it is another way of saying the same thing.

Comment author: TheAncientGeek 20 September 2014 02:28:50PM *  2 points [-]

I suspect it isn't.

The words map and territory aren't relative terms like up and down.

Comment author: hyporational 17 September 2014 04:58:48PM *  1 point [-]

I meant to communicate the latter. We share this view.

Comment author: DanielLC 15 September 2014 11:20:54PM 9 points [-]

"The territory" is just whatever exists. It may well be an infinite series of entities, each more refined than the last. It's still a territory.

If there is no territory, what is a map?

Comment author: Salemicus 15 September 2014 01:08:05PM 64 points [-]

Dualism is a coherent theory of mind and the only tenable one in light of our current scientific knowledge.

Comment author: TheAncientGeek 17 September 2014 03:43:44PM 3 points [-]

Which dualism?

Comment author: DanielLC 16 September 2014 12:00:33AM 5 points [-]

Do you mean that, without strong evidence that we don't have, we should assume dualism, or that we have strong evidence for dualism?

If it's the second one, can you give me an example of such a piece of evidence?

Comment author: Salemicus 16 September 2014 11:22:07AM 1 point [-]

The second position.

An example of the evidence is the two-way causal connection between your inner subjective experiences and the external universe.

Comment author: polymathwannabe 16 September 2014 01:42:39PM 8 points [-]

How is that better explained by dualism?

Comment author: TheAncientGeek 17 September 2014 03:44:54PM 4 points [-]

Indeed. Two-way interaction is as well or better explained by physicalism.

Comment author: lmm 15 September 2014 09:21:36PM 14 points [-]

[Please read the OP before voting. Special voting rules apply.]

The notion of freedom is incoherent. People would be better off abandoning the pursuit of it.

Comment author: lmm 15 September 2014 09:23:51PM 12 points [-]

[Please read the OP before voting. Special voting rules apply.]

An AI which followed humanity's CEV would make most people on this site dramatically less happy.

Comment author: jsteinhardt 15 September 2014 05:37:34PM 23 points [-]

[Please read the OP before voting. Special voting rules apply.]

For many smart people, academia is one of the highest-value careers they could pursue.

Comment author: Jiro 15 September 2014 07:58:09PM 13 points [-]

Roko's Basilisk legitimately demonstrates a problem with LW. "Rationality" that leads people to believe such absurd ideas is messed up, and 1) the presence of a significant number of people psychologically affected by the basilisk and 2) the fact that Eliezer accepts that basilisk-like ideas can be dangerous are signs that there is something wrong with the rationality practiced here.

Comment author: cousin_it 19 September 2014 10:37:23AM *  1 point [-]

If you want to point out LW beliefs that sound crazy to most people, I guess you don't need to go as far as Roko's basilisk. FAI or MWI would suffice.

Comment author: Emile 16 September 2014 09:06:04PM 5 points [-]

the presence of a significant number of people psychologically affected by the basilisk

Does "rolling my eyes and reading something else" count as "psychologically affected"?

Comment author: polymathwannabe 17 September 2014 01:47:01AM 1 point [-]

May I suggest reading Singularity Sky by Charles Stross, which has precisely such a menacing future AI as an antagonist? (Spoiler: no basilisk memes involved in the plot; they're obviously not obvious to everyone who thinks of this scenario.)

Comment author: Sarunas 16 September 2014 11:54:10AM *  7 points [-]

"Rationality" leads people to believe such absurd ideas

Are you sure you have pinpointed the right culprit? Why exactly "rationality"? "Zooming in" and "zooming out" would lead to potentially different conclusions. E.g. G.K. Chesterton would probably blame atheism[1]. Zooming out even more, for example, someone immersed in Eastern thought might even blame Western thought in general. Despite receiving a vastly disproportionate share of media attention, it was such a small part of LessWrong history and thought (by the way, is anything that any LWer ever came up with a part of LW thought?) that it seems wrong to put the blame on LessWrong or rationality in general.

Furthermore, which would you say is better: the ability to formulate an absurd idea and then find its flaws (or, for e.g. mathematical ideas, exactly under what strange conditions they hold), or the inability to formulate absurd ideas at all? The ability to come up with various absurd ideas is an unavoidable side effect of having an imagination. What is important is not to start believing an idea immediately, because in the history of any really new and outlandish idea, at the very beginning there is an important asymmetry (which arises from the fact that coming up with any complicated idea takes time): the idea itself has already been invented, but the good counterarguments do not yet exist (this is similar to the situation where a new species is introduced to an island where it does not have natural predators, which are introduced only later). This also applies to the moment when a new outlandish idea is introduced to your mind and you haven't yet heard any counterarguments; one must nevertheless exercise caution. Especially if that new idea is elegant and thought-provoking whereas all counterarguments are comparatively ugly and complicated and thus might feel unsatisfactory even after you have heard them.

the presence of a significant number of people psychologically affected

Was there really a significant number of people or is this just, well, an urban legend? The fact that some people are affected is not particularly surprising - it seems to be consistent with the existence of e.g. OCD. Again, one must remember that not everyone thinks the same way and the common thing between people affected might have been something other than acquaintance with LW and rationality which you seem to imply (correct me if my impression was wrong).

the fact that Eliezer accepts that basilisk-like ideas can be dangerous

I think it is better to give Eliezer a chance to explain why he did what he did. My understanding is that whenever someone introduces a person to a new variant of this concept without explaining the proper counterarguments, it takes time for that person to acquaint themselves with them. In very specific instances that might lead to unnecessary worrying and potentially even some actions (most people would regard this idea as too outlandish and too weird whether or not it was correct, and compartmentalize everything even if it was). A clever devil's advocate could potentially come up with more and more elaborate versions of this idea which take more and more time to take down. As you can see, it is not necessary for any form of this idea to be correct for this gap to expand.

Personally I understand (and share) the appeal of various interesting speculative ideas, and the frustration that someone thinks this is supposedly bad for some people, which goes against my instincts and the highly valuable norm of a free marketplace of ideas.

At this point in time, however, the basilisk seems to be brought up more often in order to dismiss all of LW, rather than only this specific idea, so it is no wonder that many people get defensive even if they do not believe it.

All of this does not touch the question of whether the whole situation was handled the way it should have been handled.

[1] Although the source says that the famous quote is misattributed. Huh. I remember reading a similar idea in one of the "Father Brown" short stories. I'll have to check it.

(excuse my English, feel free to correct mistakes)

Comment author: lmm 16 September 2014 10:26:59PM 1 point [-]

Are you sure you have pinpointed the right culprit? Why exactly "rationality"? "Zooming in" and "zooming out" would lead to potentially different conclusions. E.g. G.K.Chesterton would probably blame atheism[1]. Zooming out even more, for example, someone immersed in Eastern thought might even blame Western thought in general.

It's whatever makes LW different from the wider population, even the wider nerdy-western-liberal-college-educated cluster. The general population of atheists does not have problems with basilisks, and laughs them off when you describe them to them.

Despite receiving a vastly disproportionate share of media attention, it was such a small part of LessWrong history and thought (by the way, is anything that any LWer ever came up with a part of LW thought?) that it seems wrong to put the blame on LessWrong or rationality in general.

It also received a disproportionate amount of ex cathedra moderator action. Which things are so important to EY that he feels it necessary to intervene directly and in a massively controversial way? By their actions we can conclude that the Basilisk is much more important to the LW leadership than e.g. the illegitimate downvoting that drove danerys away.

Furthermore, which would you say is better: the ability to formulate an absurd idea and then find its flaws (or, for e.g. mathematical ideas, to find exactly under what strange conditions they hold), or the inability to formulate absurd ideas at all? The ability to come up with various absurd ideas is an unavoidable side effect of having an imagination. What is important is not to start believing an idea immediately, because in the history of any really new and outlandish idea there is, at the very beginning, an important asymmetry (which arises because coming up with any complicated idea takes time): the idea itself has already been invented, but the good counterarguments do not yet exist (this is similar to the situation where a new species is introduced to an island where it has no natural predators, which are introduced only later). The same applies to the moment when a new outlandish idea is introduced to your mind: if you haven't heard any counterarguments by then, you must nevertheless exercise caution. Especially if that new idea is elegant and thought-provoking, whereas all the counterarguments are comparatively ugly and complicated and thus might feel unsatisfactory even after you have heard them.

I don't think this addresses the original argument. If these ideas are dangerous to us then we are doing something wrong. If you're saying that danger is an unavoidable cost of being able to generate interesting ideas, then the large number of other groups who seem to come up with interesting ideas without ideas that present a danger to them seems like a counterexample.

Was there really a significant number of people or is this just, well, an urban legend? The fact that some people are affected is not particularly surprising - it seems to be consistent with the existence of e.g. OCD. Again, one must remember that not everyone thinks the same way and the common thing between people affected might have been something other than acquaintance with LW and rationality which you seem to imply (correct me if my impression was wrong).

I don't know, but the LW leadership's statements seem to be grounded in the claim that there were.

Comment author: ChristianKl 17 September 2014 01:38:56PM 3 points [-]

Which things are so important to EY that he feels it necessary to intervene directly and in a massively controversial way? By their actions we can conclude that the Basilisk is much more important to the LW leadership than e.g. the illegitimate downvoting that drove danerys away.

At the time the Basilisk episode happened, Eliezer was a lot more active in general than when the illegitimate downvoting happened.

If you're saying that danger is an unavoidable cost of being able to generate interesting ideas, then the large number of other groups who seem to come up with interesting ideas without ideas that present a danger to them seems like a counterexample.

If you look at the self-professed skeptic community, there are episodes such as Elevatorgate.

If you go a bit further back and look at what Stalin did, I would call the ideas on which he acted dangerous.

The general population of atheists does not have problems with basilisks, and laughs them off when you describe them to them.

It's pretty easy to talk about a lot of topics in a way that makes the people you are talking to laugh and not take the idea seriously. A bunch of that atheist population also treats their New Atheism like a religion and closes itself off from alternative ideas that sound weird. For practical purposes they are religious and do have a fence against taking new ideas seriously.

Comment author: Sarunas 17 September 2014 01:05:45PM *  2 points [-]

It's whatever makes LW different from the wider population, even the wider nerdy-western-liberal-college-educated cluster. The general population of atheists does not have problems with basilisks, and laughs them off when you describe them to them.

What ideas does the general population of atheists have in common besides the lack of belief in God? And what interesting ideas can you derive from that? F. Dostoevsky (who wasn't even an atheist) seems to have thought that from this one could derive that everything is morally permitted. Maybe some atheistic ideas seemed new, interesting and outlandish in the past when there were few atheists (e.g. the separation of church and state), but nowadays they are part of common sense.

No, the claim of this hypothetical Chesterton would not be that atheism creates new weird ideas. It would be that by rejecting God you lose the defense against various weird ideas ("It’s the first effect of not believing in God that you lose your common sense." - G.K.Chesterton). It is not atheism in general, it is specific atheist groups. And in the history of the world, there were a lot of atheists who believed in strange things. E.g. some atheists believe in reincarnation or spiritism. Some believe that the Earth is a zoo kept by aliens. In previous times, some revolutionaries (led not by their atheism, but by other ideologies) believed that just because the social order is not god-given it could be easily changed into basically anything. The hypothetical Chesterton would probably claim that had all these people closely followed the church's teachings they would not have believed in these follies, since the common sense provided by traditional Christianity would have prevented them. And he would probably be right. The hypothetical Chesterton would probably think that the basilisk is yet another thing in the long list of things some atheists stupidly believe.

Yes, on LessWrong the weirdness heuristic is used less than in the more general atheist/skeptic community (in my previous post I have already mentioned why I think it is often useful), and it is considered bad to dismiss an idea if the only counterargument is that it is weird. The difference in acceptance of the weirdness heuristic probably comes from different mentalities: trying to become more rational vs. the more conservative strategy of trying to avoid being wrong (e.g. accepting epistemic learned helplessness when faced with weird and complicated arguments, and defaulting to the mainstream position). This difference may reduce a person's defenses against various new and strange ideas. But even then, one of the most upvoted LW posts of all time already talks about this danger.

Nevertheless, while you claim that the general population of atheists "laughs them off when you describe them to them", it is my impression that the same is true here, on LessWrong, as the absolute majority of LWers do not consider it a serious thing (sadly, I do not recall any survey asking about that). It is just a small proportion of LWers that believe in this idea. Thus it cannot be "whatever makes LW different from the wider population"; it must be whatever makes that small group different from the wider LW population, because even after rejecting the following of tradition (which would be the hypothetical Chesterton's explanation) and the diminished usage of the weirdness heuristic (which would be the average skeptic's explanation), the majority of LWers still do not believe it. And the reasons why some LWers become defensive when someone brings it up are probably very similar to those described in the blog post "Weak Men Are Superweapons" by Yvain.

One could argue that LessWrong thought made it possible to formulate such an idea, which I have already addressed in my previous post. Once you have a wide vocabulary of ideas you can come up with many things. What is important is being able to find out whether the thing you came up with is true.

If these ideas are dangerous to us then we are doing something wrong. If you're saying that danger is an unavoidable cost of being able to generate interesting ideas, then the large number of other groups who seem to come up with interesting ideas without ideas that present a danger to them seems like a counterexample.

I do not think that thinking about the basilisk is dangerous to us. Maybe it is to some people with OCD or something similar; I do not know. I talked about absurdity, not danger. It seems to me that instead of restricting our imagination (so as to avoid coming up with absurd things), we should let it run free and try to improve our ability to recognize which of these imagined ideas are actually true.

It also received a disproportionate amount of ex cathedra moderator action. Which things are so important to EY that he feels it necessary to intervene directly and in a massively controversial way? By their actions we can conclude that the Basilisk is much more important to the LW leadership than e.g. the illegitimate downvoting that drove danerys away.

I do not know what exactly Eliezer was thinking when he decided. I am not him. In fact, I wasn't even there when it happened. I have no way of knowing whether he actually had a clear reason at the time, or simply freaked out and made an impulsive decision, or actually believed it at that moment (at least to the extent of being unable to immediately rule it out completely, which might have led him to censor that post in order to postpone the argument). However, I have an idea which I find at least somewhat plausible. This is a guess.

Suppose even a very small number of people (let's say 2-4 people) were affected (again, let's remember that they would be very atypical; I doubt that having, e.g., OCD would be enough) in a way that instead of only worrying about this idea, they would actually take action and e.g. sell a large part of their possessions and donate the proceeds to (what was then known as) SIAI, leave their jobs to work on FAI, or start donating all their income (neglecting their families) out of fear of this hypothetical torture. Now that would be a PR disaster many orders of magnitude larger than anything basilisk-related we have now. When people use the word "cult", they seem to use it figuratively, as a hyperbole (e.g.); in that case, people and organizations who monitor real cults would actually label SIAI a literal one (whether SIAI likes it or not). That would be a disaster both for SIAI and for the whole friendly AI project, possibly burying it forever. Considering that Eliezer worried about such things even before this whole debacle, this possibility must have crossed his mind and looked very scary, leading to the impulsive decision and what we can now see as improper handling of the situation.

Then why not claim that you do this for PR reasons instead of caring about the psychological harm to those people? Firstly, one may actually care about those people, especially if one knows one of them personally (which seems to be the case from the screenshot provided by XiXiDu and linked by Jiro). And even in the more general case, talking about caring usually looks better than talking about PR. Secondly, "stop it, it is for your own safety" probably stops more people from looking than "stop it, it might give us bad PR" (as we can see from the recent media attention, the second reason stops basically nobody). Thirdly, even if Eliezer personally met all the affected people (once again, remember that they would be very atypical) and explicitly asked them not to do anything, they would understand that he has SIAI's PR at stake and thus an incentive to lie to them about what they should do, and they wouldn't want to listen to him (as even a remote possibility of torture might seem scary) and would, e.g., donate via another person. Or find their own ways of trying to create FAI. Or whatever they can come up with. Or find their own ways to fight the possible creation of AI. Or maybe even something like this. I don't know; this idea did not cause me nightmares, therefore I do not claim to understand the mindset of those people. Here I must note that in no way am I claiming that because a person has OCD they would actually do that.

Nowadays, however, what most people seem to want to talk about is not the idea of the basilisk itself, but rather the drama surrounding it. As it is sometimes used to dismiss all of LW (again, for reasons similar to this), many people get very defensive and pattern-match those who bring this topic up with the intent of potentially discussing it (and related events) to trolls who do it just for the sake of trolling. Therefore this situation might feel worse to some people, especially those who are not targeted by the mass downvoting or who have so much karma they can't realistically be significantly affected by it.

I feel like I am putting a lot of effort into steelmanning everything. I guess I, too, got very defensive, potentially for the reasons mentioned in that SlateStarCodex post. Well, maybe everything was just a combination of many stupid decisions, impulsive behaviour and after-the-fact rationalizations, which, after all, might be the simplest explanation. I don't know. As I wasn't even there, there must be people who are better informed about the events and better suited to argue.

Comment author: lmm 17 September 2014 11:46:08PM 1 point [-]

I think many users do not think it's a serious danger, but it's still banned here. It is IMO reasonable for outsiders to judge the community as a whole by our declared policies.

Coming up with absurd ideas is not a problem. Plenty of absurd things are posted on LW all the time. The problem is that the community took it as a genuine danger.

If EY made a bad decision at the time that he now disagrees with, surely he would have reversed it or at least dropped the ban for future posts. A huge part of what this site is about is being able to recognize when you've made a mistake and respond appropriately. If EY is incapable of doing that, then that says very bad things about everything we do here.

What's cultish as hell to me is having leaders who would wilfully deceive us. If there are some nonpublic rules under which the basilisk is being censored, what else might be being censored?

Comment author: Sarunas 18 September 2014 02:15:30PM *  1 point [-]

Well, nobody in the LW community is without flaws. People often fail (or sometimes do not even try) to live up to the high standards of being a good rationalist. The problem is that in some internet forums "judging the community" somehow becomes something like "this is what LW makes you believe, and even if they deny it, they do it only because not doing it would give them a bad image" or "they are a cult that wants you to believe in their robot god" - misrepresentations of LW (or even of the drama surrounding the basilisk stuff) so gross that even after considering Hanlon's razor one is left wondering whether that level of misinterpretation is possible without at least some amount of intentional hostility. I would guess that nowadays a large part of the annoyance at somebody even bringing this topic up is a reaction to this perceived hostility.

If EY is incapable of doing that then that says very bad things about everything we do here.

No, it does not say very bad things about everything we do here. Whenever EY makes a mistake and fails to recognize and admit it, it is his personal failure to live up to the standards he wrote so much about. You may object that not enough people called him out on that on LW itself, but it is my impression that many of those who do so, e.g. on reddit, seem to be LW users (as there are currently few related discussions here on LW, there is no context to do that here; besides, EY rarely comments here anymore). In addition, in this thread there seem to be several LW users who agree with you, so you are definitely not a lone voice; among LWers there seem to be many different opinions. Besides, in that reddit thread he seems to basically admit that, in fact, he did make a lot of mistakes in handling this situation.

It has just dawned on me that while we are talking about censorship, at the same time we are having this discussion. And frankly, I do not remember when a comment was last deleted solely for bringing this topic up. Maybe the ban has been silently lifted, or at least is no longer enforced (even though there was no public announcement about this), leaving everything to the local social norm? However, I would guess that due to the said social norm, if one posted about this topic, one would receive a lot of downvotes (due to being pattern-matched, which sustains the social norm of not touching this topic), unless one made it really clear that one is bringing the topic up out of genuine curiosity (and with a genuinely interesting question) and not for the sake of trolling or "let's see what will happen", or trying to make fun of people and their identity. I feel I should add that I wouldn't advise you to test this hypothesis, because that would probably be considered bringing the topic up for the sake of bringing it up. I'm not claiming the situation is perfect, and I would agree that in the ideal case the free marketplace of ideas should prevail and this discrepancy between the current situation and the ideal case should be solved somehow.

Comment author: Jiro 17 September 2014 03:03:08PM *  3 points [-]

Then why not claim that you do this for PR reasons instead of caring about psychological harm of those people? Firstly, one may actually care about those people, especially if one knows one of them personally (which seems to be the case from the screenshot provided by XiXiDu and linked by Jiro).

XiXiDu's screenshot is damning because it indicates that Eliezer banned the Basilisk because he thought a variation on it might work, not because of either PR reasons or psychological harm.

Unless you think he was lying about that for the same reason he might want to lie about psychological harm.

Comment author: Sarunas 17 September 2014 08:21:23PM *  1 point [-]

Well, in that post by XiXiDu, there is a quote by Mitchell Porter (approved by Eliezer) which, combined with the reddit post I linked earlier, suggests he was not able to provide a proof that no variation of the basilisk would ever work, given that there is more than one possible decision theory, including some exotic and obscure ones that have not yet been invented (but who knows what will be invented in the future). Eliezer seems to think that human minds are unable to actually follow such a decision theory rigorously enough for such a concept to work. But human ability is such a vague concept that it is not clear how one could give a formal proof.

However, an inability to provide a formal proof seems to me an unlikely reason to freak out. What (I guess) happened was that this inability to provide a proof, combined with that unnamed SIAI person's nightmares (I would guess that Eliezer knows all SIAI people personally) and the fear of the aforementioned potential PR disaster, might have resulted in a feeling of losing control of the situation and made him panic, resulting in that nervous and angry post, emphasizing the danger and the need to protect some people (and leaving out the cult PR reasons). This is my personal guess; I do not guarantee that it is correct.

Is an inability to actually deny a thing equivalent to a belief that the thing has a positive probability? Well, logically they are somewhat similar, but these two ways of expressing similar ideas certainly have different connotations and leave very different impressions in the listener's mind as to the person's actual degree of belief.

(I must add that I personally do not like speculating about another person's motivations for doing what he did when I actually have no way of knowing them)

Comment author: Jiro 16 September 2014 02:34:15PM *  3 points [-]

Are you sure you have pinpointed the right culprit? Why exactly "rationality"? "Zooming in" and "zooming out" would lead to potentially different conclusions.

The quotes indicate that I'm not blaming rationality, I'm blaming something that's called rationality. You're replying as if I'm blaming real rationality, which I'm not.

Was there really a significant number of people or is this just, well, an urban legend?

Censoring substantial references to the basilisk was partly done in the name of protecting the people affected. This requires that there be a significant number of people, not just that there be the normal number of people who can be affected by any unusual idea.

I think it is better to give Eliezer a chance to explain himself why he did what he did.

His explanations have varied. The explanation you linked to is fairly innocuous; it implies that he is only banning discussion because people get harmed when thinking about it. Someone else linked a screengrab of Eliezer's original comment which implies that he banned it because it can make it easier for superintelligences to acausally blackmail us, which is very different from the one you linked.

Comment author: Sarunas 17 September 2014 01:06:22PM *  1 point [-]

Censoring substantial references to the basilisk was partly done in the name of protecting the people affected. This requires that there be a significant number of people, not just that there be the normal number of people who can be affected by any unusual idea.

Curiously, that is not necessary. For example, it would suffice that the people who do the censoring overestimate the number of people that might need protection. Or consider the PR explanation that I gave in another comment, which similarly does not require a large number of people affected. Some other parts of your comment are also addressed there.

Comment author: Jiro 17 September 2014 02:24:40PM *  1 point [-]

It is certainly possible that few people were affected by the Basilisk, and that the people who do the censoring either overestimate the number or are just using it as an excuse. But this reflects badly on LW all by itself, and also amounts to "you cannot trust the people who do the censoring", a position which is at least as unpopular as my initial one.

Comment author: Sarunas 17 September 2014 08:24:55PM *  2 points [-]

I would guess that the dislike of censorship is not an unpopular position, whatever its motivations.

Comment author: fubarobfusco 16 September 2014 03:50:40AM 5 points [-]

My contrarian idea: Roko's basilisk is no big deal, but intolerance of making, admitting, or accepting mistakes is cultish as hell.

Comment author: D_Malik 15 September 2014 06:28:32PM 16 points [-]

Having political beliefs is silly. Movements like neoreaction or libertarianism or whatever will succeed or fail mostly independently of whether their claims are true. Lies aren't threatened by the truth per se, they're threatened by more virulent lies and more virulent truths. Various political beliefs, while fascinating and perhaps true, are unimportant and worthless.

Arguing for or against various political beliefs functions mostly (1) to signal intelligence or allegiance or whatever, and (2) as mental masturbation, like playing Scrabble. "I want to improve politics" is just a thin veil that system 2 throws over system 1's urges to achieve (1) and (2).

If you actually think that improving politics is a productive thing to do, your best bet is probably something like "ensure more salt gets iodized so people will be smarter", or "build an FAI to govern us". But those options don't sound nearly as fun as writing political screeds.

(While "politics is the mind-killer" is LW canon, "believing political things is stupid" seems less widely-held.)

Comment author: [deleted] 17 September 2014 04:43:56PM 2 points [-]

Twelve people disagree with this? I'm surprised. I was going to downvote for ‘not in the spirit of the game, obviously not a contrarian view’, but I guess I was a victim of the typical mind fallacy.

Comment author: VAuroch 17 September 2014 04:09:06AM 3 points [-]

While I mostly agree, trying to devise political systems that would encourage a smarter populace (ex. SSC's Graduation Speech with the guaranteed universal income and abolishing public schools) seems like a potentially worthwhile enterprise.

Comment author: CellBioGuy 15 September 2014 07:44:22PM 12 points [-]

[opening post special voting rules yadda yadda]

Biological hominids descended from modern humans will be the keystone species of biomes loosely descended from farms, pastures, and cities optimized for symbiosis and matter/energy flow between organisms, covering large fractions of the Earth's land, for tens of millions of years. In special cases there may be sub-biomes in which non-biological energy is converted into biomass, and it is possible that human-keystone ocean-based biomes might appear as well. Living things will continue to be the driving force of non-geological activity on Earth, with hominid-driven symbiosis (of which agriculture is an inefficient first draft) producing interesting new patterns, materials, and ecosystems.

Comment author: lmm 15 September 2014 09:17:22PM *  8 points [-]

[Please read the OP before voting. Special voting rules apply.]

Politically, the traditional left is broadly correct.

Comment author: jsteinhardt 15 September 2014 05:33:56PM 15 points [-]

[Please read the OP before voting. Special voting rules apply.]

Frequentist statistics are at least as appropriate as, if not more appropriate than, Bayesian statistics for approaching most problems.

Comment author: [deleted] 15 September 2014 04:15:09PM *  18 points [-]

AI boxing will work.

EDIT: Used to be "AI boxing can work." My intent was to contradict the common LW positions that AI boxing is either (1) a logical impossibility, or (2) more difficult or more likely to fail than FAI.