IlyaShpitser comments on Open Thread, May 11 - May 17, 2015 - Less Wrong Discussion

3 Post author: Gondolinian 11 May 2015 12:16AM

Comments (247)

You are viewing a single comment's thread.

Comment author: IlyaShpitser 11 May 2015 08:18:59AM *  3 points

You don't need to posit anything crazy; just think about selection bias -- are the sorts of people who tend to become rationalists randomly sampled from the population? If not, why wouldn't such people have blind spots on that basis alone?

Comment author: [deleted] 11 May 2015 08:40:52AM 1 point

Yes, but if I understand the idea correctly, it is to learn to think in a self-correcting, self-improving way. For example, maybe Kanazawa is right that intelligence suppresses instincts / common sense, but a consistent application of rationality would sooner or later lead to discovering this and forming strategies to correct it.

For this reason, it is more about the rules (of self-correction, self-improvement, self-updating sets of beliefs) than about the people. What kinds of truths would be potentially invisible to a self-correcting observationalist ruleset, even if it were practiced by all kinds of people?

Comment author: IlyaShpitser 11 May 2015 08:49:46AM *  3 points

Just pick any of a large set of things the LW-sphere gets consistently wrong. You can't separate the "ism" from the people (the "ists"), in my opinion. The proof of the effectiveness of the "ism" lies in the "ists".

Comment author: NancyLebovitz 11 May 2015 11:58:23AM 2 points

Which things are you thinking of?

Comment author: IlyaShpitser 11 May 2015 02:38:57PM *  2 points

A lot of opinions much of LW inherited uncritically from EY, for example. That isn't to say EY doesn't have many correct opinions -- he certainly does -- but a lot of his opinions are also idiosyncratic, weird, and technically incorrect.

As is true for most of us. The recipe here is to be widely read (LW has a poor-scholarship problem too). Not moving away from EY's more idiosyncratic opinions is sort of a bad sign for the "ism."

Comment author: NancyLebovitz 11 May 2015 02:44:49PM 1 point

Could you mention some of the specific beliefs you think are wrong?

Comment author: IlyaShpitser 11 May 2015 02:58:42PM *  8 points

Having strong opinions on QM interpretations is "not even wrong."

LW's attitude on B is, at best, "arguable."

Donating to MIRI as an effective use of money is, at best, "arguable."

LW consequentialism is, at best, "arguable."

Shitting on philosophy.

Rationalism as part of identity (aspiring rationalist) is kind of dangerous.

etc.


What I personally find valuable is "adapting the rationalist kung fu stance" for certain purposes.

Comment author: NancyLebovitz 11 May 2015 03:30:51PM 2 points

Thank you.

LW's attitude on B is, at best, "arguable."

B?

Comment author: IlyaShpitser 11 May 2015 03:32:27PM 0 points

Bayesian.

Comment author: Douglas_Knight 11 May 2015 05:30:24PM 1 point

I read that "B" and assumed that you had a reason for not spelling it out, so I concluded that you meant Basilisk.

Comment author: OrphanWilde 11 May 2015 03:17:40PM *  2 points

Rationalism as part of identity (aspiring rationalist) is kind of dangerous.

[Edited formatting] Strongly agree. http://lesswrong.com/lw/huk/emotional_basilisks/ is an experiment I ran which demonstrates the issue. Eliezer was unable to -consider- the hypothetical; it "had" to be fought.

The reason is that the hypothetical implies a contradiction in rationality as Eliezer defines it: if rationalism requires atheism, and atheism doesn't "win" as well as religion, then the "rationality is winning" definition Eliezer uses breaks; suddenly rationality, via winning, can require irrational behavior. Less Wrong has a -massive- blind spot where rationality is concerned; for a website which spends a significant amount of time discussing how to update "correctness" algorithms, actually posing challenges to those "correctness" algorithms is one of the quickest ways to shut somebody's brain down and put them in a reactionary mode.

Comment author: ChristianKl 11 May 2015 04:25:40PM 0 points

if rationalism requires atheism

I don't think that's argued. It's also worth noting that the majority of MIRI's funding over its history comes from a theist.

Comment author: RichardKennaway 12 May 2015 12:05:43PM 1 point

Eliezer was unable to -consider- the hypothetical; it "had" to be fought.

It seems to me that he did consider your hypothetical, and argued that it should be fought. I agree: your hypothetical is just another in the tedious series of hypotheticals on LessWrong of the form, "Suppose P were true? Then P would be true!"

BTW, you never answered his answer. Should I conclude that you are unable to consider his answer?

Eliezer also has Harry Potter in MoR withholding knowledge of the True Patronus from Dumbledore, because he realises that Dumbledore would not be able to cast it, and would no longer be able to cast the ordinary Patronus.

Now, he has a war against the Dark Lord to fight, and cannot take the time and risk of trying to persuade Dumbledore to an inner conviction that death is a great evil in order to enable him to cast the True Patronus. It might be worth pursuing after winning that war, if they both survive.

All this has a parallel with your hypothetical.

Comment author: Miguelatron 15 May 2015 01:59:56PM 0 points

While the MoR example is a good one, don't bother defending Eliezer's response to the linked post. "Something bad is now arbitrarily good, what do you do?" is a poor strawman to counter "Two good things are opposed to each other in a trade space, how do you optimize?"

Don't get me wrong, I like most of what Eliezer has put out here on this site, but it seems that he gets wound up pretty easily, and off-the-cuff comments from him aren't always as well reasoned as his main posts. Letting someone slide based on the halo effect, on a blog about rationality, is just wrong. Calling people out when they do something wrong -- and being civil about it -- is constructive; let's not forget it's in the name of the site.

Comment author: Jiro 12 May 2015 03:45:20PM 0 points

your hypothetical is just another in the tedious series of hypotheticals on LessWrong of the form, "Suppose P were true? Then P would be true!"

The hypothetical (P) is used to get people to draw conclusions from it. These conclusions must, by definition, be logically implied by the original hypothetical, or nobody would be able to draw them, so you could describe them as being equivalent to P. Thus, all hypotheticals can be described, using your reasoning, as "Suppose P were true? Then P would be true!"

Furthermore, that also means "given Euclid's premises, the sum of the angles of a triangle is 180 degrees" is a type of "Suppose P were true? Then P would be true!"--it begins with a P (Euclid's premises) and concludes something that is logically equivalent to P.

I suggest that an argument which begins with P and ends with something logically equivalent to P cannot be usefully described as "Suppose P were true? Then P would be true!" This makes OW's hypothetical legitimate.

Comment author: OrphanWilde 12 May 2015 03:37:44PM -1 points

If it isn't worth trying to persuade (whoever), he shouldn't have commented in the first place. There are -lots- of posts that go through Less Wrong. -That- one bothered him. Bothered him on a fundamental level.

As it was intended to.

I'll note that it bothered you too. It was intended to.

And the parallel is... apt, although probably not in the way that you think. I'm not Dumbledore, in this parallel.

As for his question? It's not meant for me. I wouldn't agonize over the choice, and no matter what decision I made, I wouldn't feel bad about it afterwards. I have zero issue considering the hypothetical, and find it an inelegant and blunt way of pitting two moral absolutes against one another in an attempt to force somebody else to admit to an ethical hierarchy. The fact that Eliezer himself described the baby-eater hypothetical as one which must be fought is the intellectual equivalent of mining the road and running away; he, as far as I know, -invented- that hypothetical, and he's the one who set it up as the ultimate butcher block for non-utilitarian ethical systems.

"Some hypotheticals must be fought", in this context, just means "That hypothetical is dangerous". It isn't, really. It just requires giving up a single falsehood:

That knowing the truth always makes you better off. That that which can be destroyed by the truth, should be.

He already implicitly accepts that lesson; his endless fiction of secret societies keeping dangerous knowledge from the rest of society demonstrates this. The truth doesn't always make things better. The truth is a very amoral creature; it doesn't care whether things are made better or worse, it just is. To call -that- a dangerous idea is just stubbornness.

Not to say there -isn't- danger in that post, but it is not, in fact, from the hypothetical.

Comment author: TheAncientGeek 12 May 2015 05:00:22PM *  0 points

I've noticed that problem, but I think it is a bit dramatic to call it rationality-breaking. I think it's more of a problem of calling two things, the winning thing and the truth-seeking thing, by one name.

Comment author: OrphanWilde 12 May 2015 05:27:39PM 0 points

Do you really think there's a strong firewall in the minds of most of this community between the two concepts?

More, do you think the word "rationality", given that it happens to refer to two concepts which are in occasional opposition, makes for a mentally healthy part of one's identity?

Eliezer's sequences certainly don't treat the two ideas as distinct. Indeed, if they did, we'd be calling "the winning thing" by its proper name, pragmatism.

Comment author: Luke_A_Somers 12 May 2015 12:19:38AM 0 points

Well...

QM: Having strong positive beliefs on the subject would be not-even-wrong. Ruling out some interpretations is much less so, and that's what he did. Note, I came to the same conclusion long before.

MIRI: It's not uncritically accepted on LW more than you'd expect given who runs the joint.

Identity: If you're not letting it trap you by thinking it makes you right, if you're not letting it trap you by thinking it makes others wrong, then what dangers are you thinking of? People will get identities. This particular one seems well-suited to mitigating the dangers of identities.

Others: more clarification required

Comment author: ChristianKl 11 May 2015 04:21:16PM 0 points

Rationalism as part of identity (aspiring rationalist) is kind of dangerous.

I think there's plenty of criticism voiced about that concept on LW, and there are articles advocating keeping one's identity small.

Comment author: IlyaShpitser 11 May 2015 04:37:07PM 2 points

And yet...

Comment author: ChristianKl 11 May 2015 04:56:28PM 0 points

From time to time people use the label "aspiring rationalist", but I don't think a majority of people on LW do.