IlyaShpitser comments on Open Thread, May 11 - May 17, 2015 - Less Wrong Discussion
In what conceivable universes (conceivable does not imply logically possible) would Rationalism fail, in the sense of unearthing only some truths rather than all of them? That is, would some realms of truth be hidden from Rationalists? To simplify, I mean largely the empiricist aspect: tying ideas to observations via prediction. What conceivable universes contain non-observational truths, for example, Platonic/Kantian "pure a priori deduction" type mental-only truths? Imagine for convenience's sake a Matrix-type simulated universe, not necessarily a natural one, so it need not be lawful nor unfold from basic laws.
Reason for asking: if you head over to a site like The Orthosphere, they will tell you that Rationalism can find only some truths, not all of them. One good answer would be: "This could happen in universes of type X, Y, or Z. What are your reasons for thinking ours could be one of them?"
No need to posit crazy things; just think about selection bias -- are the sorts of people who tend to become rationalists randomly sampled from the population? If not, why wouldn't such people have blind spots on that basis alone?
Yes, but if I get the idea right, it is about learning to think in a self-correcting, self-improving way. For example, maybe Kanazawa is right that intelligence suppresses instincts / common sense, but a consistent application of rationality would sooner or later lead to discovering this and forming strategies to correct for it.
For this reason, it is more about the rules (of self-correction, self-improvement, self-updating sets of beliefs) than about the people. What kinds of truths would be potentially invisible to a self-correcting, observation-based ruleset even if it were practiced by all kinds of people?
Just pick any of a large set of things the LW-sphere gets consistently wrong. You can't separate the "ism" from the people (the "ists"), in my opinion. The proof of the effectiveness of the "ism" lies in the "ists".
Which things are you thinking of?
A lot of opinions much of LW inherited uncritically from EY, for example. That isn't to say that EY doesn't have many correct opinions, he certainly does, but a lot of his opinions are also idiosyncratic, weird, and technically incorrect.
As is true for most of us. The recipe here is to be widely read (LW has a poor-scholarship problem too). Not moving away from EY's more idiosyncratic opinions is sort of a bad sign for the "ism."
Could you mention some of the specific beliefs you think are wrong?
Having strong opinions on QM interpretations is "not even wrong."
LW's attitude on B is, at best, "arguable."
Donating to MIRI as an effective use of money is, at best, "arguable."
LW consequentialism is, at best, "arguable."
Shitting on philosophy.
Rationalism as part of identity ("aspiring rationalist") is kind of dangerous.
etc.
What I personally find valuable is "adapting the rationalist kung fu stance" for certain purposes.
Thank you.
B?
Bayesian.
Strongly agree. http://lesswrong.com/lw/huk/emotional_basilisks/ is an experiment I ran which demonstrates the issue. Eliezer was unable to -consider- the hypothetical; it "had" to be fought.
The reason is that the hypothetical implies a contradiction in rationality as Eliezer defines it: if rationalism requires atheism, and atheism doesn't "win" as well as religion does, then the "rationality is winning" definition Eliezer uses breaks; suddenly rationality, via winning, can require irrational behavior. Less Wrong has a -massive- blind spot where rationality is concerned; for a website that spends a significant amount of time discussing how to update "correctness" algorithms, actually posing challenges to those "correctness" algorithms is one of the quickest ways to shut somebody's brain down and put them into reactionary mode.
I don't think that's argued. It's also worth noting that the majority of MIRI's funding over its history comes from a theist.
It seems to me that he did consider your hypothetical, and argued that it should be fought. I agree: your hypothetical is just another in the tedious series of hypotheticals on LessWrong of the form, "Suppose P were true? Then P would be true!"
BTW, you never answered his answer. Should I conclude that you are unable to consider his answer?
Eliezer also has Harry Potter in MoR withholding knowledge of the True Patronus from Dumbledore, because he realises that Dumbledore would not be able to cast it, and would no longer be able to cast the ordinary Patronus.
Now, he has a war against the Dark Lord to fight, and cannot take the time and risk of trying to persuade Dumbledore to an inner conviction that death is a great evil in order to enable him to cast the True Patronus. It might be worth pursuing after winning that war, if they both survive.
All this has a parallel with your hypothetical.
I've noticed that problem, but I think it is a bit dramatic to call it rationality breaking. I think it's more a problem of calling two things, the winning thing and the truth-seeking thing, by one name.
Well...
QM: Having strong positive beliefs on the subject would be not-even-wrong. Ruling some interpretations out is much less so, and that's what he did. Note that I came to the same conclusion long before.
MIRI: It's not accepted on LW any more uncritically than you'd expect given who runs the joint.
Identity: If you're not letting it trap you by thinking it makes you right, if you're not letting it trap you by thinking it makes others wrong, then what dangers are you thinking of? People will get identities. This particular one seems well-suited to mitigating the dangers of identities.
Others: more clarification required
I think there's plenty of criticism voiced about that concept on LW, and there are articles advocating keeping one's identity small.
And yet...