Rationality Quotes from people associated with LessWrong
The other rationality quotes thread operates under the rule:
Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.
Lately it seems that every MIRI or CFAR employee is also exempt from being quoted.
As interesting quotes still appear on LessWrong, Overcoming Bias, in HPMoR, and from MIRI/CFAR employees in general, I think it makes sense to open this thread to provide a place for those quotes.
Not a very advanced idea, and most people here probably already realised it -- I did too -- but this essay uniquely managed to strike me with the full weight of just how massive the gap really is.
I used to think "human brains aren't natively made for this stuff, so just take your biases into account and then you're good to go". I did not think "my god, we are so ridiculously underequipped for this."
-Luke, Pale Blue Dot
Stirring quotes from this video about the Singularity Institute (MIRI)
IMO synthetic biology constitutes a third domain of advancement - the future of the living world
Isn't that a subset of the material world? I imagine nanotechnology is going to play a part in medicine and the like too, eventually.
Of course, more than one thing can be about the future of the somethingsomething world.
Anything is a subset of another thing in one dimension or another.
EY - Interlude with the Confessor
EY is right by contemporary theories:
Though he's making a very different point, I'd like to point out something else, inspired by this piece, that I don't feel would fit with the narrative in the generic thread.
In my opinion, violence against men, or intimate partner violence as a gender-neutral construct, is equally important but more neglected, and, judging from a more neutral piece, just as tractable as violence against women.
To satisfy anyone's curiosity, I identify neither as a feminist, nor as a men's rights activist, nor as a humanist, but as a rationalist.
kamenin on Collapse Postulates
Eliezer Yudkowsky
-Robin Hanson, in a Bloggingheads.tv conversation with Daniel Sarewitz. Sarewitz was spending a lot of time criticizing naive views which many smart people hold about human enhancement.
It is tempting but false to regard adopting someone else's beliefs as a favor to them, and rationality as a matter of fairness, of equal compromise. Therefore it is written "Do not believe you do others a favor if you accept their arguments; the favor is to you." -- Eliezer Yudkowsky
-- Scott Alexander, On first looking into Chapman’s “Pop Bayesianism”
— Eliezer Yudkowsky, Creating Friendly AI
— Eliezer Yudkowsky, Creating Friendly AI
— Steve Rayhawk, commenting on Wei Dai's "Towards a New Decision Theory"
— Steve Rayhawk
-- NancyLebovitz
You can find the comment here, but it is even better when taken completely out of context.
Experience is the result of using the computing power of reality.
-- Roland
Quoting yourself is probably a bit too euphoric even for this thread.
-- Nominull3 here, a nearly six-year-old quote
"Taking up a serious religion changes one's very practice of rationality by making doubt a disvalue." ~ Orthonormal
"How do you not have arguments with idiots? Don't frame the people you argue with as idiots!"
-- Cat Lavigne at the July 2013 CFAR workshop
I predict the opposite effect. Framing idiots as idiots tends to reduce the amount that you end up arguing (or otherwise interacting) with them. If a motivation for not framing people as idiots is required look elsewhere.
If idiots do exist, and you have reason to conclude that someone is an idiot, then you shouldn't deny that conclusion -- at least when you subscribe to an epistemic primacy: that forming true beliefs takes precedence over other priorities.
The quote is suspiciously close to being a specific application of "Don't like reality? Pretend it's different!"
That's ... not quite what "framing" means.
That can be a useful method of learning. Pretend it's different, act accordingly, and observe the results.
This is more to address the common thought process "this person disagrees with me, therefore they are an idiot!"
Even if they aren't very smart, it is better to frame them as someone who isn't very smart rather than with the directly derogatory term "idiot."
(Certainly not my criterion, nor that of the LW herd/caravan/flock, a couple stragglers possibly excepted.)
I think you missed a trick here...
The term 'idiot' contains a value judgement that a certain person isn't worth arguing with. It's more than just seeing the other person as having an IQ of 70.
Trying to understand the world view of someone with an IQ of 70 might still provide for an interesting conversation.
Except that often it can't be avoided, or is "worth" it if only for status/hierarchy squabbling reasons (i.e. even when the arguments' contents don't matter).
That's why it's not a good idea to think of others as idiots.
Indeed, just as it can be smart to "forget" when you have a terminal condition. The "pretend it's different" from my ancestor comment sometimes works fine from an instrumental rationality perspective, just not from an epistemic one.
Whether someone is worth arguing with is a subjective value judgement.
And given your values you'd ideally arrive at those through some process other than the one you use to judge, say, a new apartment?
I think that trying to understand the worldview of people who are very different from you is often useful.
Trying to explain ideas in a way that you never explained them before can also be useful.
I agree. I hope I didn't give the impression that I didn't. Usefulness belongs to instrumental rationality more so than to epistemic rationality.
That quote summarizes a good amount of material from a CFAR class, and presented in isolation, the intended meaning is not as clear.
The idea is that people are too quick to dismiss people they disagree with as idiots, not really forming accurate beliefs, or even real anticipation controlling beliefs. So, if you find yourself thinking this person you are arguing with is an idiot, you are likely to get more out of the argument by trying to understand where the person is coming from and what their motivations are.
Having spent some time on the 'net I can boast of considerable experience of arguing with idiots.
My experience tells me that it's highly useful to determine whether one you're arguing with is an idiot or not as soon as possible. One reason is that it makes it clear whether the conversation will evolve into an interesting direction or into the kicks-and-giggles direction. It is quite rare for me to take an interest in where a 'net idiot is coming from or what his motivations are -- because there are so many of them.
Oh, and the criteria for idiocy are not what one believes or whether his beliefs match mine. The criteria revolve around the ability (or inability) to use basic logic, a tendency to hysterics, competency in reading comprehension, and other things like that.
Yes, but fishing out non-idiots from say Reddit's front page is rather futile. Non-idiots tend to flee from idiots anyway, so just go where the refugees generally go to.
LW as a refugee camp... I guess X-D
Qiaochu_Yuan
I really want to see the context for this.
http://lesswrong.com/lw/g9l/course_recommendations_for_friendliness/#comments
"Goedel's Law: as the length of any philosophical discussion increases, the probability of someone incorrectly quoting Goedel's Incompleteness Theorem approaches 1"
--nshepperd on #lesswrong
The probability that someone will say bullshit about quantum mechanics approaches 1 even faster.
I love that 'bullshit' is now an academic term.
At least, the possible worlds in which they don't start collapsing... Or something...
There's a theorem which states that you can never truly prove that.
That doesn't say much; perhaps it approaches 1 as 1 - 1/(1+1/2+1/3...+1/n)?
</pedantic>
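The pedantic point is real: a probability can approach 1 arbitrarily slowly. A minimal sketch of the suggested sequence 1 − 1/H_n, where H_n is the n-th harmonic number (illustrative code, not from the thread):

```python
# Sketch: the sequence 1 - 1/H_n (H_n = 1 + 1/2 + ... + 1/n)
# approaches 1, but extremely slowly, since H_n grows like ln(n).

def harmonic(n):
    """Return the n-th harmonic number H_n."""
    return sum(1.0 / k for k in range(1, n + 1))

def prob(n):
    """The pedant's suggested probability after n statements."""
    return 1.0 - 1.0 / harmonic(n)

for n in (10, 1_000, 100_000):
    print(n, round(prob(n), 4))
```

Even after 100,000 statements this is still well short of 1, which is the pedant's point: "approaches 1" says nothing about how fast.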
I like your example, it implies that the longer the discussion goes, the less likely it is that somebody misquotes G.I.T. in any given statement (or per unit time etc). Kinda the opposite of what the intent of the original quote seems to be.
Yea, but it's clear what he's trying to convey: for any event that has some fixed epsilon > 0 probability of happening, it's gonna happen eventually if you give it enough chances. Trivially, that includes the mentioning of Gödel's incompleteness theorems.
However, it's also clear what the intent of the original quote was. The pedantry in this case is fair game, since the quote, in an attempt to sound sharp and snappy and relevant, actually obscures what it's trying to say: that Gödel is brought up way too often in philosophical discussions.
Edit: Removed link, wrong reference.
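The "enough chances" claim can be made precise: with a fixed per-statement probability epsilon > 0 and independent statements, the chance of at least one occurrence in n statements is 1 − (1 − ε)^n, which tends to 1. A sketch with an illustrative epsilon (the value is not from the thread):

```python
# With a fixed eps > 0 per independent trial, the probability of
# at least one occurrence in n trials is 1 - (1 - eps)**n -> 1.

def at_least_one(eps, n):
    """Probability of at least one success in n independent trials."""
    return 1.0 - (1.0 - eps) ** n

eps = 0.01  # illustrative value, not from the thread
for n in (10, 100, 1_000):
    print(n, round(at_least_one(eps, n), 4))
```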
This is not true (and you also misapply the Law of Large Numbers here). For example: in a series (one single, continuing series!) of coin tosses, the probability that you get a run of heads at least half as long as the overall length of the series (e.g. ttththtHHHHHHH) is always >0, but it is not guaranteed to happen, no matter how many chances you give it. Even if the number of coin tosses is infinite (whatever that might mean).
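The coin-toss counterexample can be checked by simulation: the chance of a heads run at least half the series length stays positive but shrinks rapidly as the series grows, so "epsilon > 0 at each length" does not mean the event must eventually occur. A rough Monte Carlo sketch (seed and sample size are arbitrary):

```python
import random

def has_long_run(n, rng):
    """True if n fair coin tosses contain a heads run of length >= n/2."""
    run = best = 0
    for _ in range(n):
        run = run + 1 if rng.random() < 0.5 else 0
        best = max(best, run)
    return best >= n / 2

rng = random.Random(0)
trials = 20_000  # arbitrary sample size
for n in (4, 10, 20, 40):
    freq = sum(has_long_run(n, rng) for _ in range(trials)) / trials
    print(n, freq)
```

The estimated frequency collapses toward zero as n grows, which is exactly why the event is not guaranteed even over an ever-lengthening series.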
Interestingly, I read the original quote differently from you - I thought the intent was to say "any bloody thing will be brought up in a discussion, eventually, if it is long enough, even really obscure stuff like G.I.T.", rather than "Gödel is brought up way too often in philosophical discussions". What did you really mean, nshepperd???
It was the latter. Also, I am assuming that you haven't heard of Godwin's law, which is what the wording here references.
... any event for which you don't change the epsilon such that the sum becomes a convergent series. Or any process with a Markov property. Or any event with a fixed epsilon >0.
That should cover round about any relevant event.
Explain.
The Law of Large Numbers states that the sample mean of a large number of i.i.d. variables approaches its mathematical expectation. Roughly speaking, "big samples reliably reveal properties of the population".
It doesn't state that "everything can happen in large samples".
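The distinction shows up in a quick simulation: the sample mean of fair coin tosses settles near its expectation of 0.5, which is what the Law of Large Numbers says; it says nothing about every conceivable pattern eventually occurring (a sketch, seed and sample sizes arbitrary):

```python
import random

rng = random.Random(42)  # arbitrary seed

def sample_mean(n):
    """Mean of n fair coin tosses (1 = heads, 0 = tails)."""
    return sum(rng.random() < 0.5 for _ in range(n)) / n

# The sample mean concentrates around 0.5 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, sample_mean(n))
```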
Thanks. Memory is more fragile than I thought; wrong folder. Updated.
Perhaps the rule should be "Rationality Quotes from people associated with LessWrong that they made elsewhere", which would be useful, but not simply duplicate other parts of LW.
I think let's see what happens.
I think the rule should be simply the exact converse of the existing Rationality Quotes rule, so every good quote has a home in exactly one such place.
This would be ideal. I like the notion of having a place for excellent rationalist quotes but like having the "non-echo chamber" rationality quotes page too.
How about a waiting period? I'm thinking that quotes from LW have to be at least 3 years old. It's a way of keeping good quotes from getting lost in the past while not having too much redundancy here.
I think three years is too long. I would imagine that there are a large number of useful quotes that are novel to many users that are much less than three years old.
Personally I would say we should just let it ride as is with no restrictions. If redundancy and thread bloat become noticeable issues then yeah, we might want to set up a minimum age for contributions.