Rationality Quotes October 2013
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
--Paul Graham (When I saw this quote, I thought it had to have been posted before, but googling turned up nothing.)
The closest you can come to getting an actual "A for effort" is through creating cultural content, such as a Kickstarter project or starting a band. You'll get extra success when people see that you're interested in what you're doing, over and beyond its role as an indicator that what you'll produce is otherwise of quality. People want to be part of something that is being cared for, and in some cases would prefer it to lazily created perfection.
I'd still call it an "A for signalling effort," though.
A good effort doesn't result in valuable software, but it could result in you learning to program better, increasing your human capital.
That's not necessarily false, but it's a dangerous thing to say to yourself. Mostly when I find myself thinking it, I've just wasted a great deal of time, and I'm trying to convince myself that it wasn't really wasted. It's easy to tell myself, hard to verify, and more pleasant than thinking my time-investment was for nothing.
It sure seems like a step up from when your time is really wasted, and you spent it all playing on the computer.
It's a continuum. I certainly wouldn't call time spent having fun and training your reflexes or pattern-matching ability wasted. Or sleep. Or even sitting around somewhere where you can think and meditate. The only wasted time is time spent in too much pain to even think.
I disagree with this quote. In the real world, many things aren't all or nothing. The equivalent of a good effort isn't producing no software; it's producing software that's marginally worse than the best software you could produce. That software will sell marginally less well than the best software you could produce, and produce marginally less profit, but it will still sell.
This doesn't say software is all-or-nothing. Not producing the best software you can gets you money only if it (to some extent) still does what the customer needs. Misinformed customers aside, if it doesn't do what the customer needs, you do get nothing. If it is not-quite-perfect, it's the result that gets you your not-quite-what-it-could-have-been profit. Not the effort.
Completely wrong.
As a software engineer at a company with way too much work to go around, I can tell you that making a "good effort" goes a long way. 90% of the time you don't have to "make it work or get a zero". As long as you are showing progress you can generally keep the client happy (or at least not firing you) as you get things done, even if you are missing deadlines. And this seems very much normal to me. I'm not sure where in the market you have to "make it work or get a zero". I'm not even convinced that exists.
The essay is about startups. Perhaps they are different from your company. Also, getting things done but not in time for deadlines is not the same as not getting them done but making a good effort.
But eventually you do have to make sure that things are done and work.
Mmm, no, whether you like it or not people who live off rent-seeking do exist.
True, but not obviously opposed to the quote. Rents are not a reward for a good effort.
Ludwig Wittgenstein, Philosophical Investigations 116-117
Since English isn't sound and something like 90% of English words simply don't have real definitions, I'm not sure I want to tangle with this guy's work. It's either going to be tenuous logic with an exploration in equivocation, or a baffling/impressive display of linguistics. Which was it?
Well he did write it in German.
Philosophical Investigations is closer to the latter. (There's a big difference between Late and Early Wittgenstein - basically two completely different authors)
There is also a fair bit of continuity between the two: he retains one of the main theses of his earlier work, that much of our confusion about so-called 'philosophical problems' is caused by people abusing language.
John Dolan, of all people.
Paul Graham
From the same article:
Also worthwhile from it:
("them" refers to labels like "x-ist" or "y-ic" used to tar positions by association, rather than demonstrating their falsity.)
-- Henry Ford
I wonder if that is true. I suspect a sufficiently competent personal marketer would be able to pull it off. Of course, it may be just as easy for them to build an equally positive reputation from absolutely nothing.
So they are building their reputation on their marketing skills, not on the future.
Which is to say, causality goes only one way.
It may, however, be possible to build a reputation on what you are preparing to do.
Anne Frank
(h/t Jonas Muller)
Alfred Korzybski - Science and Sanity Page 55
While this is true, it's often the case that you have to start by collecting the isolated facts, just as you'd start building a house by buying some number of bricks.
Arguably you'd start building a house by deciding what kind of house you want and then making architectural plans and drawings...
Arguably, an early step in building a brick house will consist of gathering a bunch of bricks together. Arguably, that was obvious enough in the earlier comment to not benefit from correction.
Sure, but who claims/acts as if isolated facts do produce a science? This seems to be taking down a strawman.
Also, the analogy is misleading. A heap of bricks arranged in the right way with the right sorts of mutual connectors does produce a house. However, even an appropriately arranged and connected set of facts does not produce a science. At best, it produces a theory, which is a product of a science, but not a science itself. Science is more akin to architecture than to a house.
Science classes, especially before the high school level, are often taught as though science is just a collection of facts about trees or dinosaurs or whatever. Anyone who hasn't had the benefit of a good science program in their school might continue to think that science is just experiments to generate facts.
Korzybski is not here arguing against anything, but making an exposition. I won't type in the whole passage (which is only a Google search away anyway), but the quotation is from the beginning of chapter 4, entitled "On Structure", which is the first chapter of the second section of Science and Sanity, entitled "General on Structure". The first section, of three chapters, was introductory, an overture. He begins the main opera by drawing attention to two clear trends in the development of science: the increasing reliance on experiments, and the increase of verbal rigour. "The second tendency has an importance equal to that of the first; a number of isolated facts does not produce a science any more than a heap of bricks produces a house. The isolated facts must be put in order and brought into mutual structural relations in the form of some theory. Then, only, do we have a science."
-Charlie Munger
Hyperbole.
If we narrow the domain to software design and slap on some rigorous type systems and big unit-testing suites, it starts looking better for this statement.
On reflection, 'forgetting' is the wrong word here.
We don't default to being definite about anything, least of all our aims. Clear awareness has to be built and maintained, not merely uncovered.
Randall Munroe - Time
Followed by:
Richard Feynman Lectures on Physics
Neal Stephenson - "Quicksilver"
I suspect that many traditions and protocols promote competent decision making. Do you think that, say, the U.S. military would do better in Afghanistan if President Obama issued an order declaring "when in battle ignore all considerations of tradition and protocol"? Group coordination is hard, organizations put a huge amount of effort into it, and traditions and protocols often reflect their best practices.
"The Navy is a master plan designed by geniuses for execution by idiots. If you're not an idiot, but find yourself in the Navy, you can only operate well by pretending to be one." -Herman Wouk, The Caine Mutiny
That quote seems to be very good at making idiots who think they are not (the majority) behave like idiots.
Yes, the quote is best modified to: "Whenever a small group of competent people..."
What strikes me most about this quote is how well Stephenson understands the psychology of his audience.
Whenever a group of subcompetent people get together to do something, they assume they are competent enough to throw tradition and protocol out the window...
Well designed traditions and protocols will contain elements that cause most subcompetent people to not want to throw them out.
Well designed traditions and protocols will contain elements that cause most competent people to not want to throw them out.
No. If an organization contains sub-competent people, it should take this into account when designing traditions and protocols.
Corollary: all organisations eventually contain sub-competent people. Design protocols accordingly.
If an organization contains sub-competent people, its traditions and protocols need to ensure those people are themselves quickly and reliably thrown out.
Not necessarily, sub-competent people can still be useful, e.g., unskilled labor is a thing.
Unskilled and sub-competent are not synonyms in this context; even a ditch-digger can be competent: it just means they dig quickly, regularly, and with a minimum of fuss. And not arbitrarily throwing out protocols for momentary convenience is a matter of both maintaining regularity and minimizing fuss, so I shouldn't have to worry about the ditch-digging committee making a mess of things so long as they all have their heads screwed on straight.
Therefore, a reliable method for evaluating competency needs to be part of the traditions and protocols. Otherwise it's just a question of time...
Having just listened to much of the Ethical Injunctions sequence (as a podcast courtesy of George Thomas), I'm not so sure about this one. There are reasons for serious, competent people to follow ethical rules, even when they need to get things done in the real world.
Ethics aren't quite the same as tradition and protocol, but even so, sometimes all three of those things exist for good reasons.
Thomas Huxley
Extra Credits react to their surprise.
-Charles Stross, "Rule 34"
Why Everyone Else Is A Hypocrite, by Robert Kurzban, p. 6.
-- Greg Egan, "Distress".
Abraham Lincoln, Letter to Horace Greeley
-Robert Heinlein, Double Star
I have to confess this sounds creepy to me. I have a strong prior that the one who says something like this is about to do something horrible.
How is this a rationality quote, as opposed to simply a statement about a personal preference?
Paul Graham
Note: this isn't always right. Anyone giving advice is going to SAY it's true and non-obvious even if it isn't. "Don't fall into temptation" etc etc. But that essay was talking about mistakes which he'd personally often empirically observed and proposed counter-actions to, and he obviously could describe it in much more detail if necessary.
--Zack Weinersmith, SMBC rejected ideas
I am so stealing that for my next job interview.
A reply to the request in The Register for programmers to share their experiences working on computationally intractable tasks.
For the particular problem that comment is discussing (automatic code generation), I suspect that the CS people were describing a general automatic code generation problem, and the engineers solved a relaxation of that problem which was not in fact intractable.
In general, I don't know how much I like the P-NP distinction. I hear from people who have been in the metaheuristics field for a while that until that became common knowledge, it was basically impossible to get a heuristic published (because you couldn't provably find the optimal solution). But it seems like that distinction leads to an uncanny valley of ignorance, where a lot of people avoid problems that are NP-hard instead of looking in their neighborhood for problems that admit polynomial-time algorithms. (For example, instead of "find a tour that is not inferior to any other tour", use "find a good tour" for the TSP.)
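The "find a good tour" relaxation can be sketched with the classic nearest-neighbor heuristic. This is an illustrative example, not something from the thread, and the city coordinates are randomly generated: the heuristic runs in polynomial time and returns a reasonable tour, whereas certifying that no better tour exists is the NP-hard version of the problem.

```python
import math
import random

def tour_length(points, tour):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def nearest_neighbor_tour(points):
    """Greedy O(n^2) heuristic: repeatedly visit the closest unvisited city.

    Produces a 'good' tour with no optimality guarantee, in contrast to
    the NP-hard 'find the best tour' formulation.
    """
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(50)]
greedy = nearest_neighbor_tour(cities)
arbitrary = list(range(len(cities)))  # visit cities in index order, for comparison
print(tour_length(cities, greedy) < tour_length(cities, arbitrary))
```

In practice a heuristic like this is often followed by local improvement (e.g. 2-opt swaps), which again improves the tour without ever proving optimality.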
David Deutsch, The Beginning of Infinity
I disagree with Deutsch, I think prediction is much more important to science than he makes it out to be.
The issue is the questions (about the future) you ask. Deutsch says
and, of course, that is true, but these are "uninteresting" questions to ask. Let me ask for different predictions: please predict what will happen to the balls if the cups are transparent. Please predict what will happen to the person being sawed in half if we take away three sides of the box he's in.
Given the proper questions one will have to understand "how the trick works" to produce correct forecasts.
Science is about predictions, provided you ask to predict the right thing.
Deutsch's point (made at greater length in the book) is that predictions are lower-level than the true target of science, explanations, not that they aren't valuable. One of the main ways to test explanations is to derive predictions from them and then check those predictions, and getting too many predictions wrong is fatal for an explanation.
Your example of "interesting" predictions highlights his point: the explanation of how the trick works can readily generate a prediction of what would happen if the cups were transparent, but the prediction that the cups would later be empty does not readily generate a prediction of what would happen if the cups were transparent. By focusing directly on explanations, he makes it obvious which predictions are the interesting ones. Indeed, I'd even speculate that someone who didn't have and couldn't acquire the concept of explanations would have trouble grasping the idea that some predictions are more 'interesting' than others and that there's a reliable way to determine which predictions those are.
Oh, I don't think so. If you're a medieval farmer, a prediction of the optimal time to plant is of extreme interest to you regardless of what kind of explanation is behind it. The Ptolemaic epicycles produced good predictions of much interest for a long time even though the explanation behind them was wrong.
Think about it this way: would you rather have a good prediction without an explanation or would you rather have an explanation that is unable to make successful predictions?
However I acknowledge that this is a "what's more important -- the chicken or the egg?" discussion :-)
I believe we have switched uses of the word "interesting."
This comparison, to me, maps on to "Would you rather have bricks that aren't arranged as a house, or a house made out of nothing?" Well, it's better to have the bricks than not, but the usefulness of a house depends on what it is made from, and a house made from nothing is useless (and very possibly harmful, if it prevents me from seeking out superior shelter).
That's what I meant by 'lower level': a prediction is related to an explanation like a brick is related to a house. The statement "construction is about houses" does not mean that construction is not about bricks, but it does mean a focus on bricks for bricks' sake is not construction.
Not really, but it's my fault for not specifying better that I used "interesting" in the meaning elongated towards "useful" and not towards "fucking awesome".
Well, that's not the mapping for me. I view predictions as the useful/consumable/what-you-actually-want end result, and I view explanations as a machine for generating predictions. So the image in my head is that you have a box with a hopper and a lever; you put the inputs into the hopper, pull the lever, and a prediction pops out.
Now sometimes that box is black and you don't know what's inside and how it works. This is a big minus because you trust the predictions less (as you should) and because your ability to manipulate the outcome by twiddling with the inputs is limited. However note that you can still empirically verify whether the (past) predictions are any good just fine.
Sometimes the box is transparent and you see all the pushrods and gears and whatnot inside. You can trace how inputs get converted to outputs and your ability to manipulate the outcome is much greater. You still have to empirically test your predictions, though.
And sometime the box is semi-transparent so that you see some outlines and maybe a few parts, the rest is fuzzy and uncertain.
If--, by Rudyard Kipling
Scott Alexander
Assuming the spell was keyed on "gender identity" and not any more objective aspect of gender/sex.
Mu means "no thing." Like "quality" it points outside the process of dualistic discrimination. Mu simply says, "no class: not one, not zero, not yes, not no." It states that the context of the question is such that a yes and a no answer is in error and should not be given. "Unask the question" is what it says.
.... [Somewhere later]
That Mu exists in the natural world investigated by science is evident. […] The dualistic mind tends to think of Mu occurrences in nature as a kind of contextual cheating, or irrelevance, but Mu is found through all scientific investigation, and nature doesn't cheat, and nature's answers are never irrelevant. It's a great mistake, a kind of dishonesty to sweep nature's Mu answers under the carpet. […]
When your answer to a test is indeterminate it means one of two things: that your test procedures aren't doing what you think they are or that your understanding of the context of the question needs to be enlarged. Check your tests and restudy the question. Don't throw away those Mu answers! They're every bit as vital as the yes and no answers. They're more vital. They're the ones you grow on.
--- Robert M. Pirsig (Zen and the Art of Motorcycle Maintenance)
That's a bad way of phrasing it. "Mu" is about maps, not territories. What is "evident" is that some models do not result in testable predictions (answerable questions). The rest of the quote is pretty good.
Agreed. I always skimmed over that claim and never wondered why. The map vs territory analogy makes a lot of sense. After all the 'Mu' is an answer to a question. And the question is based on some map of the territory. Thanks for triggering that series of clicks in my mind. :)
You only need one > character at the beginning of a paragraph (but you do need another one at the beginning of the next paragraph). If you'd like to have a quote span multiple lines, you need to make each line its own paragraph by hitting return twice between lines of text.
One of my professors, in a lecture covering compulsory cache misses and prefetching:
Unrelated to the rationality content of this quote, he thinks Moore's Law is a self-fulfilling prophecy, because how fast chip manufacturing improves depends on how hard engineers work, which depends on how hard they think their competition will work, which is an interesting idea that I hadn't heard before him.
The interesting part of Moore's Law is the fact that it's even possible. If there was a Moore's Law for the speed of motor vehicles it would soon fail regardless of how hard anyone tried to make it true.
That's because we're already at the limits. There was a Moore's Law for the speed of transatlantic ships for about two centuries, and one for transatlantic flights for about half a century. (And I kind of doubt Moore's Law will last for much longer.)
EDIT: though if you measure them in doubling times rather than in years, I agree that those for vehicles weren't anywhere near as impressive.
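The doubling-time comparison can be made concrete with a back-of-the-envelope sketch. The growth figures below are rough illustrative assumptions, not data from the thread; the point is only that under exponential growth the implied doubling time is ln(2) divided by the annual growth rate, and the two trends differ by orders of magnitude.

```python
import math

def doubling_time(initial, final, years):
    """Implied doubling time, assuming smooth exponential growth
    from `initial` to `final` over `years` years."""
    rate = math.log(final / initial) / years  # continuous annual growth rate
    return math.log(2) / rate

# Rough illustrative figures (assumptions, not measured data):
# transistor counts per chip: ~2,300 (1971) to ~1e9 (2011)
chips = doubling_time(2_300, 1_000_000_000, 40)
# transatlantic crossing speed: ~5 knots under sail (1620) to ~30 knots by steam (1950)
ships = doubling_time(5, 30, 330)

print(round(chips, 1))  # about two years per doubling
print(round(ships))     # over a century per doubling
```

So even though ship speeds did improve steadily for centuries, measured in doubling times the trend was nothing like Moore's Law.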
It's possible for a while, anyway; we're already reaching certain physical limits, and I suspect Moore's Law in the sense of shrinking silicon transistor sizes will be over in the near-ish future (order ~10 years).
It may just be that we started much further away from the optimum in this case than we did with things like motor vehicles (where there are fairly low limitations based on safety and human reaction times).
“For the sin of the idolater is not that he worships stone, but that he worships one stone over others.”
-Scott Bakker
This is actually just a chapter opener in a fantasy story, but I like it as a sort of short hand for the de-mystifying rainbows sequence. Everything is connected and that's ok.
Which story might that be?
-- Havelock Vetinari, Going Postal, Terry Pratchett
The Hedgehog and the Fox: hedgehogs "know one big thing" and have a theory about the world; they account for particular events within a coherent framework, bristle with impatience toward those who don't see things their way, and are confident in their forecasts. They are also especially reluctant to admit error. For hedgehogs, a failed prediction is almost always "off only on timing" or "very nearly right". They are opinionated and clear, which is exactly what television producers love to see on programs. Two hedgehogs on different sides of an issue, each attacking the idiotic ideas of the adversary, make for a good show. Foxes, by contrast, are complex thinkers. They don't believe that one big thing drives the march of history (for example, they are unlikely to accept the view that Ronald Reagan single-handedly ended the Cold War by standing tall against the Soviet Union). Instead, foxes recognize that reality emerges from the interactions of many different agents and forces, including blind luck, often producing large and unpredictable outcomes.
~ Daniel Kahneman (Thinking, fast and slow)
Fox News: brought to you by a bunch of Hedgehogs.
All TV reporting is hedgehog-style: nuance is too confusing for the common people.
Woosh
Motto of the Royal Society.
I prefer the translation: "Take no one's word for it."
Same here, I just decided to go with the most literal one.
Megan McArdle
Either of those two things can be sufficient to make it advisable to prevent access to sharp objects. While the language sounds nicer, "valuing different things" and "assessment that various things will go wrong is different" would seem to incorporate "evil" and "stupid" quite comfortably.
IME, the latter are subsets of the former, and therefore require more evidence to pick out reliably.
Confirmation bias is an odd choice. I think I can see where she's coming from -- people assume members of their respective political outgroups to be inherently malicious, and form their judgments of specific actions accordingly -- but the assumption of malice does all the work there.
Hostile attribution bias seems like a better fit to me, maybe with a dash of outgroup homogeneity.
In what way does expanding coverage reduce innovation? If anything, more coverage means a bigger market for innovations.
By lowering prices for drugs, for example, more people can afford them, but pharmaceutical firms have lower profit incentives to find new drugs. The medical device tax, furthermore, will help fund Obamacare but also reduce incentives to develop new medical devices.
While true, people who are too stupid to be allowed near sharp objects have preferences and make choices that are not quite random. It is often (but not always) the case that given several alternatives, one can reliably predict towards which one most stupid people will gravitate.
Ducard: Are you ready to begin?
Bruce Wayne: I… I can barely stand...
Ducard: Death does not wait for you to be ready!! Death is not considerate, or fair! And make no mistake: here, you face Death.
Batman Begins, Christopher Nolan, 2005
Marry
Ada Lovelace
Maybe Einstein, maybe not.
This ought to be embedded deeply in the minds of everyone involved in education. Most regrettably, it is not.
Abstruse Goose (alt text)
The few times I have been in large groups of people objectively smarter than myself, I did not think anything remotely similar.
Robert Sapolsky
I read that whole quote twice through and even thought about it for a few minutes as well, but I have no idea what I'm supposed to get out of it. Could anyone help me out?
"Clear communication is good"?
And yet the quote fails to communicate clearly.
That's because the transcriber has done a very poor job of going from speech to text. If you quote somebody, you should delete the Likes and Ums, add commas and periods, etc. (With the added constraint of not knowingly changing meanings.)
I would be highly annoyed if I were transcribed thus.
Ironic, ain't it?
Writing for the general public is hard.
Intrinsic motivation matters.
jimmy, the resident non-evil cognitive engineer.
Nassim Taleb
I think he got it backwards? :P
I think I more or less agree with Taleb, so I will try to make it more plausible.
This suggests the HEURISTIC that there is more to be gained from stopping people shooting themselves (or each other) in the foot than there is from promoting people's happiness.
I'm pretty sure Taleb would agree it is only a heuristic, and that bednets are a legitimate counterexample & are in fact pretty great.
I suggest a new rule: the source of the quote should be at least three months old. It's too easy to get excited about the latest blog post that made the rounds on Facebook.
Stacker Pentecost, Pacific Rim: Tales from Year Zero
This use of bold lettering to show spoken emphasis is nonstandard in most contexts, but it is standard in comics and I would kind of like to see it come into broader use.
(Also:
s/contients/continents/.)

It is, in fact, from comics. Another nonstandard habit is the frequent use of italics I've picked up from Eliezer Yudkowsky, along with other writing habits that would be qualified as "passionate" by some and "histrionic" by others. I myself find it quite practical in properly conveying emotional intensity.
I've been deitalicizing a bit lately.
We all grow old, don't we?
Nostalgic note: I remember back when I used to resent you for calling religion 'insanity'. Nowadays, I find it costs me strenuous effort to summon the very memory of a mindset where I could see it as anything but.
Similar here. I used to have some respect for the views of religious people, but it becomes more and more difficult to understand the way of thinking "some savages thousands of years ago had an imaginary invisible friend (usually telling them to kill everyone else), and despite all the knowledge and experience we have now, we should treat this invisible friend as a serious source of knowledge and morality (of course, avoiding those parts that are just too absurd and pretending they never happened)".
But I guess that's just human mind as usual. The more time I spend with people who believe in the fantasy land, the less silly the fantasy land seems. The more I think about what we know about reality, the more crazy it seems when someone comes and says, essentially, "but my invisible friend says so and so".
Now I wonder: if I spent enough time away from LessWrong and came back, which parts of LessWrong would seem crazy? I am not saying the situation is the same; I was impressed by LessWrong when I saw it for the first time, whereas with religion I had to have religious friends for years just to move it from the "total craziness" category to the "worth considering" category. But it is still possible that some parts of LessWrong would seem crazy.
There was a time when I was very rude to religious people because I thought that made me wise. Then there was a time when I was very polite because I thought equity in consideration was wise.
Now I'm just curt because I have science to do and no time to deal with fools.
Ah, yes :-D
There's only a certain amount of emphasis to go around. The more things you italicize, the less important each italicized word seems, and then when something's really important it doesn't stand out. It's like swearing: if I swear every time I spill a glass of water, then it loses its effect, and when I drop a hammer on my toe there is nothing I can think of that will express the strength of my feelings.
In comics, the difference in weight between bold and standard is much less than in typical fonts. I think it works well in comics but here it makes me read things out of order in a distracting way.
I keep trying to tell my mom exactly this, every time we need to design some kind of print materials for the family business. She just doesn't get that emphasis is about the relative share of a reader's attention to different parts within a text, a positional good of sorts.
Oh, I keep getting that argument and I disagree completely. Swearing does not add nor subtract emphasis; it is punctuation, placeholder words that might as well be onomatopoeias. For an example of a character who swears constantly and still manages to highlight quite well differences in emotional intensity, I would suggest you look at Malcolm Tucker from the British political satire The Thick Of It. For another who never swears yet also conveys utter fury, anger, frustration, pain, and so on impeccably, I would suggest having a look at any of the latest Doctors from Doctor Who. An angry David Tennant is a frightening, frightening sight to behold. In the case of the hammer on your toe, I believe a heartfelt ARGH! does the trick nicely, with an extra hiss afterwards if you feel like it.
I personally find that part of the relief from swearing comes from breaking a taboo, and that this weakens over time. But perhaps watching The Thick Of It will reveal to me a more sustainable way.
As for italics, in the limit case where everything is in italics you surely would not conclude that THE WHOLE THING IS EXTRA SUPER IMPORTANT. So there's some crossover point; we just disagree on where it is. I believe my view is common at least for more formal (book-type) writing.
I suppose this is scoped to the statement "if I swear every time I spill a glass of water, then it loses its effect and when I drop a hammer on my toe there is nothing I can think of that will express the strength of my feelings?"
Because the overall point that emphasis must be conserved stands quite well.
Not really. Watch any opera or musical, listen to any speech; there's enough emphasis around to go on for hours and days, as long as you keep it varied and well-executed.
Heck, just marathon Gurren Lagann and tell me when you actually think the emphasis wears thin. My bet is, never.
In all of your examples, there are down times. Even Lagann.
I've actually just broken an italics-using habit when writing fiction. I used to use italics all the time for emphasis and making it clearer how the text would sound if read out loud (it felt clearer to me, at least.) A reader commented that the software I used to convert my MS Word draft to an epub converted all italics to bold, and that he found it disruptive, and had tried mentally reading the lines with and without the bold and having the emphasis didn't seem to make anything clearer. I used select-all on my MS Word document and removed all the italics in order to make him a new epub. Rereading scenes later, it turned out that my friend was right, and the lack of italics hardly seemed to make a difference. Now I don't use them period. (Once I stopped using italics constantly, it felt odd to use them occasionally.)
To function as a human being, you are forced to accept a minimum level of deception in your life. The more complex and challenging your life, the higher this minimum. At any given level of moral and intellectual development, there is an associated minimum level of deception in your life. If you aren't deceiving others, you are likely deceiving yourself. Or you're in denial.
You can only lower the level of deception in your life through further intellectual and moral development. In other words, you have to earn higher levels of truth in your life.
--Venkatesh Rao (Be Slightly Evil)
I can't find a specific meaning in this. What does "accept deception" mean: to lie to others, to pretend inability to see through specific lies of others, or just to be generally aware that more information on average contains more false information, without knowing specifically which parts are false?
OK, maybe that misses context. Further down in the text he categorizes five types of deception:
Hope that helps.
Oh Dear Lord
This reminds me of a previous rationality quote:
--E.O. Wilson
Whenever I have a philosophical conversation with an artist, invariably we end up talking about reductionism, with the artist insisting that if they give up on some irreducible notion, they feel their art will suffer. I've heard, from some of the world's best artists, notions ranging from "magic" to "perfection" to "muse" to "God."
It seems similar to the notion of free will, where the human algorithm must always insist it is capable of thinking about itself one level higher. The artist must always think of his art one level higher, and try to tap unintentional sources of inspiration. Nonreductionist views of either are confusions about how an algorithm feels on the inside.
I don't think that this is an artist problem; I think this is a human problem, which a few scientists and philosophers have been forced to overcome in pursuit of truth.
Too many people have straw-vulcan notions of reductionism. (tvtropes warning)
I don't think it is complexity that makes art. I think it is emotion/feeling. Emotion/feeling may look like complexity to the rational mind because it does arise from a complex system which can be figured out bit by bit by the rational mind. But the essence of art is not to love anything that is complex and hard for the rational mind to figure out, but rather to focus on the feelings produced, the gestalt, the irrational, emotional connections and reactions.
-Rahul Bhattacharya on Humaira Bachal
Scott Adams arguing that "human personalities are nothing but weaknesses and defects that we romanticize" and that God would have no weaknesses and therefore no personality.
Since the local idea of a superintelligence implies that a foomed AGI would have no human weaknesses (and likely a goal system so incomprehensible to humans as to be indistinguishable from no goal system at all), this would imply that we would not notice a superintelligence if it were staring us in the face, provided it decided to keep humans around. There are some obvious flaws in his speculation, but the post is still well worth reading.
No, "incomprehensible" doesn't imply "invisible". That's like saying that impersonal laws of physics are indistinguishable from having no laws of physics at all. Gravity is impersonal, yet it is personally observable.
In a similar way, we could observe an appearance of mountains of paperclips, even if we had no clue why the AGI is doing that (assuming we would still exist and had access to the AGI's code and data).
Provided it decided to keep humans around in their original environment. Otherwise we would notice a change in the environment, even if we couldn't discover its cause.
Daniel Dennett, Kinds of Minds
-- Oliver Cromwell
Previously posted two years ago. I'm curious if some things bear repeating. Is there any accepted timeframe for duplicates?
Currently, no. It seems worthwhile to keep old quotes visible, but I suspect that would be better accomplished by automatically generating a database of rationality quotes from these threads (like DanielVarga's best of collections), and then displaying a random one on each LW page with frequency related to the number of upvotes they received, say. I don't think that duplicating quotes in quote threads is a good idea, because this focuses effort on finding new quotes and material to incorporate into a growing body of knowledge rather than rehashing previously found knowledge.
I endorse (with the possibly-expected caveat about Wilson score ranking).
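The weighted-random-display idea with the Wilson-score caveat could be sketched roughly as follows. This is only an illustration of the technique, not LW backend code; the quote data and function names are hypothetical:

```python
import math
import random

def wilson_lower_bound(upvotes, downvotes, z=1.96):
    """Lower bound of the Wilson score confidence interval for the
    upvote proportion. z=1.96 corresponds to a 95% interval; the
    lower bound penalizes quotes with few total votes."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / (1 + z * z / n)

# Hypothetical quote database: (quote, upvotes, downvotes).
quotes = [
    ("Quote A", 40, 2),
    ("Quote B", 5, 0),
    ("Quote C", 12, 6),
]

def pick_quote(db):
    """Pick one quote at random, weighted by its Wilson lower bound,
    so well-established high-scoring quotes appear more often."""
    weights = [wilson_lower_bound(up, down) for _, up, down in db]
    return random.choices([q for q, _, _ in db], weights=weights, k=1)[0]
```

Note the design point behind the caveat: raw upvote counts would let a quote with 5 upvotes and 0 downvotes outrank one with 40 upvotes and 2 downvotes in proportion terms, while the Wilson lower bound correctly ranks the better-attested quote higher.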
Unfortunately, I can't (don't know how to?) hack the LW backend. Is that something I can look into?
Does this idiom make sense to native English speakers?
It's archaic. The modern variant would be something like "Please, for goodness' sake, consider that you could be mistaken," or "Please, for fuck's sake," or "Please, for the love of God," and so on.
Not especially. I sort of skip over it and the meaning "probably some shoddy translation of something that means to convey emphasis" appears in my head without me bothering to notice the words.
I'm not keen on this one. It has a sensible reading as an injunction to keep the support of one's prior wide, and if that is what one is reminded of by the maxim, that is fine. But too often I see in everyday discourse people saying "you've made your mind up!" as a criticism. The argument becomes a bodyguard to support a belief that has no other support.
Some Wikipedia scholarship indicates that the real situation behind the quote is unpromising for a clear moral about rationality. Cromwell made this appeal on the occasion of the Scots proclaiming Charles II their king instead of accepting Cromwell's rule. Being rebuffed, he conquered them, and it appears from this biography, p. 184ff, that he would have had an easier job of it had he not taken the time to first invite their surrender. On the other hand, the Scots handicapped themselves by too strict an attention to the religious correctness of their generals and soldiers, at the expense of numbers in the field, and might even have benefitted from the lesser fervour that Cromwell suggested to them.
“By poet, I mean that farmer who plows his field with a plow that differs, however little, from the plow he inherited from his father, in order that someone will come after him to give the new plow a new name; I mean that gardener who breeds an orange flower and plants it between a red flower and a yellow flower, in order that someone will come after him to give the new flower a new name; or that weaver who produces on his loom patterns and designs that differ from those his neighbors weave, in order that someone will give his fabric a new name. By poet, I mean the sailor who hoists a third sail on a ship that has only two, or the builder who builds a house with two doors and two windows among houses built with one door and one window, or the dyer who mixes colors that no one before him has mixed, in order to produce a new color for someone who arrives later on to give the ship of the language a new sail, the house a new window, and the garment a new color.”
-Khalil Gibran, quoted in Reza Aslan's "Tablet and Pen"
By poet I would mean someone who writes poems.
Eh, probably. But given how we normally think about poetry and Middle Eastern culture, at least in Khalil Gibran's era (1900-1930), it's nice to see someone from that background talking about how awesome it is to build better boats. I like finding hints of modernism in unexpected places.
You can't call them 'inventors' though, because that's not as high-status as 'poet'.
It isn't? That's... broken.
Yes, that was my attempted point.
(In that vein my attempted point in reply approximately translates to "I agree with your point that it would be broken if true, am startled to hear that it actually is true but take your word for it".)
Did you mean innovator?
-Mr Spock, Star Trek II: The Wrath of Khan
In the land of the blind, the one eyed man is king^Wa dangerous heretic^W^W^Wangry that no one cares about visual design.
[where ^W represents the Unix terminal's delete-word keystroke and my annoyance at LW's lack of a strikethrough formatting option]
That what you meant?
"That which can be destroyed by the truth should be." - P. C. Hodgell
Sam Harris
(I am aware that the first would be inappropriate alone, but I felt it provided the correct setup for the Sam Harris quote, which he delivered during the questions at the Festival of Dangerous Ideas.)
Added: The context of Harris was a questioner asking 'If contra-causal free will doesn't exist, then do our decisions to love people not exist?' Harris was saying his argument forced us to give up false beliefs and the false emotions that followed from them (hatred), but our belief that love exists is still correct.
So why specifically does hatred not survive when love does?
Er, well, you should listen to the talk. I was going to summarise it, but it's a really great talk, and I'm preparing a LessWrong post on it (although I've been thinking of doing that for about a year).
The general idea that I thought was relevant though, was the idea of stripping away all false beliefs and emotions, and that love is still a part of the world.
Is there a transcript anywhere? I can read much more quickly than I can listen, and the talk is pretty long.
I have to say that I'm skeptical, though, that hatred would inherently be any more "false" than love.
I'd guess the talk is mostly a slightly inferior version of Harris' short ebook Free Will.
Due to the fundamental attribution error, understanding people's motivations deeply is likely to make you love them more and hate them less.
Well, statistically yes, but necessarily no. I've certainly encountered situations where the reverse was true.
This seems false. Love survives (and should survive) some truths but not others. There are some things that people can do which will cause other people to stop loving them. Revealing the truth about such things will tend to kill love.
---Count to a Trillion by John C. Wright
I'd note this novel was published long after Wright had his heart attack, hallucinated the Virgin Mary/Jesus/God/others, and converted to Catholicism.
The point of the quote stands. For that matter, apart from the times that Wright specifically talks about religious doctrine, he seems to have much the same views on other things and be much the same person as before his conversion, at least judging from his blog. (It starts in March 2003; he reports his conversion as a recent event in December 2003.)
If you read his conversion story, it is clear that to say "oh well, something went wrong with his brain" is facile. He had been moving in that direction for many years. He writes of himself before that episode:
As indeed it drove C.S. Lewis before him. I note that Lewis, Chesterton, and, for that matter, Wright enjoy a certain popularity at LessWrong, all of them having been frequently quoted with approval. People have also talked of the practical usefulness of spiritual exercises, and the concept of sin.
As I said on an earlier occasion, Lewis is laughing in his grave; and perhaps Wright will get the last laugh long before his.
Might I suggest a sweepstake on the date of the first long-time member of LW to announce their religious conversion? Personally I remain an unbeliever, but who can foretell their own future?
My objection here is not to the 'willpower yay!' bit, but to the multiple political digs interspersed in it, which substantially reduce the value of the quote for me, and I thought people were not noticing.
I am skeptical of his account. Everything is obvious in retrospect, and when someone is writing their conversion story, superimposing a 'journey to Catholicism' is easy. Just cherrypick.
He says he beat friends in arguments and showed their arguments were bad? So what? I have beaten other LWers in arguments and shown their understanding to be poor many times over the years, but if tomorrow I suffered brain damage and started worshipping Allah, it would be very easy for me to write "despite being a frequent writer at transhumanist websites, I was nevertheless drifting away and routinely showing that my fellow transhumanists were horribly, comically wrong about every basic point of philosophy, ethics and logic"; all it requires is a change of perspective.
We can see this hindsight on display right now in discussions of Silk Road. All over the place people are saying that the FBI knew who Ulbricht was from the start since there was a connection from his email address to an early mention of Silk Road, and how easy it would have been to de-anonymize Dread Pirate Roberts. Plausible... until we remember that no one in the world actually managed this despite intense interest by many people and organizations in SR, that if we had noticed the connection we had no good reason to believe that altoid/Ulbricht hadn't heard about SR through the Hidden Wiki or another discussion forum we simply didn't have access to or on a page that had linkrotted, that the indictments indicate that the FBI only managed to make the link much later after assigning someone fulltime to sift all Internet traces, and we're still not clear on whether they were sure DPR==Ulbricht until as late as June 2013.
(Assuming you believe that he's recounting the facts basically right. I believe Wright when he writes about his heart attack and hallucination as the reason for the conversion because it's a shockingly embarrassing way to convert, which invites even believers to write him off as believing due to neurological problems, and this has to be obvious to him; but that doesn't apply to his claims of having been tending toward Catholicism for years before.)
Oh, and something else to add: religious believers have this tendency to not understand that rationalists don't use quotes as arguments from authority. They quote people's words because the words make sense independently of the person. People who are "frequently quoted with approval" are quoted because they have frequently said things that make sense, not because anything they say is automatically right; if they shift to sayng things that don't make sense, the fact that they have been frequently quoted in the past won't carry over.
Just reading your own link, his "challenge" is something whose irrationality almost anyone here could see a mile off, and if he actually thought that that challenge made any sense, he must have had a second brain malfunction that led him to make the challenge before he had the one that happened after the challenge. (Or more realistically, I'd say he had an emotional breakdown first, then made the challenge, then had a physical brain malfunction.)
He also doesn't seem to understand the objections people gave to him. At the top of that very link he quotes someone asking why particularly Christianity since it seems so petty. His later reaction (after the brain malfunction) is "if science discovered tomorrow that the universe was half its apparent age, and estimated the stars as half their current number, would the belief in God somehow be twice as credible in your eyes?" Of course, to the extent that his God would seem less petty in a smaller universe, all the alternatives would seem less petty too.
It's also an incredible coincidence for a rational conversion (but not so incredible for a brain malfunction) that the religion he picked was one that was only a short distance, if at all, from the one predominant in his society and his upbringing. Why don't people in Christian societies ever ask God for a sign, get one, and turn into devout Muslims?
I am not saying twice as credible, but it would be more credible. If science reduced the age of the universe once, it may do it again, and who knows... there is a tiny chance it could go down to 6,000 years.
More generally, lower reliability of science would increase the probability that some intelligent agent is acting in the universe.
Problem is, increasing the probability from 0.0001 to 0.0002 is not the same thing as converting.
What hypothesis are you trying to refute with this question?
Edit: If it's the rational conversion hypothesis, note that people also are more likely to rationally convert to positions they've been exposed to, even in domains far away from religion. If it's the Catholicism is true hypothesis, this would not be surprising.
If it's the rational conversion hypothesis, then while people are more likely to rationally convert to positions they've been exposed to, it doesn't seem to me that they are enough more likely to explain the way conversions actually work. Furthermore, he supposedly got an experience directly from God. It wasn't a rational conversion in the sense of having been deduced from things he already knew, it was a new experience, and I wouldn't expect such things to be correlated with cultural context in the same way that ordinary rational conversions are. God can easily send Catholic experiences to Muslims and Muslim experiences to Catholics after all. Brain malfunctions, on the other hand, would be correlated with cultural context.
If it's the "Catholicism is true" hypothesis, then this example would be unsurprising, but other examples involving other religions would be even more surprising than they are now.
Too lazy to address this comment. Luckily Scott Alexander has done so in delightful detail: http://slatestarcodex.com/2013/06/17/the-what-youd-implicitly-heard-before-telling-thing/ TL;DR: the idea that Christianity is more likely to be true because it is old and some of its ideas match our vocabulary and aesthetics is unconvincing, because it is the very fact that it is old (and pervasive) that makes its vocabulary match some of our ideas and intuitions. It's hard for a system to survive that long while being completely wrong on every count. Pointing to things that the belief system got right is not very interesting. (Scott argues this case much better than I just did.)
Also, too late on the conversion thing: Leah Libresco converted to Catholicism some time ago.
Just to be clear beyond my closing aside that I remain an unbeliever, I am not defending Wright's (or Lewis's, or Chesterton's) argument here against anything but the knockdown that "oh well, something went wrong with his brain". Nor do I agree with Gwern's attribution of Wright's account of his pre-conversion self to hindsight bias, or "hindsight bias" becomes a universal counterargument against every account of past events.
More generally, one person's priors are not an argument against another's posteriors.
I'm pretty sure that's already happened.
See the last entry here.
I'm not convinced the whole thing is a decent rationality quote, as part of it seems to be Menelaus surrendering to the idea that "because Darwin discovered Natural Selection, he endorsed it".
On the other hand, "Some of his friends said you had to prick your finger with a pin to make the oath valid; and boys of particular boldness used a rusty pin, as if daring the Jihad plague to strike. Menelaus knew that was all nonsense: it was the willpower that decided oaths, nothing else. No pin would be as sharp as what he felt beating in his angry young heart." is brilliant: both understanding the inclination to irrationality, and also emphasising that rationality can be strengthened by emotion.
It appears to me that within the story, his knowledge of exactly who Darwin was has been greatly garbled by the processes of history. That's just a detail of the setting. My reading of Menelaus' attitude to evolution is that he is expressing much the same idea as Eliezer's characterisation of it as a blind idiot god that we should overcome and replace.
Great damage is usually caused by those who are too scrupulous to do small harm.
Source?