Rationality Quotes September 2011
Here's the new thread for posting quotes, with the usual rules:
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
-- Richard Dawkins, The Selfish Gene
(I know it's old and famous and classic, but this doesn't make it any less precious, does it?)
Sometimes I suspect that wouldn't even occur to them as a question. That evolution might turn out to be one of those things that it's just assumed any race that had mastered agriculture MUST understand.
Because, well, how could a race use selective breeding, and NOT realise that evolution by natural selection occurs?
Easily.
Realizing the far-reaching consequences of an idea is only easy in hindsight; otherwise I think it's a matter of exceptional intelligence and/or luck. There's an enormous difference between, on the one hand, noticing some limited selection and utilising it for practical benefits - despite having only a limited understanding, if any, of what you're doing - and, on the other hand, realizing how life evolved into complexity from its simple beginnings over a span of time that is difficult to grasp. Especially if the idea has to go up against well-entrenched, hostile memes.
I don't know if this has a name, but there seems to exist a trope where (speaking broadly) superior beings are unable to understand the thinking and errors of less advanced beings. I first noticed it when reading H. Fast's The First Men, where this exchange between a "Man Plus" child and a normal human occurs:
"Can you do something you disapprove of?" "I am afraid I can. And do." "I don't understand. Then why do you do it?"
It's supposed to be about how the child is so advanced and undivided in her thinking, but to me it just means "well then you don't understand how the human mind works".
In short, I find this trope to be a fallacy. I'd expect an advanced civilisation to have a greater, not lesser, understanding of how intelligence works, its limitations, and failure modes in general.
But what reason do we have to expect them to pick evolution, as opposed to the concept of money, or of extensive governments (governments governing more than 10,000 people at once), or of written language, or of the internet, or of radio communication, or of fillangerisation, as their obvious sign of advancement?
Just because humans picked up on evolution far later than we should have, doesn't mean that evolution is what they'll expect to be the late discovery. They might equally expect that the internet wouldn't be invented until the equivalent tech level of 2150. Or they might consider moveable type to be the symbol of a masterful race.
Just because they'll likely be able to understand why we were late to it, doesn't mean it would occur to them before looking at us. It's easy to explain why we came to it when we did, once you know that that's what happened. But if you were from a society that realised evolution [not necessarily common descent] existed as they were domesticating animals, would you really think of understanding evolution as a sign of advancement?
EDIT: IOW: I've upvoted your disagreement with the "advanced people can't understand the simpler ways" trope; but I stand by my original point: they wouldn't EXPECT evolution to be undiscovered.
I suspect that the intent of the original quote is that they'll assess us by our curiosity towards, and effectiveness in discovering, our origins. As Dawkins is a biologist, he is implying that evolution by natural selection is an important part of it, which of course is true. An astronomer or cosmologist might consider a theory on the origins of the universe itself to be more important, a biochemist might consider abiogenesis to be the key, and so on.
Personally, I can see where he's coming from, though I can't say I feel like I know enough about the evolution of intelligence to come up with a valid argument as to whether an alien species would consider this to be a good metric to evaluate us with. One could argue that interest in oneself is an important aspect of intelligence, and scientific enquiry important to the development of space travel, and so a species capable of travelling to us would have those qualities and look for them in the creatures they found.
This is my first time posting here, so I'm probably not quite up to the standards of the rest of you just yet. Sorry if I said something stupid.
Welcome to LessWrong.
I wouldn't consider anything you've said here stupid, in fact I would agree with it.
I, personally, see it as a failure of imagination on the part of Dawkins that he considers the issue he personally finds most important to be the one alien intelligences will find most important, but you are right to point out what his likely reasoning is.
I think you're interpreting the quote too literally; it's not a statement about some alien intelligences but an allegory to communicate just how important the science of evolution is.
Another chain of reasoning I have seen people use to reach similar conclusions is that the aliens are looking for species that have outgrown their sense of their own special importance to the universe. Aliens checking for that would be likely to ask about evolution, or possibly about cosmologies that don't have the home planet at the center of the universe. However, I don't think a sense of specialness is one of the main things aliens will care about.
Have you never looked at something someone does and asked yourself, "How can they be so stupid?"
It's not as though you literally cannot conceive of such limitations; just that you cannot empathize with them.
It's anthropomorphism to assume that it would occur to advanced aliens to try to understand us empathetically rather than causally/technically in the first place, though.
Anthropomorphism? I think not. All known organisms that think have emotions. Advanced animals demonstrate empathy.
Now, certainly it might be possible that an advanced civilization might arise that is non-sentient, and thus incapable of modeling another's psyche empathetically. I will admit to the possibility of anthropocentrism in my statements here; that is, in my inability to conceive of a mechanism whereby technological intelligence could arise without passing through a route that produces intelligences sufficiently like our own as to possess the characteristic of 'empathy'.
It's one thing to postulate counter-factuals; it's another altogether to actually attempt to legitimize them with sound reasoning.
Yeah. This was put very well by Fyodor Urnov, in an MCB140 lecture:
"What is blindingly obvious to us was not obvious to geniuses of ages past."
I think the lecture series is available on iTunes.
The British agricultural revolution involved animal breeding starting in about 1750. Darwin didn't publish Origin of Species until 1859, so in reality it took about 100 years for the other shoe to drop.
Selective breeding had been around much longer than that.
Selective breeding isn't necessarily the same as artificial selection, however. The taming of dogs and cats was largely considered accidental; the neotenous animals were more human-friendly and thus able to access greater amounts of food supplies from humans until eventually they could directly interact, whereupon (at least in dogs) "usefulness" became a valued trait.
There wasn't purposefulness in this; people just fed the better dogs more and disliked the 'worse' dogs. It wasn't until the mid-1700s that dog 'breeds' became a concept.
There were certainly attempts to breed specific traits earlier than that, but they were hindered by a poor understanding of inheritance. For example, in the Bible, Jacob tried to breed speckled cattle by putting speckled rods in front of the cattle when they were trying to mate. Problems with understanding how genetics works at a basic level were an issue even much later, and some of them still impact what are officially considered purebreds now.
I think that deliberate breeding of stronger horses dates back prior to the 1700s, at least to the early Middle Ages, but I don't have a source for that.
Absolutely. Even the dog-breeding practitioners were unaware of how inheritance operates; that understanding didn't come about until Gregor Mendel. We really do take for granted the vast amount of understanding of the topic we are inculcated with simply through cultural osmosis.
100 years is nothing in the evolution of a civilization though. The time between agricultural revolution and the discovery of evolution is not a typical period in the history of humanity.
I wouldn't say it has much preciousness to begin with. It is nearly nonsensical cheering. The sort of thing I don't like to associate myself with at all.
I wonder if there's any way to estimate how hard it is for an intelligent species to think of evolution. It's a very abstract theory, and I think it's plausible that intelligent species could be significantly better or worse than we are at abstract thought. I have no idea where the middle of the bell curve (if it's a bell curve at all) would be.
I would actually think evolution a particularly poor choice.
If you want to pick one question to ask (and if we leave aside the obvious criterion of easy detectability from space) then you would want to pick one strongly connected in the dependency graph. Heavier than air flight, digital computers, nuclear energy, the expansion of the universe, the genetic code, are all good candidates. You can't discover those without discovering a lot of other things first.
But Aristotle could in principle have figured out evolution. The prior probability of doing so at that early stage may be small, but I'll still bet evolution has a much larger variance in its discovery time than a lot of other things.
The genetic code might well vary. While it isn't implausible that other life would use DNA for its genetic storage, it doesn't seem to be that likely. It seems extremely unlikely that DNA would be organized in the same triplet codon system that life on Earth uses.
Heavier than air flight is also a function of what sort of planet you are on. If Earth had slightly weaker or stronger gravity, the difficulty of this achievement would change a lot. Also, if intelligent life had arisen from a winged species, one could see this impacting how much they study aerodynamics and the like. One could conceive of that going either way (say, having a very intuitive understanding of how to fly but considering it incredibly difficult to make an Artificial Flyer, or the opposite, using that intuition to easily understand what would need to be done in some form).
Other than that, your argument seems to be a good one.
This is a good one. I like it.
Seems dependent on substitute energy availability and military technology.
There seems to be significant variance in how much humans care about such things, and achievement depends significantly on interest. Would aliens care at all about this?
I think we would do quite poorly with any one such question and exponentially better if permitted a handful.
cringe. Please don't use "exponentially" to mean a lot when you have only two data points.
I mean we'd do more than twice as well with two questions than with one, and more than twice as well with three than with two. Usually, diminishing returns leads us to learn less from each additional question, but not here. How do I express that?
I have zero data points, I'm comparing hypothetical situations in which I ask aliens one or more questions about their technology. (It seems Dawkins' scenario got inverted somewhere along the way, but I don't think that makes any difference.)
That's actually a claim of superexponential growth, but how you said it sounds ok. I'm actually not sure that you can get superexponential growth in a meaningful sense. If you have n bits of data you can't do better than having all n bits be completely independent. So if one is measuring information content in a Shannon sense one can't do better than exponential.
Edit: If this is what you want to say I'd say something like "As the number of questions asked goes up the information level increases exponentially" or use "superexponentially" if you mean that.
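To make the bound explicit, here is a minimal sketch of the standard Shannon argument being invoked; the notation is mine, not from the thread. If each answer A_i comes from a finite set of k possible answers, subadditivity of entropy gives

```latex
% n answers carry at most n times the information of one answer:
H(A_1, \dots, A_n) \;\le\; \sum_{i=1}^{n} H(A_i) \;\le\; n \log_2 k
```

so n questions can distinguish at most k^n possibilities: at best exponential growth in what you can learn, never superexponential in the Shannon sense.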
My best guess for each individual achievement gets better with each other achievement I learn about, as they are not independent.
I was trying to get at the legitimacy of summarizing the aggregate of somewhat correlated achievements as a "level of civilization". Describing a civilization as having a "low/medium/high/etc. level of civilization" in relation to others depends on either its technological advances being correlated similarly or establishing some subset of them as especially important. I don't think the latter can be done much, which leaves inquiring about the former.
If the aliens are sending interstellar ships to colonize nearby systems, have no biology or medicine, have no nuclear energy or chemical propulsion (they built a tower on their low gravity planet and launched a solar sail based craft from it with the equivalent of a slingshot for their space program), and have quantum computers, they don't have a level of technology.
Well, what does no medicine mean? A lot of medicine would work fine without understanding genetics in detail; blood donors and antibiotics are both examples. Also, do normal computers not count as technology? Why not? Assume that we somehow interacted with an alien group that fit your description. Is there nothing we could learn from them? I think not. For one, they might have math that we don't have. They might have other technologies that we lack (for example, better superconductors). You may be buying into a narrative of technological levels that isn't necessarily justified. There are a lot of examples of technologies that arose fairly late compared to when they would have made sense. For example, one-time pads arose in the late 19th century, but would have made sense as a useful system on telegraphs 20 or 30 years before. Another example: high-temperature superconductors (that is, substances that are superconductors at liquid-nitrogen temperatures) were discovered in the mid-1980s, but the basic constructions could have been made twenty years before.
No blood donors (if they have blood), no antibiotics (if they have bacteria), etc.
Of course they do.
We could learn a lot from them, but it would be wrong to say "The aliens have a technological level less than ours", "The aliens have a technological level roughly equal to ours", "The aliens have a technological level greater than ours", or "The aliens have a technological level, for by technological levels we can most helpfully and meaningfully divide possible-civilizationspace".
My point is that there are a lot of examples of technologies that arose fairly late compared to when they necessarily made sense, so asking about what technologies have arisen isn't as informative as one might intuitively suspect. It's so uninformative that the idea of levels of technology is in danger of losing coherence as a concept absent confirmation from the alien society that we can analogize from our society to theirs, confirmation that requires multiple data points.
Ah, I see. Yes that makes sense. No substantial disagreement then.
I heard a Calculus teacher do this with even less justification a few days ago.
EDIT: was this downvoted for irrelevancy, or some other reason?
I didn't downvote it, but if you notice, JoshuaZ concluded my use of "exponential" was "ok", as what I actually meant was not "a lot" but rather what is technically known as "superexponential growth".
"Even less justification" has some harsh connotations.
Very much agreed.
I also agree with:
I agree with the general idea of:
though I think it is hard to correctly choose according to this criterion. I'm skeptical that digital computers would really pass this test. Considering the medium that we are all using to discuss this, we might be a bit biased in our views of their significance. (as a former chemist, I'm biased towards picking the periodic table - but I know I'm not making a neutral assessment here.)
Nuclear energy seems like a decent choice, from the dependency graph point of view. A civilization which is able to use either fission or fusion has to pass a couple of fairly stringent tests. To detect the relevant nuclear reactions in the first place, they need to detect MeV particles, which aren't things that everyday chemical or biological processes produce. To get either reaction to happen on a large scale, they must recognize and successfully separate isotopes, which is a significant technical accomplishment.
Is it possible the right isotopes might be lying around? Like here, but more concentrated and dispersed?
Yes, good point, if intelligent life evolved faster on their planet. The relevant timing is how long it took, after the supernova that generated the uranium, for the alien civilization to arise (since that sets the 238U/235U ratio).
I'm confused. I thought a reaction needed a quantity of 235U in an area, and that smaller areas needed more 235U to sustain a chain reaction. Wouldn't very small pieces of relatively 235U rich uranium be fairly stable? One could then put them together with no technological requirements at all.
You are quite correct, small pieces of 235U are stable. The difference is that the low concentration of 235U in natural uranium (because of its faster decay relative to 238U) makes it harder to get to critical mass, even with chemically pure (but not isotopically pure) uranium. IIRC, reactor grade is around 5% 235U, while natural uranium is 0.7%. IIRC, pure natural uranium metal, at least by itself, doesn't have enough 235U to sustain a chain reaction, even in a large mass (but I vaguely recall that the original reactor experiment, with just the right spacing of uranium metal lumps and graphite moderator, may have used natural uranium - I need to check this, as I'm short of time right now). I'm still not quite sure - Chicago Pile-1 is documented here, but the web page describes the fuel as "uranium pellets". I think they mean natural uranium, in which case I withdraw my statement that isotope separation is a prerequisite for nuclear power.
I think this is correct but finding a source which says that seems to be tough. However, Wikipedia does explicitly confirm that the successor to CP1 did initially use unenriched uranium.
Edit: This article (pdf) seems to confirm it. They couldn't even use pure uranium but had to use uranium oxide. No mention of any sort of enrichment is made.
What is natural is something that I, without background other than a history of nuclear weapons class for my history degree, was/am not confident wouldn't vary from solar system to solar system.
The natural reactor ended up with less 235U than normally decayed uranium, because some of the fuel had been spent. I assume that it began with either an unusual concentration of regular uranium (or some other configuration of elements that slowed neutrons or otherwise facilitated a reaction) or uranium that was unusually rich in 235U. If it was the latter, I don't know the limits on how rich in 235U uranium could be at the time of seeding into a planet, but no matter the richness, having small enough pieces would preserve it for future beings. Richness alone wouldn't cause a natural reaction, so to the extent richness can vary, it can make nuclear technology easier.
If the natural reactor had average uranium, and uranium on planets wouldn't be particularly more 235U rich than ours, then nuclear technology's ease would be dependent on life arising quickly relative to ours, but not fantastically so, as you say.
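As a back-of-the-envelope illustration of how much this timing matters, here is a small sketch using the published half-lives; the constants and function names are mine, not from the thread:

```python
import math

# Published half-lives and the present-day natural abundance of 235U.
HALF_LIFE_U235 = 7.04e8     # years
HALF_LIFE_U238 = 4.468e9    # years
FRACTION_U235_NOW = 0.0072  # ~0.72% of natural uranium today

def u235_fraction(years_ago):
    """235U fraction of natural uranium as it was `years_ago` years in the past.

    Winding the clock back multiplies each isotope's abundance by
    exp(lambda * t); 235U's larger decay constant means its share
    was higher in the past.
    """
    lam235 = math.log(2) / HALF_LIFE_U235
    lam238 = math.log(2) / HALF_LIFE_U238
    ratio_now = FRACTION_U235_NOW / (1 - FRACTION_U235_NOW)  # 235U : 238U
    ratio_then = ratio_now * math.exp((lam235 - lam238) * years_ago)
    return ratio_then / (1 + ratio_then)

for t in (0.0, 1.7e9, 4.5e9):
    print(f"{t / 1e9:.1f} Gya: {u235_fraction(t):.1%} 235U")
```

This yields roughly 3% 235U around 1.7 Gya, when the Oklo natural reactor operated (comparable to modern reactor-grade enrichment), and over 20% at the Earth's formation, so a civilization arising early in its planet's history would find fission considerably easier.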
-- Jim Dator ("Dator's Law")
Strongly disagree with this quote. Some useful ideas about the future might seem ridiculous, but a lot won't. Lots of new technologies and improvements are due to steady, fairly predictable improvement of existing technologies. It might be true that a lot of useful ideas, or the most useful ideas, have a high chance of appearing ridiculous, but even that means we're poorly calibrated about what is and is not reasonably doable. There's also a secondary issue: many if not most of the ideas which seem ridiculous turn out to be about as ridiculous as they seemed, if not more so (e.g. nuclear-powered aircraft, which might be doable but will remain ridiculous for the foreseeable future), and even plausible-seeming technologies often turn out not to work (such as the flying car). Paleo Future is a really neat website which catalogs predictions about the future, especially in the form of technologies that never quite made it or failed miserably or the like. The number of ideas which failed is striking.
If there is a useful idea about the future which triggers no ridiculous or improbable filters, doesn't that imply many people will have already accepted that idea, using it and removing the profit from knowing it? To make money, you need an edge; being able to find ignored gems in the 'possible ridiculous futures' sounds like a good strategy.
Not necessarily. For example, it could be that no one had thought of the idea in question but once someone thought of the idea the usefulness is immediately obvious.
Sure, but that implies a rather inefficient market - not even exploring major possibilities! Wouldn't work on Wall Street, I don't think.
An idea can still be useful even if everyone else knows about it too. Life isn't a zero-sum game.
Like this one?
Francis Bacon, The Advancement of Learning and New Atlantis
David Hull, Science and Selection: Essays on Biological Evolution and the Philosophy of Science
This is the idea behind dual-n-back: the only strategy your lazy brain can implement to do better at the game is to increase its working memory.
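For concreteness, a minimal sketch of the matching rule for a single n-back stream (dual-n-back runs two such streams, typically position and audio, in parallel; the code and names here are mine):

```python
import random

def n_back_targets(stream, n=2):
    """Indices where the current stimulus matches the one n steps back.

    The player must continuously hold the last n items in working memory;
    the claim is that no cheaper strategy exists, so improvement has to
    come from working-memory capacity itself.
    """
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

stimuli = [random.choice("ABC") for _ in range(20)]
print("".join(stimuli))
print("targets at indices:", n_back_targets(stimuli, n=2))
```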
Keith Stanovich, What Intelligence Tests Miss
I'm not convinced. One very simple gain from
is the ability to consider more alternatives. These may be alternative explanations, designs, or courses of action. If I consider three alternatives where before I could only consider two, if the third one happens to be better than the other two, it is a real gain. This applies directly to the case of
I think it would take more than a day for people to get the possible good effects of the change.
A better memory might enable people to realize that they have made the same mistake several times. More processing power might enable them to realize that they have better strategies in some parts of their lives than others, and explore bringing the better strategies into more areas.
Better memory and processing power would mean that, probabilistically, more businessmen would realize there are good business opportunities where they saw none before, creating more jobs and a more efficient economy, not the same economy more quickly.
ER doctors can now spend more processing power on each patient that comes in. Out of their existing repertoire they would choose better treatments for the problem at hand than they would have otherwise. A better memory means that they would be more likely to remember every step on their checklist when prepping for surgery.
It is not uncommon for people to make stupid decisions with mild to dire consequences because they are pressed for time. Everyone now thinks faster and has more time to think. Few people are pressed for time. Fewer accidents happen. Better decisions are made on average.
There are problems which are not human vs human but are human vs reality. With increased memory and processing power humanity gains an advantage over reality.
By no means is increasing memory and processing power a silver bullet, but it seems considerably more than everything just moving "much more quickly!"
Edit: spelling
The potential problem with your speculation is that the relative reduction of the mandatory-work / cognitive-power ratio may be a strong incentive to increase individual workloads (and maybe massive lay-offs). If we're reasonable and use our cognitive power wisely, then you're right. But if we go the Hansonian Global Competition route, the Uber Doctor won't spend more time on each patient, but just as much time on more patients. There will be too many doctors, and the worst third will do something else.
Possibly because people would be driving faster?
It's a nice list, but I think the core point strikes me as liable to be simply false. I forget who it was presenting this evidence - it might even have been James Miller, it was someone at the Winter Intelligence conference at FHI - but they looked at (1) the economic gains to countries with higher average IQ, (2) the average gains to individuals with higher IQ, and concluded that (3) people with high IQ create vast amounts of positive externality, much more than they capture as individuals, probably mostly in the form of countries with less stupid economic policies.
Maybe if we're literally talking about a pure speed and LTM pill that doesn't affect at all, say, capacity to keep things in short-term memory or the ability to maintain complex abstractions in working memory, i.e., a literal speed and disk space pill rather than an IQ pill.
Sounds plausible. If anybody finds the citation for this, please post it.
How about http://www.psychologicalscience.org/index.php/news/releases/are-the-wealthiest-countries-the-smartest-countries.html ?
Citing "Cognitive Capitalism: The impact of ability, mediated through science and economic freedom, on wealth". (PDF not immediately available in Google.)
EDIT: efm found the PDF: http://www.tu-chemnitz.de/hsw/psychologie/professuren/entwpsy/team/rindermann/publikationen/11PsychScience.pdf
Or http://www.nickbostrom.com/papers/converging.pdf :
EDITEDIT: high IQ predicts superior stock market investing even after the obvious controls. High IQ types are also more likely to trust the stock market enough to participate more in it
This is related, but not the research talked about. The Terman Project apparently found that the very highest IQ cohort had many more patents than the lower cohorts, but this did not show up as massively increased lifetime income.
http://infoproc.blogspot.com/2011/04/earnings-effects-of-personality.html
Unless we want to assume those 4x extra patents were extremely worthless, or that the less smart groups were generating positive externalities in some other mechanism, this would seem to imply that the smartest were not capturing anywhere near the value they were creating - and hence were generating significant positive externalities.
EDIT: Jones 2011 argues much the same thing - economic returns to IQ are so low because so much of it is being lost to positive externalities.
Absolutely - IQ is very important, especially in aggregate. And yet, I'd still bet that the next day people will just be moving faster.
I think it's worth making the distinction between having hardware which can support complex abstractions and actually having good decision-making software in there. Although it'd be foolish to ignore the former, because it tends to lead to the latter, it seems to be the latter that is more directly important.
That, and the fact that people can generally support better software than they pick up on their own is what makes our goal here doable.
Sounds implausible to me, so I'm very interested in a citation (or pointers to similar material). If true, I'm going to have to do a lot of re-thinking.
How did they establish that economic gains are influenced by average IQ, rather than both being influenced by some other factor?
Perhaps IQ correlates weakly with intelligence. If there are lots of people with high IQ, there are probably lots of intelligent people, but they're not necessarily the same people. Hence, the countries with high IQ do well, but not the people.
I think you really need to see this google tech talk by Steven Hsu.
If this is true, it would affect my decisions about whether and how to have children. So I'd really like to see the source if you can figure out what it was.
James Miller says:
That's helpful; thanks.
But naturally doing everything faster would be pretty freaking awesome in itself.
But I'm having way too much fun nitpicking so I'll just stop here. :)
Don't confuse time-to-solution with correctness. Speed and the amount of facts at hand will not give you a good result if your fundamental assumptions (aka your algorithm) are wrong.
You cannot make up in quantity what you lose on each transaction, as the dot-com folks proved repeatedly.
-- Robert H. Thouless
...Unless your decision makes things worse.
(Only in the sense of constructing some plan of action (or inaction) that currently seems no worse than others, not in the sense of deciding to believe things you have no grounds for believing. "Make up your mind" is a bad phrase because of this equivocation.)
-- George Bernard Shaw
(Thanks to gwern for this one.)
Quite literally, in fact.
Whoops. I found it on gwern's website. Guess I should've done the next (in retrospect) most obvious thing. Sorry about that!
ETA: Feel free to vote me back down if you wish.
-- Paul Graham
There is actually a pre-split thread about this essay on Overcoming Bias, and the notion of "Keep Your Identity Small" has come up repeatedly since then.
And of course "Cached Selves", and especially this comment on that post.
-- Bill Moyers, introduction to The Power of Myth
Sorry, I don't understand what this quote is trying to say. I've attempted to parse it and can only get some sort of thing about not caring what the truth is. If that's the meaning then it seems to be pretty anti-rationalist. What am I missing?
The marvelous irony of Joseph Campbell is that he was a world-renowned mythologist and expert on religion... but basically an atheist materialist. I interpret the quote as saying: "As our ways of knowing grow more accurate, we are more likely to produce undeniable truths that benefit all human beings."
More from the same introduction:
I found Campbell's The Hero with a Thousand Faces not very convincing. The similarities he sees between folk stories are often rather trivial, I think, and the rubbery nature of human language makes it easy to find such similarities -- not even mentioning selection bias.
Is The Power of Myth better?
Probably valid. What's an example of a non-trivial similarity in folk stories?
My knowledge of Campbell's work is limited to my having watched Moyers' interviews with him:
Part 1 Part 2 Part 3 Part 4 Part 5 Part 6
I wonder what he would think of the possibility of "editing" human nature via technology, and how those changes might negate the usefulness of mythology as a set of teaching memes.
Greg Egan's short story "The Planck Dive" has an interesting take on that subject. It's about a mythologist trying to force a description of a post-Singularity scientific expedition into one of the classic mythical narratives.
It's not "post-Singularity", it's normal human technology, just more advanced.
I guess you could say that. I said "post-Singularity" because all the characters are uploads, but there aren't any AGIs and human nature isn't unrecognizably different.
An example of a well-known non-trivial similarity would be the flood myths that many cultures have -- it seems that at least some of those myths are related somehow - but not in the inherited psycho-analytical way (!) that Campbell suspects; more likely simply through copying of the stories (e.g. Noah, Gilgamesh).
-Joseph A. Schumpeter, Capitalism, Socialism, and Democracy
In other words, politics is the mind killer.
Megan McArdle
Related SMBC.
reminds me of:
"I know that you believe you understand what you think I said, but I'm not sure you realize that what you heard is not what I meant." --Robert McCloskey
-- Robert Nozick (The Nature of Rationality)
Douglas Kenrick
Locke
I disagree. A lot of human conduct that I find virtuous, such as compassion or tolerance, has no immediate connection with the truth, and sometimes it is best served with white lies.
For example, all the LGBTQ propaganda spoken at doubting conservatives, about how people are either born gay or they aren't, and how modern culture totally doesn't make young people bisexual, no sir. We're quite innocent, human sexuality is set in stone, you see. Do you really wish to hurt your child for what they always were? What is this "queer agenda" you're speaking about?
Tee-hee :D
I can't tell if you're joking...
Dead serious actually. Well, what I mean is that a heteronormative approach where everyone must be either a 0 or a 6 on the Kinsey scale is hard to maintain in the modern world, and that when some extremely irrational older folks hate to see how young people can, for the first time in history, 1) discover their sexuality with some precision by using media and freely experimenting and 2) get a lot of happiness that way, it's fine to spin a clean and simple tale of the subject matter to those sorry individuals.
... I like the way you talk. This goes a long way toward explaining the same person saying "homosexuality is not a choice" and "I have been with quite a few straight guys", as well as the treatment bi people get as "fence-sitters" and the resentment they generate by having an easier time in the closet.
-Sam Harris
What about, I dunno, the protestant reformation, where people were persecuted for wanting, among other things, to read the bible themselves rather than have it interpreted for them by the priesthood?
What does it mean for a society to suffer?
Robert Wright, The Moral Animal
-- Dr. Dre, "The Watcher"
-- HL Mencken
From an evolutionary perspective, I would have to disagree. Believing that one's children are supremely cute; that one's spouse is one's soulmate; or even that an Almighty Being wants you to be fruitful and multiply -- these are all beliefs which are a bit shaky on rationalist grounds but which arguably increase the reproductive fitness in the individuals and groups who hold them.
-- Sergey Dovlatov
(translation is mine; can you propose a better translation from Russian?)
-- Jeffrey Lewis, If Life Exists, which is really about set point happiness
~ William Johnson Cory
-- Planet Sheen
-- Whitbread's Fyunch(click), by Larry Niven & Jerry Pournelle in "The Mote in God's Eye".
This doesn't really address the fact that Whitbread may have incomplete evidence or facts, biases, or aims of his own.
For me it runs more along the lines of Aumann's agreement theorem.
-- Oliver Cromwell
This has been mentioned in a few places on LW before (e.g. here) although I don't know if it has been in a quotes thread.
-- Russian proverb
I'm Russian, and I don't think I've heard this proverb before. What does it sound like in Russian? Just curious.
http://masterrussian.net/f13/old-russian-proverb-10675/
It's a rather lousy translation of the proverb; a closer variant than the one above appears in Vladimir Dahl's famous collection of Russian proverbs: Церковь близко, да ходить склизко, а кабак далеконько, да хожу потихоньку ("The church is close, but the walk is slippery; the tavern is far away, but I walk there carefully").
Can you provide a better translation?
Ahh, yes, thank you ! I didn't even recognize the proverb in English, but I doubt that I myself could translate it any better...
I'm not sure. I came across it in translated form without sourcing.
--The Lion King opening song
Do you consider this a promotion of fun theory? Or a justification for living forever?
Both.
It can also be an indication that everything is more than one person/mind can handle. By stepping into the sun, we enjoy the warmth and may be overwhelmed by the world as we see it. The song's lyrics seem cautionary: despite the warmth of being in the world, do not attempt to see everything, do not attempt to do everything. This is rational; there are things we may not enjoy as much as others, and to reduce our overall enjoyment by not placing parameters on our activities would be irrational, in my opinion.
I will submit (separately) three quotations from my favorite philosopher, C.S. Peirce:
-- C.S. Peirce
-- C.S. Peirce
-- C.S. Peirce
-- Richard P. Feynman
An oldie but goodie.
-Richard Feynman
Frank Schaeffer
Beware the fallacy of grey.
I think I prefer Nietzsche's version...
-- Henry David Thoreau
(Though if a thousand people tried striking at the root at once they'd undoubtedly end up striking each other. (I wish there was something I could read that non-syncretically worked out analogies between algorithmic information theory and game-theory/microeconomics.))
That sounds awfully negative, and I can't see any basis for it apart from negativity. I.e.: on what basis do you declare that people striking at the root are any more likely to strike each other than those striking at the branches?
While you might use the analogy to declare that the root of the problem is smaller, please note that there are trees (like giant sequoias) which have root systems that far outdistance the branch width.
If you picture the metaphorical great oak of malignancy with branches tens of yards in radius, and a trunk with roots (at the top of the trunk) only about 10 feet in diameter, you face one of those square of the distance problems in terms of axe swinging space.
This is what happens when you take the comments of romantic goofballs and slam them up against ontological rationalists who just might be borderline aspies or shadow autists.
I guess I should point out for the sake of clarity that the romantic goofball has not yet posted on this thread, and given the advanced interaction with entropy is unlikely to do so. Unless the Hindus, Buddhists and a few others are more accurate than the Catholics and Atheists.
--Haruki Murakami, Kafka on the Shore, 2006, p. 255
Not if Western society is anything to go by. Not asking (but knowing the answer) produces a lifetime's worth of successes, as far as I can tell.
— John Derbyshire
-- Lewis Carroll, "Alice's Adventures in Wonderland"
Hard to believe that it hasn't shown up here before...
--Nicholas Epley, "Blackwell Handbook of Judgment and Decision Making"
You don't have to put the little '>' signs in on every line, just the beginning of a paragraph.
Fixed. Thanks.
Would this count as doing something deliberately complicated to throw off anyone with an Occam prior?
-- HL Mencken
I disagree, especially with the second part. For a trivial example, take the traditional refutation of Kantianism: You are hiding Jews in your house during WWII. A Nazi shows up and asks if you are hiding any Jews.
I'm going to have to call you on this one: in your trivial example you are intending harm/chaos/diversion to/to/of the Nazi plan. Causing disruption to another is vicious, even if you are being virtuous in your choice to disrupt.
Causing disruption is certainly vicious in the sense of aggressive or violent, yes. I, and apparently Normal_Anomaly, read the quote from Mencken as meaning that lying is vicious in the sense of immoral, 'vice-ious', and hence unjustifiable.
This is quoted already on this page albeit with "no matter" substituted for "however".
James Clerk Maxwell
I am having difficulty parsing this. The easiest interpretation to make of the first part seems to be "There are no laws of matter except the ones we make up," and the second part is saying either "minds are subject to physics" or something I don't follow at all.
I interpret the first part as saying that there are no laws of matter other than ones our minds are forced to posit (forced over many generations of constantly improving our models). And the second part is something like "minds are subject [only] to physics", as you said. The second part explains how and why the first part works.
Together, I interpret them as suggesting a reductive physicalist interpretation of mind (in the 19th century!) according to which our law-making is not only about the universe but is itself the universe (or a small piece thereof) operating according to those same laws (or other, deeper laws we have yet to discover).
Richard P. Feynman
Yitz Herstein
Banach, in a 1957 letter to Ulam.
Henri L. Bergson -- The Creative Mind: An Introduction to Metaphysics, p. 218
ETA: retracted. I posted this on the basis of my interpretation of the first sentence, but the rest of the quote makes clear that my interpretation of the first sentence was incorrect, and I don't believe it belongs in a rationality quotes page anymore.
-- Mary Everest Boole
-- Scott Aaronson, Quantum Computing Since Democritus (http://www.scottaaronson.com/democritus/lec14.html)
Reversed Stupidity?
I don't think so. In this context, it seems that Scott is talking about making his mathematical intuitions more precise by trying to state explicitly what is wrong with the idea. He seems to generally be doing this in response to comments by other people in his field (comp sci) or connected to his field (physics and math), so he isn't really trying to reverse stupidity.
Seems more like harnessing motivated cognition, so long as opposite arguments aren't privileged as counterarguments.
Reversed stupidity isn't intelligence, but it's not a bad place to start.
It is a bad place to start. The intended sense of "reversed" in "reversed stupidity" is that you pick the opposite, as opposed to retracting the decisions that led to privileging the stupid choice. The opposite of what is stupid is as arbitrary as the stupid thing itself, if you have considerably more than two options.
People come up with ideas that are clearly and manifestly wrong when they're confused about the reality. In some cases, this is just personal ignorance, and if you ask the right people they will be able to give you a solid, complete explanation that isn't confused at all (evolution being a highly available example.)
On the other hand, they may be confused because nobody's map reflects that part of the territory clearly enough to set them straight, so their confusion points out a place where we have more to learn.
It points to where the ripe bananas are, huh? Thanks, that was clarifying.
Is that the case?
The majority dreams about a "just society"; the minority dreams about a better one through technological advances. Never mind that in the 20th century "socialism" brought us nothing and technology brought us everything.
Echoing a utopian meme is analogous to stamping an instance of an invention, not to inventing something anew. It is inventors of utopian dreams that I doubt to be more numerous than inventors of technology.
You may be right here. Utopias are usually also quite uninnovative. "All people will be brothers and sisters with enough to eat and Bible (or something else stupid) reading in a community house every night".
Variations are not that great.
And let's not forget how many millions of patents there are; I don't think there are that many millions of utopias, even if we let them differ as little as patents can differ.
Be fair. We tried socialism once (in several places, but with minor variations). We tried a lot of technology, including long before the 20th century.
I feel obliged to point out that social democracy is working quite well in Europe and elsewhere, and we owe it, among other things, free universal health care and paid vacations. Those count as "hidden potentiality of the real." Which brings us to the following point: what's, a priori, the difference between "hidden potentiality of the real" and "unreal"? Because if it's "stuff that's actually been made", then I could tell you, as an engineer, of the absolutely staggering amount of bullshit patents we get to prove are bullshit every day. You'd be amazed how many idiots are still trying to build Perpetual Motion Machines. But you've got one thing right: we do owe technology everything, the same way everyone owes their parents everything. Doesn't mean they get all the merit.
--Friedrich Nietzsche, The Birth of Tragedy (1872); cf. "Intellectual Hipsters and Meta-Contrarianism"
-- Aleister Crowley
I recently contemplated learning to play chess better (not to make an attempt at mastery, but to improve enough that I wasn't so embarrassed about how bad I was).
Most of my motivation for this was an odd signalling mechanism: People think of me as a smart person, and they think of smart people as people who are good at chess, and they are thus disappointed with me when it turns out I am not.
But in the process of learning, I realized something else: I dislike chess, as compared to, say, Magic: The Gathering, because chess is PURE strategy, whereas Magic or StarCraft have splashy images and/or luck that provide periodic dopamine rushes. Chess is only mentally rewarding for me at two moments: when I capture an enemy piece, and when I win. I'm not good enough to win against anyone who plays chess remotely seriously, so when I get frustrated, I just go capturing enemy pieces even though it's bad play, so I can at least feel good about knocking over an enemy bishop.
What I found most significant, though, was the realization that this fundamental failure to enjoy the process of thinking out chess strategies gave me some level of empathy for people who, in general, don't like to think. (This is most non-nerds, as far as I can tell.) Thinking about chess is physically stressful for me, whereas thinking about other kinds of abstract problems is fun and rewarding purely for its own sake.
My issue with chess is that the skills are non-transferable. As far as I can tell the main difference between good and bad players is memorisation of moves and strategies, which I don't find very interesting and can't be transferred to other more important areas of life. Whereas other games where tactics and reaction to situation is more important can have benefits in other areas.
This is an awesome quote that captures an important truth, the opposite of which is also an important truth :-) If I were choosing a vocation by the way its practitioners look and dress, I would never take up math or programming! And given how many people on LW are non-neurotypical, I probably wouldn't join LW either. The desire to look cool is a legitimate desire that can help you a lot in life, so by all means go join clubs whose members look cool so it rubs off on you, but also don't neglect clubs that can help you in other ways.
-- Raymond Terrific
Plato, Philebus
G.K. Chesterton
If I were a jelly fish,
Ya ha deedle deedle, bubba bubba deedle deedle dum.
All day long I'd biddy biddy bum.
If I were a jelly fish.
I wouldn't have to work hard.
Ya ha deedle deedle, bubba bubba deedle deedle dum.
I prefer if I were a deep one.
(If you aren't familiar with this song, I strongly recommend looking at all of Shoggoth on the Roof.)
A gentle introduction to the mythos.
Gary Marcus, Kluge
Relevant to deathism and many other things