Rationality Quotes: November 2010
A monthly thread for posting rationality-related quotes you've seen recently (or had stored in your quotesfile for ages).
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
- No more than 5 quotes per person per monthly thread, please.
--Futurama
-- The Jesus Seminar
(Developed in the context of biblical interpretation, of course. But despite my nontheism, I've found the principle behind it to be widely applicable.)
[I have had cause to apply this one recently. It particularly resonated to see it in the book just now.]
For the benighted Yanks among us:
(The secondary Wiktionary definition doesn't seem to convey the depth of anger that Urban Dictionary and various citations I looked at did.)
This seems to be missing, at minimum, some punctuation.
Edit: Moot.
Ellipses eaten by cut'n'paste. Fixed. Thank you :-)
The Mexican Drug War in One Lesson: Know Your Zetas!
--Henry Wadsworth Longfellow, Kavanagh ch. 1
Related: correspondence bias.
Scooping the Loop Snooper
an elementary proof of the undecidability of the halting problem
by Geoffrey Pullum
I came across this yesterday. The blog might also be worth a look, see for example 'A Brief History of Grammar'.
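The poem's argument is the classic diagonalization. A minimal sketch in Python (the `troll_behaviour` function and its name are illustrative, not from the poem): assume a halting oracle existed, build the program that does the opposite of whatever the oracle predicts about it, and note that either prediction comes out wrong.

```python
# Sketch of the diagonal argument in Pullum's poem. Suppose a halting
# oracle existed; the self-referential "troll" program consults the
# oracle about itself and then does the opposite of the prediction.

def troll_behaviour(oracle_prediction: str) -> str:
    """What the self-referential program actually does, given the
    oracle's prediction about it: always the opposite."""
    return "loops" if oracle_prediction == "halts" else "halts"

for prediction in ("halts", "loops"):
    actual = troll_behaviour(prediction)
    print(f"oracle says {prediction!r}; program actually {actual}")

# Neither possible prediction matches the program's behaviour, so no
# total, correct halting oracle can exist.
```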
From Brandon Sanderson's Mistborn series:
"Certainly, my situation is unique," Sazed said. "I would say that I arrived at it because of belief."
"Belief?" Vin asked.
"Yes," Sazed said. "Tell me, Mistress, what is it that you believe?"
Vin frowned. "What kind of question is that?"
"The most important kind, I think."
Vin thought, then shrugged. "I don't know what I believe."
"People often say that, but I find it is rarely true."
For the young who want to, by Marge Piercy
The poem is mostly about not being recognized as having a magical ability to do things until after you've succeeded. I'm just posting the link because it's more trouble than it's worth to make the line breaks show up properly.
-- French Ninja, Freefall
Puts me in mind of "Rationalists should win".
-- benelliott (edited to attribute)
Erwin Rommel, The Rommel Papers (1982) edited by Basil Henry Liddell Hart http://en.wikiquote.org/wiki/Erwin_Rommel#Sourced
This reminds me of the phrase "nobody learns faster than someone who is being shot at". Considering all the technological research done in war time, there seems to be a good point about motivation.
"All of us, grave or light, get our thoughts entangled in metaphors, and act fatally on the strength of them."
---George Eliot, "Middlemarch"
Somebody else read the comments section in Sapolsky's New York Times op-ed today.
His column had a rough explanation of human oddities explained as evolutionary adaptations.
link
(If you sort the comments by largest approval rating there are several interesting ones.)
Sunday in the Park with George, by Stephen Sondheim
Umberto Eco, Foucault's Pendulum
--mechanical_fish on Hacker News. Emphasis mine. source
-- attr. Albert Einstein
--The Melancholy of Haruhi Suzumiya, vol 1
Rationality?
-Pantene Pro-V hair care bottle
-- Gordon Freeman, kind of
Another "rationality quote" from the same video:
— Randall Munroe, xkcd – Mutual
That comic gets bonus points for nice use of Hofstadter-ian strange loop.
"But building your life's explanations around science isn't a profession. It is, at its core, an emotional contract, an agreement to only derive comfort from rationality."
-Robert Sapolsky, in an essay reply to "Does science make belief in God obsolete?"
Rationality quotes: very many from @BadDalaiLama on Twitter.
(Edit: there's also this handy archive.)
This one felt quite LW-relevant:
It's good to be reminded now and then that dollars are not, in fact, utilons.
The natural logarithm of dollars is a pretty good approximation of utilons, assuming you like candy bars.
Here's some evidence from Stevenson & Wolfers that happiness/life satisfaction is proportional to the log of income: blog post, pdf article.
How does ln(dollars) approximate utilons? It's obvious that utilons are generally not linear in dollars, and they're certainly not equivalent, but how does the log of dollars, specifically, approximate utility?
If there is some mathematical reason why, I would love to know. I was going off the observation that the natural logarithm approximates the kind of diminishing returns that economists generally agree applies to the utility of wealth. This means that, very roughly, the logarithm of dollars is the 'revealed preference' utility.
It was actually more of a joke about that assumption, because it suggests that a 50 dollar meal is preferred four times as much to a 3 dollar candy bar - a bit odd, but perfectly natural if you like candy bars.
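The arithmetic behind the joke is easy to check under the assumed utility = ln(dollars) model (a sketch; the dollar figures are the ones from the comment above):

```python
import math

# Under the rough model utility = ln(dollars), compare a $50 meal
# with a $3 candy bar.
meal_utility = math.log(50)    # about 3.91
candy_utility = math.log(3)    # about 1.10
ratio = meal_utility / candy_utility
print(f"log-utility ratio: {ratio:.2f}")  # about 3.56 -- the "four times"

# Linear utility would instead give a ratio of 50 / 3, about 16.7.
```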
Oooh, okay. Diminishing returns, certainly. Just not obvious that it would be "log" or near that.
:)
Well, log does that. But so does square root also. Lots of functions have diminishing marginal returns.
I can think of two good reasons to model diminishing returns with the natural log.
Logs produce nice units in the regression coefficients. A log-lin function (that is, a logged dependent variable on a linear independent variable) says that a one-unit increase in X is associated with roughly a 100·<coefficient> percent change in Y. Similar statements hold for lin-log and log-log, the latter of which produces elasticities.
y=ln(x) and y=sqrt(x) will both fit data in a similar manner, so it makes sense to go with the one that makes for easy interpretation.
Additionally, the natural log frequently shows up in financial economics, most prominently in continuous interest but also notably in returns, which seem to follow the log-normal distribution.
Of course, there's the problem with pathological behavior near 0.
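To see how similar the two shapes are over a plausible income range, and how they part company near zero, here's a quick numerical sketch (the anchor incomes and scaling are arbitrary choices for illustration):

```python
import math

# Rescale sqrt so that a*sqrt(x) + b agrees with ln(x) at two anchor
# incomes, then compare the two curves in between.
lo, hi = 10_000, 100_000
a = (math.log(hi) - math.log(lo)) / (math.sqrt(hi) - math.sqrt(lo))
b = math.log(lo) - a * math.sqrt(lo)

for income in (20_000, 50_000, 80_000):
    print(income, round(math.log(income), 3),
          round(a * math.sqrt(income) + b, 3))

# Near zero the two models diverge sharply: ln(x) heads to -infinity
# as x -> 0, while the rescaled sqrt stays finite.
print(round(math.log(0.01), 3), round(a * math.sqrt(0.01) + b, 3))
```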
Or the utility of money could quite reasonably be bounded.
Hmm. If we grab some study data on wealth's mathematical relationship with utility, we might be able to decide which function best approximates it. As it is, yeah, there is no reason to prefer log to square root or to any other function.
With some constraints, of course.
Nice link. My favorite: In a democracy, the poor have the same power as the rich, but the rich can buy advertising, which the poor are suckers for.
Talmud, Avoda Zara 54b
Deuteronomy 18:20-22
~ Robert M. Pirsig
Now the actual quote's out of the way, here's my version: when one person suffers from a delusion it is called insanity; when many people suffer from a delusion it is called society.
Or friendship, or marriage, or all kinds of other things.
half of being smart is knowing what you're dumb at
solomon short (david gerrold's fictional character)
Overall, however, we've done better by avoiding dragons than by slaying them. -Warren E. Buffett
If you can't tell whose side someone is on, they are not on yours. -Warren E. Buffett
If after 1/2 hr of poker you can't tell who's the patsy, it's you. - Charles T. Munger
Wouldn't this be a problem for tit for tat players going up against other tit for tat players (but not knowing the strategy of their opponent)?
Only if it's common knowledge that both players are human.
ETA: Since I got downvoted, maybe I wasn't being clear. I think that the Warren Buffett quote applies to human psychology more than to game theory in general. If outright deception were easy, it would probably become a good strategy to keep your allies in some doubt about your intentions, as a bargaining chip. But we humans don't seem to be good at pulling that off, and so ambivalence is a strong signal of opposition.
Now that you have clarified, I wish I could downvote a second time.
Tit-for-tat is a good strategy in the iterated prisoner's dilemma regardless of whether the players are human and regardless of whether the other player is "on your side". In fact, it is pretty much taken for granted that there are no sides in the PD. Dre was downvoted by me for a complete misunderstanding of how Tit-for-tat relates to "sides". You were downvoted for continuing the confusion.
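The claim is easy to verify with a small simulation (a sketch; the strategy and payoff names are mine, using the standard payoff values T=5, R=3, P=1, S=0). Two tit-for-tat players cooperate throughout without either knowing the other's strategy:

```python
# Iterated prisoner's dilemma: payoffs for (my move, their move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): full mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): exploited once, then punished
```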
Oh, you're right- my response would have made sense talking about players in a one-shot PD with communication beforehand, but it's a non sequitur to Dre's mistaken comment. Don't know how I missed that.
Upvoted, but even with communication beforehand, the rational move in a one-shot PD is to defect. Unless there is some way to make binding commitments, or unless there is some kind of weird acausal influence connecting the players. Regardless of whether the other player is human and rational, or silicon and dumb as a rock.
Taboo "rational".
Acausal control is not something additional, it's structure that already exists in a system if you know where to look for it. And typically, it's everywhere, to some extent.
Highest-scoring move, adjective applied to the course that maximises fulfillment of desires.
The best move in a one-shot PD is to defect against a cooperator.
With no communication or precommitment, and with the knowledge that it is a one-shot PD, the overwhelming outcome is both defect. Adding communication to the mix creates a non-zero chance you can convince your opponent to cooperate - which increases the utility of defecting.
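Defection's dominance in the one-shot game is just a check over the standard payoff matrix (row player's payoffs; a minimal sketch):

```python
# One-shot PD payoffs for the row player (standard T=5, R=3, P=1, S=0).
row_payoff = {("C", "C"): 3, ("C", "D"): 0,
              ("D", "C"): 5, ("D", "D"): 1}

# Whatever the column player does, defecting pays the row player more,
# which is why talk alone cannot make cooperation rational here.
for their_move in ("C", "D"):
    assert row_payoff[("D", their_move)] > row_payoff[("C", their_move)]
print("D strictly dominates C")
```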
There is a question of what will actually happen, but also more relevant questions of what will happen if you do X, for various values of X. If you convince the opponent to cooperate, it's one thing, not related to the case of convincing your opponent to cooperate if you cooperate.
Determine what kinds of control influence your opponent, appear to also be influenced by the same, and then defect when they think you are forced into cooperating because they are forced into cooperating?
Is that a legitimate strategy, or am I misunderstanding what you mean by convincing your opponent to cooperate if you cooperate?
Perplexed, have you come across the decision theory posts here yet? You'll find them pretty interesting, I think.
LW Wiki for the Prisoner's Dilemma
LW Wiki for timeless decision theory (start with the posts- Eliezer's PDF is very long and spends more time justifying than explaining).
Essentially, this may be beyond the level of humans to implement, but there are decision theories for an AI which do strictly better than the usual causal decision theory, without being exploitable. Two of these would cooperate with each other on the PD, given a chance to communicate beforehand.
Yes, I have read them, and commented on them. Negatively, for the most part. If any of these ideas are ever published in the peer reviewed literature, I will be both surprised and eager to read more.
I think that you may have been misled by marketing hype. Even the proponents of those theories admit that they do not do strictly better (or at least as good) on all problems. They do better on some problems, and worse on others. Furthermore, sharing source code only provides a guarantee that the observed source is current if that source code cannot be changed. In other words, an AI that uses this technique to achieve commitment has also forsaken (at least temporarily) the option of learning from experience.
I am intrigued by the analogy between these acausal decision theories and the analysis of Hamilton's rule in evolutionary biology. Nevertheless, I am completely mystified as to the motivation that the SIAI has for pursuing these topics. If the objective is to get two AIs to cooperate with each other there are a plethora of ways to do that already well known in the game theory canon. An exchange of hostages, for example, is one obvious way to achieve mutual enforceable commitment. Why is there this fascination with the bizarre here? Why so little reference to the existing literature?
So far as I understand the situation, the SIAI is working on decision theory because they want to be able to create an AI that can be guaranteed not to modify its own decision function.
There are circumstances where CDT agents will self-modify to use a different decision theory (e.g. Parfit's Hitchhiker). If this happens (they believe), it will present a risk of goal-distortion, which is unFriendly.
Put another way: the objective isn't to get two AIs to cooperate, the objective is to make it so that an AI won't need to alter its decision function in order to cooperate with another AI. (Or any other theoretical bargaining partner.)
Does that make any sense? As a disclaimer, I definitely do not understand the issues here as well as the SIAI folks working on them.
Not to me. But a reference might repair that deficiency on my part.
I don't think that's quite right- a sufficiently smart Friendly CDT agent could self-modify into a TDT (or higher decision theory) agent without compromising Friendliness (albeit with the ugly hack of remaining CDT with respect to consequences that happened causally before the change).
As far as I understand SIAI, the idea is that decision theory is the basis of their proposed AI architecture, and they think it's more promising than other AGI approaches and better suited to Friendliness content.
Do you have an example of a problem on which CDT or EDT does better than TDT?
I have yet to see a description of TDT which allows me to calculate what TDT does on an arbitrary problem. But I do know that I have seen long lists from Eliezer of problems that TDT does not solve that he thinks it ought to be improved so as to solve.
Not necessarily. Various decision theories can come into play here. It depends precisely on what you mean by the prisoner's paradox. If you are playing a true one-shot where you have no information about the entity in question, then that might be true. But if you are playing a true one-shot where each player has access to the other's source code before making the decision, then defecting may not be the best solution. Some of the decision theory posts have discussed this. (Note that knowing each other's source code is not nearly as strong an assumption as it might seem, since one common idea in game theory is to look at what happens when the other players know your strategy. I'm oversimplifying some technical details here; I don't fully understand all the issues; I'm not a game theorist; add any other relevant disclaimers.)
No one on this thread has mentioned a "prisoner's paradox". We have been discussing the Prisoner's Dilemma, a well known and standard problem in game theory which involves two players who must decide without prior knowledge of the other player's decision.
A different problem in which neither player is actually making a decision, but instead is controlled by a deterministic algorithm, and in which both players, by looking at source, are able to know the other's decision in advance, is certainly an interesting puzzle to consider, but it has next to nothing in common with the Prisoner's Dilemma besides a payoff matrix.
Prisoner's paradox is another term for the prisoner's dilemma. See for example this Wikipedia redirect. You may want to reread what I wrote in that light. (Although there's some weird bit of illusion of transparency going on here in that part of me has a lot of trouble understanding how someone wouldn't be able to tell from context that they were the same thing.)
No. The problem of what to do is actually closely related when one has systems which are able to understand each others source code. It is in fact related to the problem of iterating the problem.
In general, given no information, the problem still has relevant decision theoretic considerations.
I'm curious why you assert this. Game theorists have a half dozen or so standard simple one-shot two person games which they use to illustrate principles. PD is one, matching pennies is another, Battle of the Sexes, Chicken, ... the list is not that long.
They also have a handful of standard ways of taking a simple one-shot game and turning it into something else - iteration is one possibility, but you can also add signaling, bargaining with commitment, bargaining without commitment but with a correlated shared signal, evolution of strategies to an ESS, etc. I suppose that sharing source code can be considered yet another of these basic game transformations.
Now we have the assertion that for one (PD is the only one?) of these games, one (iteration is the only one?) of these transformations is closely related to this new code-sharing transformation. Why is this assertion made? Is there some kind of mathematical structure to this claimed relationship? Some kind of proof? Surely there is more evidence for this claimed relationship than just pointing out that both transformations yield the same prescription - "cooperate" - when there are only two possible prescriptions to choose among.
Is the code-sharing version of Chicken also closely related to the iterated version? How about Battle of the Sexes?
Sounds like one for the quotes page for "Default to Good" at TV Tropes. (Link omitted due to time hazard.)
A horse that can count to ten is a remarkable horse, not a remarkable mathematician.
--Samuel Johnson
Richard Feynman
Kołakowski's Law, or The Law of the Infinite Cornucopia:
Leszek Kołakowski
If you set out to beat a dog, you're sure to find a stick. -- Old Yiddish Proverb
I like it.
+1
del
That doesn't sound right.
To me, it seems like:
(Philosophy -> Science) and (Art -> Engineering).
"However insistently the blind may deny the existence of the sun, they cannot annihilate it. " - D. T. Suzuki
Humans are blind to all kinds of things. Radiation for one. But it still can be detected and controlled. A civilization of blind people would eventually build detecting equipment and learn to control the real world.
As the saying goes: "In a world of blind people, c would be the speed of heat." But I don't think anyone believes that anyone actually denies the sun's existence.
Want to bet?
(At stakes of a few thousand galaxies worth of energy and negentropy. It's not going to be cheap to win this bet! I'm not too comfortable with the whole making myself blind thing either but I guess I can rectify that once I finish deploying the antimatter disruptor beam.)
You are mistaking the map for the territory. It doesn't matter if it's a quark pair or a hyper-colossal cosmic structure gravitationally influencing everything in the universe; if a condition is present, then it has an effect.
That settles it. I'm going to recruit Chewbacca as my Mook Lieutenant. There can be no other choice.
The replies are better quotes than the original.
"Since the beginning of time, man has yearned to destroy the Sun."
I just noticed that I implicitly assumed that it would have to be me that blinded himself. What sort of nefarious sun-destroying intergalactic mastermind would I be if I did foist that role upon a henchman?
You're going to have trouble destroying the Sun if you don't believe it exists.
He only has to deny that it exists.
Alternatively, he could lock himself onto a sun-destroying path, and then forcibly do an unBayesian update away from the existence of the sun.
Alternately, he could interpret the sentence literally, note that 'not at all' is a level of insistence, deny the existence of the sun not at all, and then destroy it.
It is a profound and necessary truth that the deep things in science are not found because they are useful, they are found because it was possible to find them. -J. Robert Oppenheimer.
There are two quite different interpretations of this quote: it either says something about scientists, or something about scientific truths, and I'm not sure which is the intention.
The two messages I see are:
Scientists just enjoy seeking truths, you don't need to give them the incentive of practical applications in order for them to do science, so any truths that can be discovered will be, regardless of their usefulness.
There are an awful lot of true things. The ones that we know might not be the most useful, but they are the ones that happen to lie in the (extremely small?) subset of true things that humans are capable of understanding.
To an extent, I guess both of these are true... which one was Oppenheimer aiming at?
Quibble: Two things you might have missed:
<quote>There are an awful lot of true things. </quote>
I think that many of the things that are commonly regarded as being "true" are socially constructed fictions, biases and fallacies. Moreover, science can never attain absolute truth; it can only strive for it.
Hi stochastic, and welcome to Less Wrong!
This is actually a really important topic. I agree that there are a lot of cultural and normative claims that don't deserve to be called "true" or "false", despite their common usage as such. I'd be cautious of using the phrase "absolute truth", since it conjures up false expectations compared to the actual process of increasing confidence in models of the world.
Really relevant: The Simple Truth
P.S. Introduce yourself on the welcome page when you have a moment!
The quote syntax is
Which becomes
Profound, necessary and optimistic. :)
It is still an unending source of surprise for me how a few scribbles on a blackboard or on a piece of paper can change the course of human affairs. -Stanislaw Ulam
Can they really? I have my doubts. Most of those scribbles on a blackboard were either an inevitable result of outside forces or would have been made on a different blackboard had they not been made there. (Although to be fair, the butterfly and mere chance will play their part at least some of the time.)
He could have also been thinking about the Declaration of Independence, the Declaration of the Rights of Man, and various other documents. (I'd list the Magna Carta, but it didn't really have the effect it's credited with. It was a few lines in a larger document that was more concerned with the hunting privileges of nobles than with the rights of man, and that was nullified before the year was out.)
I think he had in mind things like the development of physics in the 20th century that led to the creation of the A- and H-bombs. I got the quote from Richard Rhodes' history of the making of the atomic bomb.
It doesn't matter exactly who wrote what on which blackboard; in the end, a bunch of people making calculations and experiments changed the course of human affairs pretty significantly.
Scribbles on maps, particularly in 1815 and 1919, had some largish effects.
The partitions of Korea and Vietnam are some more recent examples; nor have we seen the last of the largish effects of the former.
In 1923, England and France divided between them the previously Turkish territories of what are modern Syria, Lebanon and Israel/Palestine. They drew a pencil line on a map to mark the treaty border.
It turned out that the thickness of the pencil line itself was several hundred meters on the ground. In 1964, Israel fought a battle with Syria over that land.
People were killed because someone neglected to sharpen their pencil. That's "scribbles on a piece of paper" for you.
Ref: a book found by Google. I originally learned about this from an Israeli plaque at the Dan River preserve near the border.
I suppose it would be in bad taste to find that rather amusing. Or at least to admit it.
"The 350-mile detour in the Trans-Siberian Railway was caused by the Tsar, who drew the proposed route using a ruler with a notch in it." -- Not 1982
What's the source for this? Googling "Not 1982" is not helpful... I did find the following amusing quote though:
"The Trans-Siberian Railway". In The Living Age, seventh series volume five, 1899
I wonder if Nicholas was acting in the same spirit as King Canute and likewise has been subsequently misinterpreted. (I've seen the Canute story mentioned as an example of being power-mad.) Nicholas's intention could have been something like 'Gentlemen, you were chosen for your competence in engineering and expertise in dealing with such details; I have made my general wish known to you; kindly implement it and do not bother me with what is your job.'
http://en.wikipedia.org/wiki/Not_the_Nine_O%27Clock_News#Books_and_miscellaneous
My google-fu is strong-ish. Still, not a particularly reliable source.
In circumstances like that I find I have to laugh, if only to keep from weeping.
Charles Sanders Peirce
Ouch.
But that part about "the essence of his life gone with it" is an exaggeration - or at least, only a temporary loss. There are plenty of vague shadows of ideas out there to be loved and cherished.
-- T-Rex, Dinosaur Comics #539
I know this is well known, but to supplement the T-Rex:
-Alfréd Rényi/Paul Erdős
Yes, and don't forget the dual result that a comathematician is a device for turning cotheorems into ffee.
And a cat is a device for turning kibble into cuddle.
The course of human progress staggers like a drunk; its steps are quick and heavy but its mind is slow and blunt
-Jesse Michaels of Operation Ivy
Posted because it's a useful and evocative metaphor: the drunk feels himself leaning or falling in one direction, and puts his foot down in that direction to steady himself. If he doesn't step far enough, he is still leaning in the same direction, and he steps again. In this way we can make fantastic progress in directions we don't like while getting further away from the ways we did want to go.
I just came across this and thought it was a pretty funny dialogue: "Reality is that which does not go away upon reprogramming." (Check the first 4 comments here: Chatbot Debates Climate Change Deniers on Twitter so You Don’t Have to)
This is of course a paraphrase borrowed from Philip K. Dick's famous statement:
I shared this on another website and got this comment:
This has been done for a while. A few years ago there was some noise about a Russian chatbot which impersonated a good-looking girl and tried to scam people into giving personal information and/or money.
Every time it succeeded, it passed the Turing test.
This is a bit long for a rationality quote and isn't really a quote but short enough and worth the read: The most poetic and convincing argument for striving for posthumanity (via aleph.se).
That's kind of depressing.
-- Talib Kweli (substitute "nature" for "God")
I don't think it would be a good idea to take a Carl Sagan quote and add a 'substitute "God" for "nature"' postscript. I don't think this is a good idea either.
Talib Kweli is nonreligious, so I'm not changing the meaning of the quotation. "God" is often used poetically. Example:
Albert Einstein
Even if Kweli were religious the point would not be to put words in his mouth, but to reapply a beautiful quotation to another context where it is meaningful.
Reapplying it to another context changes the meaning. Because of Einstein's explicitly stated opinions on the meaning of God (and the Lord), we can understand his meaning to be synonymous with that of nature and its order.
"I believe in Spinoza's God who reveals himself in the orderly harmony of what exists, not in a God who concerns himself with the fates and actions of human beings."
"I do not believe in a personal God and I have never denied this but have expressed it clearly. If something is in me which can be called religious then it is the unbounded admiration for the structure of the world so far as our science can reveal it. " - 1936
Talib Kweli, on the other hand, hasn't given us a clear opinion of his thoughts on the term God. There is no evidence for us to assume that the meaning he gives to the term God would fit in the context of this quote.
There are more fools than knaves in the world, else the knaves would not have enough to live upon.
-Samuel Butler
— T.H. White (The Once and Future King)
There are exceptions... When a child first learns that he or she is mortal, I doubt that that is a happy day for him or her. Truths are valuable, but some are rather bitter.
Yes, and I think this is the one big crucial exception... That is the one bit of knowledge that is truly evil. The one datum that is unbearable torture on the mind.
In that sense, one could define an adult mind as a normal (child) mind poisoned by the knowledge-of-death toxin. The older the mind, the more extensive the damage.
Most of us might see it more as a catalyst than a poison, but I think that's insanity justifying itself. We're all walking around in a state of deep existential panic, and that makes us weaker than children.
Well, it's not the knowledge of death that's evil, it's the actual phenomenon -- there's not much point blaming the messenger for the bad news. Especially not now we're at the stage where we're beginning to have a chance to do something about it.
Ernest Becker agrees with you, but I always read the one star reviews first.
For myself, I've lost touch with Becker's ontology. I'm reduced to making the lame suggestion of playing Go in tournaments in order to practice managing a limited stock of time, such as 70 years.
Isaac Newton's argument for intelligent design:
-- Letter to Richard Bentley
Elements of this argument make an error related to numberplates. I'm surprised this was received so (+4) positively.
I thought it was obviously ironic, since planets do actually move in ellipses and general conic sections; Newton makes a falsifiable claim in favor of ID and it is clearly false.
Wait, something seems wrong here. Newton knew the planets moved in ellipses. Probable conclusion: He was just referring to the low eccentricity of these ellipses?
I think that the issue was the number of planets. If you had just one planet orbiting the sun, that orbit would be a nice stable one. But if you have multiple bodies orbiting the sun, their paths will interfere chaotically. I think that Newton expected that, in general, you would get wildly erratic orbits, with some planets being thrown clear of the system altogether. As I understand it, he expected such catastrophes to be inevitable, unless you started with a very carefully-selected initial state. God was then necessary to explain how the solar system started out in such an improbable state. But in fact Newton just lacked the mathematical sophistication to see that, according to his own theory, typical initial arrangements could result in systems that are stable for billions of years.
I voted up both Newton quotes because they show how a very smart man can make a very plausible argument which is nevertheless very wrong.
And the reason Newton failed to guess the rather simple explanation is that he observed a solar system that was stable and unchanging and assumed that it must always have been stable and unchanging since the creation. His "biases" just didn't allow him to imagine an evolutionary model of planet formation by accretion from a more-or-less random initial state.
Nowadays of course, we tend to invent evolutionary or historical explanations for everything. We don't even limit ourselves to explaining the origins. We go on to predict how things will likely come to a contingent historical end ... or should I refer to it as our next great adventure?
Second to this. The planets that remain in their orbits are the ones that survived the shakeout without being ejected or destroyed. Their presence represents them making it through the pachinko machine of amalgamated physical parameters, not intentional design.
Newton's inferences were like assuming a gold tooth has mystical properties because I've put you through a woodchipper and it's the only thing that came out the other end. There is so much to understand about the internals of the machine before you make any solid judgments about the inputs and outputs.
Numberplates?
"The chance that the numberplate of my first car was EIT411 is one in a whole lot. Wow! It happened! There must be a God!" (crudely speaking.)
This seems to be relevant to, for example, yabbering on about the exact speeds of Saturn et al. The Saturns that were going the wrong speed all fell into the sun (or cleared off into space).
Oh... I in no way endorse the above argument! Pierre-Simon Laplace, a century or so after Newton, gave a naturalistic model of how the Solar System could have developed. "Rationality quotes" is not only about sharing words of wisdom, but also words of folly.
:) I certainly wasn't intending to accuse you.
Here's another Newton ID quote. This one complements PeterS's because the true naturalistic explanation requires physics that was not implicit in Newton's mechanics.
—Isaac Newton, Four Letters From Sir Isaac Newton To Doctor Bentley Containing Some Arguments In Proof Of A Deity.
Dr. Manhattan (Watchmen)
Xenophanes
More likely they would write a treatise on how God wants them to keep pulling carts around.
There might be a strong chance that horses and other animals would draw their gods as having human form. Humans tend to portray their gods as being either equal to or higher than humanity. Animist gods are portrayed as having characteristics that surpass humans: speed, wisdom, patience, etc., based on the characteristics of that animal. Alternately, sun gods, storm gods, etc.: higher powers.
Some wild horses would have horse gods or weather gods or wolf gods. Some might have human gods, depending on their interaction with humanity.
I'd imagine that domesticated horses would have human gods, some benevolent and some malignant, or both. And some domesticated horses would go "through the looking glass" and develop a horse-god of redemption, with prophecies of freeing them from the toil and slavery of domestication, based on some original downfall of horse-dom that led to them being subservient to humans.
Or something like that.
This should, at the very least, be turned into a short story.
I'm not sure this makes sense. Empirically, many human cultures have deities that are shaped like animals.
Voted up. My quibble is that gods are often anthropomorphic in mind, if not in body.
"If triangles had a god, he would have three sides."
-- Charles de Secondat, Baron de Montesquieu, Lettres Persanes, no. 59
Surely he would be circular?
Well, the Egyptians had animal-headed gods.
With human bodies.
Y.S. Abu-Mostafa, in explaining the VC inequality of PAC learning.
Dean Schlicter
John Archibald Wheeler
Buckminster Fuller
Francis Bacon
I'm of the mind that, politically, in the US at least, we don't seem to learn from this. The truth is, indeed, revealed... but the confusion remains and the errors continue.
There are many who disagree with me about that...
but that's because they're confused AND in error.
(ok ok I kid on that last part...)
Duplicate, twice.
Use the search box to check any quotation you're thinking of posting.
Oops, I'm sorry. I must have forgotten to check that one first.
Rule I
Rule II
Rule III
Rule IV
Isaac Newton, Philosophiae naturalis: Rules of Reasoning in Philosophy
Related to: Politics, Protection
-- Ken Binmore, in Natural Justice, p56
Science works by scientists not doing all their thinking for themselves. That's also how it fails. Getting the balance right may be hard, but no-one has really tried very hard, so it may not be. Trying to do that is largely what I see SIAI as being about.
I'm not sure that's true. The issue isn't what a person "thinks"... it's what a person ultimately concludes. A scientist must think for themselves in order to hypothesize, no? I think science goes wrong when scientists conclude for themselves, in the face of the actual facts on the matter.
I think what is being referenced above is how to separate information from who said it, and how.
Hmmm. A mathematician learning a new field thinks for himself, up to a point. Oh, he gets his ideas, theorems, and even proofs from the book, but he is supposed to verify the thinking for himself.
The same kind of thing applies to scientists. They get ideas, formulas, and even empirical data from other scientists, but they are supposed to verify the inferences and even some of the derivations themselves. At least in their own field. A neuroscientist using fMRI doesn't need to know the fine points of the portions of QED dealing with particle spins in a varying magnetic field. Nor the computer science involved in the image processing. But he does appreciate that these tools, whether he understands them in detail himself or not, are not based on tradition or authority, but instead draw their legitimacy from the work of his colleagues in those fields who definitely do think for themselves.
If the balance you seek to strike is the balance that lets you distinguish path-breaking innovation from crackpottery, I would suggest this: It is ok to try doing something that the experts think is impossible if you really understand why they are so pessimistic and you think you might understand why they are wrong.
I like the sentiment, but - instructed in Humean skepticism? Isn't that going overboard in the opposite direction?
Binmore is on something of a "Hume is God, Kant is Satan" kick in this book. Another quote I like deals with Binmore's efforts to comprehend the "categorical imperative":
I share much of Binmore's enthusiasm for Hume. I don't think that rationalists have much reason to dislike Hume's skepticism. Hume was a practical man, and his famous argument against induction is far from a counsel of epistemological despair. As for instructing the young to be skeptical of gods - well, it may violate the US Constitution, but then so does gun control. ;)
Nonetheless, I suspect that many people here would not care much for this particular quote in its full context - starting a couple paragraphs before my quote and continuing a paragraph further.
-John Gray, Straw Dogs
~ Epicurus, Principal Doctrines
-Anthony de Jasay, Inspecting the Foundations of Liberalism
Conventions against torts like murder and theft are older than civilization. I think it is a safe bet they will still be around in a thousand years.
-Theodore Dreiser, The Titan
William Stanley Jevons, Theory of Political Economy, 1871: p.275-6
Not a quote about rationalism, but probably relevant to Less Wrong:
--Robert Graves
I read this one last week:
--"A Dog Was Crying Tonight in Wicklow Also", Seamus Heaney, pg 66, The Spirit Level (1996)
H.L. Mencken, Minority Report.
I wrote about this.
The idea is: I can criticize a plan that claims wonderful successes, even if I have no corresponding plan of my own. Maybe we don't know how to get wonderful successes at all. Maybe they're impossible. Maybe your reasoning is suspect.
I am not sure I get it.
A more direct paraphrasing would be: "Just because I don't have all the answers doesn't mean that your answers are correct."
A concrete example: just because scientists don't currently know everything about how evolution happened, that doesn't mean that Young Earth Creationists are right. Typical YEC debating strategy is to look for gaps (real or imagined) in our current theories, and act as if that proves that God created the world in six days, and from the dust created every creeping thing that creepeth upon the earth, etc.
No, it speaks of remedy. It's not about beliefs about the world, but about courses of action, and there he's dead wrong - a course of action can only be bad by comparison to a better alternative.
But sometimes that better alternative is "let's wait and see". And that's what many people aren't willing to do.
"We must do something. This is something. Therefore, we must do this." is a fallacy. (The Politician's Syllogism.) Mencken's statement pretty clearly includes the course of action of not taking action; he's stating that any action is not necessarily better than no action, and that taking on any belief is not necessarily better than holding no belief.
We can call a course of action bad if doing nothing is a better alternative.
I don't think either of you are getting it right. I'm not familiar with the context of this particular quote, but knowing it's from Mencken, he's clearly referring to various idealistic busybodies and their grand (and typically disastrously unsound) plans to solve the world's problems. The quote is directed against idealists who assume moral high ground and scoff at those who question their designs.
Ah, so it's about whether a plan meets some absolute standard, rather than which plan is best, and the moral is that just because I don't know of a plan that meets standard X is no reason to think your plan will - in fact the reverse.
I think the absolute standard in question is the status quo. Will the proposed remedy make things worse? Mencken has no remedy of his own. In the first sentence he denies that this lack is evidence in favour of the proposition that somebody else's remedy will be an improvement on leaving things alone.
Basically, yes. For instance, the alcohol prohibitionists of Mencken's day were a prime example of the sort of people he targeted with this quote.
Charles H. Spurgeon