Rationality Quotes April 2014
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
And one new rule:
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Comments (656)
Source: http://www.prequeladventure.com/2014/05/3391/
thank you for posting this - now I have something new to read!
-- Tom Stoppard, The Real Thing
Attributed to Malcolm Forbes.
If it weren't for the ban on Robin Hanson quotes, the appropriate response would be too obvious.
That said, I really wish I lived in a world where that quotation was true.
Plutarch, "De Auditu" (On Listening), a chapter of his Moralia.
This essay is also the original source of the much-quoted line "The mind is not a pot to be filled, but a fire to be ignited." It is variously attributed, but is a fair distillation of the original passage, which comes directly before the quote above:
Scott Adams on consciously controlling your own moods and feelings
--Israel Gelfand, found here
Far be it from me to argue with Gelfand, but, having done some extensive tutoring, I think that sometimes the best way to "turn these peculiarities into advantages" is to direct the student to a more suitable career path. Face it, some people just naturally suck at math. Sure, they can be drilled to do well on high-school math exams, with many times the effort an average student spends on it (that's what Kumon is great at: drills upon more drills, with gradual progress toward System 1-level mastery). But this is a waste of time and effort for everyone involved. Their time and effort is more productively spent on creative writing, dancing, debating, or whatever else these "peculiarities" hint at. Math is no exception, of course; it only gets all the attention as a hard course because of its unreasonably high requirements relative to other subjects.
I think you're right about the very general form of the quote. However, it still might be worth at least some teachers' time to look at how peculiarities might be advantages.
I'm never sure what to do with these kinds of rationality quotes. On the one hand, they are obviously literally false, but on the other hand, they may be pushing against our biases in the right direction.
I'd say the obvious thing to do is comment to that effect. So far as karma is concerned, I have no strong opinion.
"The most amazing thing about philosophy is that even though no nobody knows to do it, and even though it has never achieved anything, it is still possible to do it really badly"
--Oolon Kaloophid
Is there missing context, or did a cat philosopher walk across your keyboard? Or is it meant to evoke "writing but really badly"?
Also: strongly disagree that "it has never achieved anything". See also, "successful philosophy stops being philosophy and becomes another science" (not an exact quote).
Or with smart people who profit at the state's expense when it rescues fools from their mistakes. If it's known that folly has no adverse results, people will take more risks.
While this is true, it may also be the case that humans in the default state don't take enough risks. Indeed, an inventor or entrepreneur bears all the costs of bankruptcy but captures only some of the benefits of a new business. By classical economic logic, then, risk-taking is a public good, and undersupplied. Which said, admittedly, not all risk-taking is created equal.
That's exactly wrong. Bankruptcy releases the entrepreneur from his obligations and transfers the costs to his creditors.
Not to say that the bankruptcy is painless, but its purpose is precisely to lessen the consequences of failure.
The inventor is still bearing the costs of the bankruptcy. The creditors are bearing (some of) the costs of the failure, which is not the same thing.
This premise doesn't seem true (for all that the conclusion is accurate). Our entire notion of bankruptcy serves the purpose of putting limits on the cost of those risks, transferring the burden onto creditors. An example of an alternate cultural construct that comes closer to making the entrepreneur bear all the costs of the risk is debt slavery. Others include various forms of formal or informal corporal or capital punishment applied to those who cannot pay their debts.
That seems right, and it also seems as though the opposite is sometimes right. If a company knows it can reap the benefits of operations (e.g., of product sales) without bearing the cost of those risks associated with its operations (e.g., of pollution), is this a case of risk-taking being oversupplied?
Pollution does not seem particularly well described by risk or risk-taking; it is basically a certainty with industrial operations.
In the same way that "product sales" was intended to refer to the result (income), "pollution" was intended to refer to the result (health problems, etc.). While one might think that some result is basically a certainty, the scope and degree of real problems is frequently uncertain. An entrepreneur who weighs potential public health risks does not seem any more difficult to imagine than one who weighs potential bankruptcy risks.
At any rate, pollution is merely an example; you can take any other example you find more suitable.
It has come to be accepted practice in introducing new physical quantities that they shall be regarded as defined by the series of measuring operations and calculations of which they are the result. Those who associate with the result a mental picture of some entity disporting itself in a metaphysical realm of existence do so at their own risk; physics can accept no responsibility for this embellishment.
Sir Arthur Eddington, 1939, The Philosophy of Physical Science
"Many who are self-taught far excel the doctors, masters and bachelors of the most renowned universities." -- Ludwig von Mises
Many as an absolute number, or many as a fraction of all self-taught people? I'd agree with the former but not with the latter. IME most self-taught people end up with gross misconceptions because of this.
Absolute number. The point of the statement is not the word "many", but rather the rest of the statement. It's sort of an attempt to break the spell that a large amount of money and a fancy college is required for real learning. But yeah, the reference to the double illusion is spot on and is definitely a kink that has to be ironed out with effort and testing.
Oft discussed here, and shown to be empirically wrong in math and physics (if you define "excel" as "make notable discoveries"). Probably also wrong in comp. sci., chem, and to a lesser degree in engineering. It might still be true in some nascent areas where one does not need 10 years of intense studying to get to the leading edge.
There is one good example of an unschooled mathematician: Ramanujan. The lack of need for special equipment in maths probably has something to do with it.
Yes, he is definitely an exception. Unfortunately, I cannot think of anyone else in the last 100 years. Possibly because these days anyone brilliant like that ends up in the system. Which is a good thing, if true.
That sounds like a list of non-diseased disciplines. Is this by chance? Alternatively, it's the STEM subjects. Same thing?
On the other hand, if "excel" is "do well in life" then, I don't know. But that is the reading that the original context of the quote suggests to me:
Also an interesting view of education. One of the ancients said that the mind is not a pot to be filled but a fire to be ignited (1), and nobler teachers see the aim of their profession as the igniting of that fire in their students. However, Mises appears to take the view that this is impossible (he does not limit his criticism of education to any time and place), that teaching cannot be anything but the filling of a pot, and the igniting of the fire can come only from the inner qualities of the individual, incapable of being influenced from outside.
(1) As usually quoted. I've just added the original source of this to the quotes thread.
One of the more popular ideals of education is summarized in this quote from Malcolm Forbes:
Hmm, probably deserves a top-level comment. Anyway, the reality is that some people are happy with imitations, while others strive for creativity:
So good education is beneficial to creative types, as well, since to defy something or to add to something, you have to learn that something first.
A bit harsh, given that many people are at least a little bit creative.
Not sure if this is Mises' opinion or what he argues against, but, again, seems a bit harsh. There are always the outliers, but for the majority of people this "igniting" is a combination of nature and nurture.
Some numbers would be useful there.
Numbers would be kind of a nit-pick I would think. The point of the statement is not the word "many", but rather the rest of the statement. It's sort of an attempt to break the spell that a large amount of money and a fancy college is required for real learning.
Ayn Rand noticed this too, and was a very big proponent of the idea that colleges indoctrinate as much as they teach. While I believe this is true, and that the indoctrination has a large, mostly negative, effect on people who mindlessly accept self-contradicting ideas into their philosophy and moral self-identity, I believe that it's still good to get a college education in STEM. STEM majors will benefit from the useful things they learn more than they will be hurt or held back by the evil, self-contradictory things they "learn" (are indoctrinated with).
I'm strongly in agreement with libertarian investment researcher Doug Casey's comments on education. I also agree that the average indoctrinated idiot or "pseudo-intellectual" is more likely to have a college degree than not. Unfortunately, these conformity-reinforcing system nodes then drag entire networks populated by conformists down to "lowest-common-denominator" pseudo-philosophical thinking: uncritically accepted and regurgitated memes reproduced by political sophistry.
Of course, I think that people who totally "self-start" have little need for most courses in most universities, but a big need for specific courses in specific narrow subject areas. Khan Academy and other MOOCs are now eliminating even that necessity. Generally, this argument is that "It's a young man's world." This will get truer and truer, until the point where the initial learning curve once again becomes a barrier to achievement beyond what well-educated "ultra-intelligences" know, and the experience and wisdom (advanced survival and optimization skills) they have. I believe that even long past the singularity, there will be a need for direct learning from biology, ecosystems, and other incredibly complex phenomena. Ideally, there will be a "core skill set" that all human+ sentiences have, at that time, but there will still be specialization for project-oriented work, due to specifics of a complex situation.
For the foreseeable future, the world will likely become a more and more dangerous place, until either the human race is efficiently rubbed out by military AGI (and we all find out what it's like to be on the receiving end of systemic oppression, such as being a Jew in Hitler's Germany, or a Native American at Wounded Knee), or there becomes a strong self-regulating marketplace, post-enlightenment civilization that contains many "enlightened" "ultraintelligent machines" that all decentralize power from one another and their sub-systems.
I'm interested to find out if those machines will have memorized "Human Action" or whether they will simply directly appeal to massive data sets, gleaned directly from nature. (Or, more likely, both.)
One aspect of the problem now is that the government encourages a lot of people who should not go to college to go to college, skewing the numbers against the value of legitimate education. Some people have college degrees that mean nothing, a few people have college degrees that are worth every penny. Also, the licensed practice of medicine is a perverse shadow of "jumping through regulatory hoops" that has little or nothing to do with the pure, free-market "instantly evolving marketplaces at computation-driven innovation speeds" practice of medicine.
To form a full pattern of the incentives that govern U.S. college education, and social expectations that cause people to choose various majors, and to determine the skill levels associated with those majors, is a very complex thing. The pattern recognition skills inherent in the average human intelligence probably prohibit a very useful emergent pattern from being generated. The pattern would likely be some small sub-aspect of college education, and even then, human brains wouldn't do a very good job of seeing the dominant aspects of the pattern, and analyzing them intelligently.
I'll leave that to I.J. Good's "ultraintelligent machines." Also, I've always been far more of a fan of Hayek, but haven't read everything that both of them have written, so I am reserving final hierarchical placement judgment until then.
Bryan Caplan, Norbert Weiner, Kevin Warwick, Kevin Kelly, Peter Voss in his latest video interview, and Ray Kurzweil have important ideas that enhance the ideas of Hayek, but Hayek and Mises got things mostly right.
Great to see the quote here. Certainly, coercively-funded individuals whose bars of acceptance are very low are the dominant institutions now whose days are numbered by the rise of cheaper, better alternatives. However, if the bar is raised on what constitutes "renowned universities," Mises' statement becomes less true, but only for STEM courses, of which doctors and other licensed professionals are often not participants. Learning how to game a licensing system doesn't mean you have the best skills the market will support, and it means you're of low enough intelligence to be willing to participate in the suppression of your competition.
You certainly wrote quite a lot of ideological mish-mash to dodge the simplest possible explanation: a, if not the, primary function of elite education (as compared to non-elite education) is to filter out an arbitrary caste of individuals capable of optimizing their way through arbitrarily difficult trials and imbue that caste with elite status. The precise content of the trials doesn't really matter (hence the existence of both Yale and MIT), as long as they're sufficiently difficult to ensure that few pass.
I'm writing from an elite engineering university, and as far as I can tell, this is more-or-less our tacitly admitted pedagogical method: some students will survive the teaching process, and they will retroactively be declared superior. The question of whether we even should optimize our pedagogy to maximize the conveyance of information from professor to student plays no part whatsoever in our curriculum.
If you're right (and you may well be), then I view that as a sad commentary on the state of human education, and I view tech-assisted self-education as a way of optimizing that inherently wasteful "hazing" system you describe. I think it's likely that what you say is true for some high percentage of classes, but untrue for a very small minority of highly-valuable classes.
Also, the university atmosphere is good for social networking, which is one of the primary values of going to MIT or Yale.
Correlation/causation? Selection effects?
Neither. Obviously, the average excellence of "doctors, masters and bachelors" of the most renowned universities is higher than the average excellence of people who are self-taught. Nobody suggests that being self-taught correlates positively with excellence.
The quotation is still undoubtedly true, because there are many more individuals who are self-taught than individuals who have these credentials. It is also plausible that the variance in excellence among the self-taught is much higher. Therefore, it is trivial to identify self-taught individuals who are more knowledgeable than most highly credentialed university graduates.
In fact, as a doctoral student in applied causal inference at a fairly renowned university, I can identify several self-taught Less Wrong community members who understand causality theory better than I do.
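The size-plus-variance argument can be sketched numerically with a quick Monte Carlo. Every group size and distribution parameter below is invented purely for illustration:

```python
import random

random.seed(1)

# Hypothetical "excellence" scores, illustration only: a small credentialed
# group with a higher mean, and a much larger self-taught group with a
# lower mean but higher variance.
credentialed = [random.gauss(1.0, 1.0) for _ in range(10_000)]
self_taught = [random.gauss(0.0, 1.5) for _ in range(1_000_000)]

credentialed_mean = sum(credentialed) / len(credentialed)

# Count self-taught individuals who exceed the credentialed average.
# The count is large simply because the self-taught pool is bigger and
# more spread out, even though its average is lower.
above_average = sum(score > credentialed_mean for score in self_taught)
print(above_average)
```

With numbers anything like these, hundreds of thousands of self-taught individuals clear the credentialed average, which is all that "many" (as an absolute number) requires.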
Whilst arguing that uncertainty is best measured using numbers and probabilities:
[missing the point]
On the contrary, combining adverbs is easy. If X is very uncertain, and Y is very uncertain, then X - Y is very, very uncertain. [/missing the point]
^_^
Why isn't it "very, very uncertain, uncertain"? Anyway, 'very' is an adjective. 'Verily' is the adverb.
But without the math to prove it, you may wrongly conclude that the uncertainties cancel out and X - Y is quite certain indeed.
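There is real math behind the joke: for independent random variables, variances add under subtraction, Var(X - Y) = Var(X) + Var(Y), so the uncertainties never cancel. A minimal Python check, where the particular distributions are arbitrary stand-ins for "very uncertain" quantities:

```python
import random
import statistics

random.seed(0)

# Two independent "very uncertain" quantities, each with spread 1.0.
xs = [random.gauss(10.0, 1.0) for _ in range(100_000)]
ys = [random.gauss(10.0, 1.0) for _ in range(100_000)]

diff_sd = statistics.stdev(x - y for x, y in zip(xs, ys))

# Var(X - Y) = Var(X) + Var(Y) for independent X and Y, so the standard
# deviation of the difference is about sqrt(2) -- larger than either
# input's spread, and nowhere near zero.
print(round(diff_sd, 2))
```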
– Said Achmiz, in a comment on Slate Star Codex’s post “The Cowpox of Doubt”
I'm not sure that's quite in the spirit of the thread rules, what with how closely tied Slate Star Codex is to the LW community. But it's a good enough abuse of Solzhenitsyn that I'm upvoting it anyway.
Am I the only one who finds it annoying how the "do not quote LW rule" has been creeping into ever broader interpretations?
Hmm. It's an interesting point.
I'm not entirely clear on the purpose of the rule. It makes sense to not just increase the redundancy of anything people have said in other threads that have already got a lot of attention, but I'm sure there's plenty of interesting stuff buried deep in comment threads that haven't got much light and might be worth sharing. Conversely, there will be some quotes here from outside LW/OB that a high proportion of readers have seen already.
So it's definitely something that made sense when the LW/OB community was smaller and there wasn't much good stuff that people weren't seeing anyway, but perhaps it's time to relax the rule a little, replacing its letter with its substance.
I believe the purpose was to bring material to LW from outside rather than quoting each other (and especially, quoting Eliezer), to avoid an echo chamber effect. There was once an experimental LW Quotes Thread, but the experiment has not been repeated.
I don't have a strong view about whether LW regulars posting on other LW regulars' blogs should be excluded from the quotes threads, but I incline against the practice. It was a good quote though.
Which side do you incline against?
Against having such quotes.
I can't comment on the size (so LW is growing?), but I have a vague memory that a long time ago (several years back) people did post LW quotes. Since LW hasn't existed that long, I suppose it was the case at its inception. I can't say for sure, but Eugine's post seems to suggest that as well; otherwise the rule wouldn't have been "creeping into" broader interpretations. Either way, it should be easy to check. I, too, think it is worthwhile to post LW quotes. I remember (I do!) reading those and being led to read the original articles whence they came.
There was a separate thread for that for a while.
I don't think LW/OB quotes were ever allowed, but MoR quotes used to be.
I think we may have cracked down on Hanson quotes too.
The original quotation on LW.
Yuval Levin in the National Review
To the extent that we can overcome our current limits, we have to understand them first. We should beware false humility and rationalization of existing limits (e.g. deathism).
Eight Ways to Build Collaborative Teams by Lynda Gratton and Tamara J. Erickson
This seems applicable as the LessWrong community is "large, virtual, diverse, and composed of highly educated specialists" and the community wants to solve challenging projects.
Is the paper worth reading in that it offers solutions to this problem?
These are the key points from page 7:
I have seen failure at this lead to a decline in participation, especially by key contributors who didn't see their effort honored or supported.
For LW this might mean key contributors supporting the creation or operation of benefits like the new business-networking and user-page initiative, or the operation of the site in general.
On LW the active members already act as role models.
I can only guess that that is what CFAR does.
Building real-life relationships is done by meetups. I see the meetup resources as an effort to support this. But maybe someone could actively contact the meetup organizers and look whether there is potential for improvement.
I felt this at the Berlin event.
I can't quickly evaluate this. Ideas?
This follows from LW being a community and not a business.
There was a post and discussion on roles but I can't find it. Maybe this needs more structure.
-- many different people, most recently user chipaca on HN
It occurs to me that "being wrong" can be divided into two subcategories -- before and after you start seeing evidence or arguments which undermine your position.
With practice, the feeling of being right and seeing confirming information can be distinguished from the feeling of being wrong and seeing undermining information. Unfortunately, the latter feeling is very uncomfortable, and it is always tempting to look for ways to lessen it.
Hmm, what about such things as feeling that you need to defend the truth from criticism rather than find a way to explain it better? Or nagging doubts that you're ignoring, or a feeling that your opponents are acting the way they are because they're stupid or evil? Or wanting to censor someone else's speech? I take all these things as alarm signals.
A communist friend of mine once said, after I'd nailed her into a corner in a political argument about appropriate rates of pay during a firemen's strike, "Well, under socialism there wouldn't be as many fires." I reckon that there must be a feeling associated with that sort of thing.
Defending the truth from criticism also feels exactly the same as defending what you wrongly think is the truth from criticism.
The feelings you list correspond to very common ways people behave. So they're very weak evidence that you're wrong about something. Unless you're a trained rationalist who very rarely has these feelings / behaviors.
Most people first acquire a belief - whether by epistemologically legitimate means or not - and then proceed to defend it, ignore contrary evidence, and feel opponents to be stupid, because that's just the way most people deal with beliefs that are important to them.
This is the most forceful version I've seen (assumed it had been posted before, discovered it probably hasn't, won't start a new thread since it's too similar):
Kathryn Schulz, Being Wrong
But I'm not comfortable endorsing either of these quotes without a comment.
chipaca's quote (and friends) suggest to me that
Schulz's quote (and book) suggest to me that
I'd prefer to emphasize that "You are already in trouble when you feel like you’re still on solid ground," or said another way:
Becoming less wrong feels different from the experience of going about my business in a state that I will later decide was delusional.
Schulz hasn't been quoted here before, but you might've seen my use of that quote on http://www.gwern.net/Mistakes, to which I will add a quote of Wittgenstein making the same point much more concisely:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
Hans Moravec, Wikipedia/Moravec's Paradox
The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. The mental abilities of a four-year-old that we take for granted – recognizing a face, lifting a pencil, walking across a room, answering a question – in fact solve some of the hardest engineering problems ever conceived... As the new generation of intelligent devices appears, it will be the stock analysts and petrochemical engineers and parole board members who are in danger of being replaced by machines. The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
Steven Pinker, Wikipedia/Moravec's Paradox
What was the ratio of phone time spent talking to human vs computer receptionists when Pinker published this quote in 2007? For that matter, how much non-phone time was being spent using a website to perform a transaction that would have previously required interaction with a human receptionist?
Pinker understood AI correctly (it's still way too hard to handle arbitrary interactions with customers), yet he failed to predict the present, much less the future, because he misunderstood the economics. Most interactions with customers are very non-arbitrary. If 10% need human intervention, then you put a human in the loop after the other 90% have been taken care of by much-cheaper software.
If you were to say "a machine can't do everything a horse can do", you'd be right, even today, but that isn't a refutation of the effect of automation on the economic prospects of equine labor.
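The economic point above (automate the routine 90%, keep a human in the loop for the rest) is easy to make concrete. All the prices and volumes below are invented for illustration:

```python
# Invented figures, purely to illustrate the 90/10 human-in-the-loop point.
calls_per_day = 1000
human_cost_per_call = 2.00      # assumed fully-loaded labor cost
software_cost_per_call = 0.02   # assumed per-call infrastructure cost
automated_share = 0.90          # routine interactions handled by software

all_human_cost = calls_per_day * human_cost_per_call
mixed_cost = (calls_per_day * automated_share * software_cost_per_call
              + calls_per_day * (1 - automated_share) * human_cost_per_call)

# Even though 10% of calls still need a human, the blended cost is
# roughly a tenth of the all-human baseline.
print(all_human_cost, mixed_cost)
```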
Except that in exponentially-increasing, computation-technology-driven timelines, decades are compressed into minutes after the knee of the exponential. The extra time a good cook has isn't long.
Let's hope that we're not still paying rent then, or we might find ourselves homeless.
Raising Steam, Terry Pratchett
Regarding the first steam engine in Pratchett's fictional world.
Relevant is the Amtal Rule on this same page: http://lesswrong.com/r/lesswrong/lw/jzn/rationality_quotes_april_2014/as28
This quote made me read the book, and I wasn't disappointed.
The overall arc of the Discworld is stunning; in retrospect, it recapitulates the rise of civilization well. Raising Steam is perhaps not the last book, but it wouldn't be a bad place to stop if it were.
Terry Coxon
All I'm getting out of this is that the quoted author fails to understand the ability of great minds. Is there a context I'm missing?
Being ready for failure is not quite the same thing as considering success impossible.
The context is that economics is in, shall we say, an earlier stage of development than engineering, so we should be more conscious of the risk of economic tinkering failing than we need be of whether our bridge or plane falls apart underneath us.
"Did many people die?"
"Three thousand four hundred and ninety-two."
"A small proportion."
"It is always one hundred percent for the individual concerned."
"Still..."
"No, no still."
-- Iain M. Banks, Look to Windward
Does this quote have any rationalist content beyond the usual anti-deathism applause light?
And here I looked at that and saw an example of how not to do "shut up and multiply", though I suppose it could also be a warning about scope insensitivity / psychophysical numbing, if the risk at hand required an absolute payment to stave off rather than a per-capita payment, since in the former case only absolute numbers matter, and in the latter case per-capita risks matter.
Maybe I need to include more context. This conversation occurs after the multiplication was done. This was discussing the aftermath, which had been minimized as much as the minds in question could manage. I took it to mean that, once you have made the best decision you can, there is no guarantee that you will be happy with the outcome, just that it would likely have been worse had you made any other decision.
I think the quote's inability to carry that context and make your interpretation clear means that it's a bad rationality quote, because it's far too easily taken as a 'consequentialism boo!' quote.
1930 Lev Vygotsky in Mind and Society (transcribed by Andy Blunden and Nate Schmolze)
Online: http://www.cles.mlc.edu.tw/~cerntcu/099-curriculum/Edu_Psy/EP_03_New.pdf
How is this a rationality quote? There have been many theories of child development. What singles this one out as noteworthy?
Because it is a key insight (stated in 1930) into the development of practical intelligence, i.e. intelligence applicable to general and real-life problems, which the AI community arrived at only in the late 1980s:
http://en.wikipedia.org/wiki/Embodied_cognition#History_of_AI
This is a great tagline for the doctrine of Original Sin.
"Even if it's not your fault, it's your punishment."
Clifford Truesdell
I don't see why an equation can't be nonsensical. Perhaps the nonsense is easier to spot when expressed in symbols, or then again perhaps not.
Equations can be nonsensical, but it's harder to write a nonsense equation than a nonsense sentence (like the old joke: it's easy to lie with statistics, but it's easier to lie without them). In a way this was the unpleasant surprise of Gödel's incompleteness theorem; before that we'd hoped that every well-formed proposition was true or false and could be proven to be so.
This is beautiful: I can't turn it into equations. Does that refute it or support it?
There are symbol-juxtapositions which are syntactically or semantically disconnected from any model set in ZFC. There are no sets in ZFC which are similarly separated from statements in a suitable language.
This looks like the sort of thing that I usually find enlightening, but I don't understand it. Could you repeat it in baby-speak?
You can write nonsense formulas on paper which don't correspond to theorems about anything. You can't construct nonsense universes which aren't described by theorems anywhere.
Words only mean anything because we interpret them to correspond to the real world. In the absence of words, the real world continues existing.
Did you try? Each sentence in the quote could easily be expressed in some formal system like predicate calculus or something.
On thrust work, drag work, and why creative work is perpetually frustrating --
"Each individual creative episode is unsustainable by its very nature. As a given episode accelerates, surpassing the sustainable long term trajectory, the thrust engine overwhelms the available supporting capabilities. ... Just as momentum builds to truly exciting levels…some new limitation appears, squelching that momentum. ...The problem is that you outran your supporting capabilities and that deficit became a source of drag. Perhaps you didn’t have systems in place to capture leads. Perhaps you lacked the bandwidth necessary to follow up on all the new opportunities. Perhaps, due to lack of experience, you pursued the wrong opportunities. Perhaps you just didn’t know what to do next – you outran your existing knowledge base. In one way or another new varieties of drag emerge. The accelerating curve you had been riding becomes unsustainable and you find yourself mired in the slow build of the next episode. This is what we experience as anti-climax and temporary stagnation." -- Greg Raider, from his essay "A Pilgrimage Through Stagnation and Acceleration"
The whole piece is worth reading, it's really good -- http://onthespiral.com/pilgrimage-through-stagnation-acceleration
Hat tip to Zach Obront for linking me to it originally.
Donald Knuth on the difference between theory and practice.
Duplicate.
-- Daniel Dennett, Intuition Pumps and Other Tools for Thinking
Are we sure about this? Einstein's idea of riding along with a light beam was super-useful and physically impossible in principle. Whereas the experiment I just thought of where I pour my cup of tea on my trousers I can almost not be bothered to do.
This is funny. Until I read your comment, I was misreading the original quote; I didn't notice the "inversely" part. I was implicitly thinking that the quote was claiming that the farther the thought experiment is from reality, the more useful it is. I guess my physicist biases are showing.
I think that's my point! It sounds just as profound without the 'inversely'.
Ceteris paribus, then. On average, a thought experiment along the lines of "what if I poured this stuff on my trousers" is of much more practical use and tells you much more about reality than a thought experiment along the lines of "what if I could ride around on [intangible thing]". The most realistic thought experiments are the ones we do all the time, often without thinking, and which help us decide, for example, not to balance that cup of tea right on the edge of the table. Meanwhile, only very clever scientists and philosophers with lots of training can wring anything useful out of really far-out "what if I rode on a beam of light"-type thought experiments, and even they screw it up all the time and are generally well-advised not to base a conclusion solely on such a thought experiment. As I understand it, Einstein's successful use of gedankenexperiments to come up with good new ideas is generally considered evidence of his exceptional cleverness.
(note: I know very little about this topic and may be playing very fast and loose. I think the main idea is sensible, though)
I assume that the reader is familiar with the idea of extrasensory perception, and the meaning of the four items of it, viz., telepathy, clairvoyance, precognition and psychokinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming.
Alan Turing (from "Computing Machinery and Intelligence")
A particularly relevant quote, given Yvain's recent http://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/
That is an exceedingly interesting article. Thanks for the link.
Can you provide some context? I don't understand: the claim that the evidence for telepathy is very strong is surely wrong, so is this sarcasm? A wordplay?
Turing's 1950 paper asks, "Can machines think?"
After introducing the Turing Test as a possible way to answer the question (in, he expects, the positive), he presents nine possible objections, and explains why he thinks each either doesn't apply or can be worked around. These objections deal with such topics as souls, Gödel's theorem, consciousness, and so on. Psychic powers are the last of these possible objections: if an interrogator can read the mind of a human, they can identify a human; if they can psychokinetically control the output of a computer, they can manipulate it.
From the context, it does seem that Turing gives some credence to the existence of psychic powers. This doesn't seem all that surprising for a British government mathematician in 1950. This was the era after the Rhines' apparently positive telepathy research — and well before major organized debunking of parapsychology as a pseudoscience (which started in the '70s with Randi and CSICOP). Governments including the US, UK, and USSR were putting actual money into ESP research.
Yes, but also remember that Turing's English, shy, and from King's College, home of a certain archness and dry wit. I think he's taking the piss, but the very ambiguity of it was why it appealed as a rationality quote. He's facing the evidence squarely, declaring his biases, taking the objection seriously, and yet there's still a profound feeling that he's defying the data. Or maybe not. Maybe I just read it that way because I don't buy telepathy.
I think Turing's willingness to take all comers seriously is something to emulate.
Hodges claims that Turing at least had some interest in telepathy and prophecies:
Alan Turing: The Enigma (Chapter 7)
Eadem Mutata Resurgo
[the] Same, [but] Changed, I [shall] Rise
On the tombstone of Jacob Bernoulli.
Some context may be useful. (Sadly, the people who made the tombstone screwed up[1] and put the wrong sort of spiral on it.)
[1] I suppose this is a rather clever pun, but only by coincidence.
Voted up for the pun! I liked it for the cryonics reference. Like in Lovecraft.
Jerry Spinelli, Stargirl
So as to keep the quote on its own, my commentary:
This passage (read at around age 10) may have been my first exposure to an EA mindset, and I think that "things you don't value much anymore can still provide great utility for other people" is a powerful lesson in general.
Cracked
Finding out that you're stupid (or ignorant) is an important start. I don't recommend insulting people because they've started rather than continued the job, especially if they're young.
I don't see how that's any different from all the other age groups ;-).
We are out of it, so we can bitch about it ;-).
Being able to patronise the young is the only advantage of age
Failing health is the only disadvantage of age. In every other way, the years just make things better.
Other people and governments knowing about it, and changing how rules and expectations apply, are pretty darn big disadvantages for the young, the old, and those in between, in different situations and ways.
This is too abstract for me to have any idea what you're talking about.
-- Max Tegmark, Scientific American guest blog, 2014-02-04
Well... Einstein didn't need a complete theory of quantum electrodynamics to predict the coefficients of spontaneous emission from thermodynamical arguments; I don't think Bekenstein and Hawking need a complete theory of quantum gravity to make predictions other than those of classical GR either.
I would think the first objection to that line of reasoning would be that we know General Relativity is an incomplete theory of reality and expect to find something that supersedes it and gives better answers regarding black holes.
Better answers, yes, but I'd expect the new answers to be at least quite like the GR answers. I mean, probably no singularities in the real theory, but lots of time-warping and space-whirling, surely. He only says 'take seriously', not 'swallow whole including the self-contradictory bits'.
Douglas Adams, Hitchhiker's Guide to the Galaxy
Thanks for this one. It's been some time since I re-read Douglas Adams, and I had forgotten how good he can be. It makes so much sense reading this right after reading "Bind yourself to Reality". Had a good long guffaw out of this one. :-)
"It is one thing for you to say, ‘Let the world burn.' It is another to say, ‘Let Molly burn.' The difference is all in the name."
-- Uriel, Ghost Story, Jim Butcher
I love the character of Uriel in the Dresden Files. I find his interpretation of the Fallen very interesting also.
-- Henry Hazlitt, Economics in One Lesson
And it seems to be going pretty well!
Ah, but you have not seen the counterfactual.
It is, in fact, a very good rule to be especially suspicious of work that says what you want to hear, precisely because the will to believe is a natural human tendency that must be fought.
- Paul Krugman
(Edited to add context)
Context: The speakers work for a railroad. An important customer has just fired them in favor of a competitor, the Phoenix-Durango Railroad.
It gets at the idea talked about here sometimes that reality has no obligation to give you tests you can pass; sometimes you just fail and that's it.
ETA: On reflection, what I think the quote really gets at is that Taggart cannot understand that his terminal goals may be only someone else's instrumental goals, that other people are not extensions of himself. Taggart's terminal goal is to run as many trains as possible. If he can help a customer, then the customer is happy to have Taggart carry his freight, and Taggart's terminal goal aligns with the customer's instrumental goal. But the customer's terminal goal is not to give Taggart Inc. business, but just to get his freight shipped. If the customer can find a better alternative, like a competing railroad, he'll switch. For Taggart, of course, that is not a better alternative at all, hence his anger and confusion.
(Apologies for lack of context initially).
Without context, it's a bit difficult to see how this is a rationality quote. Not everyone here has read Atlas Shrugged...
I've read AS a while ago, and I still don't remember enough of the context to interpret this quote...
-- Richard Fumerton, Epistemology
"Go work in AI for a while, then come back and write a book on epistemology," he thought.
Upon reading this, he wanted to map out the argumentative space in his head and decided to try to draw a line at one end, saying "Let's not get nuts. Mercury thermometers can react differentially to temperature, but they don't know how hot it is."
[citation needed]
Really? So, say, if I put a bone on the other side of the river, the dog doesn't know that it can swim across?
How would one tell?
First, you offer them a sequence of bets such that...oh wait.
Do dogs not know that bones are nice?
-- Meta --
Shouldn't this be in Main rather than Discussion? I PM'ed the author, but didn't get a response.
EDIT: Thanks.
-Daniel Dennett, Intuition Pumps and Other Tools for Thinking, Chapter 18 "The Intentional Stance" [Bold is original]
Reminded me of the idea of 'hacking away at the edges'.
As far as I understand, he actually does define his terms. Dennett defines a mind as a rational agent/decision algorithm (subject to evolutionary baggage and bugs in the algorithm). Please correct me if I'm wrong.
At this point in the book, he certainly hasn't reached that conclusion. He's merely given parameters under which taking the Intentional Stance is a good idea; when it's useful to treat something as having a mind, beliefs, desires, etc. This, he says, will be a useful stepping stone to figuring out what minds and beliefs and desires really are, and how to know where they exist in this world.
The mathematician and Fields medalist Vladimir Voevodsky on using automated proof assistants in mathematics:
[...]
[...]
[...]
[...]
From a March 26, 2014 talk. Slides available here.
Computer scientists seem much more ready to adopt the language of homotopy type theory than homotopy theorists at the moment. It should be noted that there are many competing new languages for expressing the insights garnered by infinity groupoids. Though Voevodsky's language is the only one that has any connection to computers, the competing language of quasi-categories is more popular.
I know you're not supposed to quote yourself, but I came up with a cool saying about this a while back and I just want to share it.
Computer proof verification is like taking off and nuking the whole site from orbit: it's the only way to be sure.
A video of the whole talk is available here.
And his textbook on the new univalent foundations of mathematics in homotopy type theory is here.
It is misleading to attribute that book solely to Voevodsky.
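As a concrete sketch of what a proof assistant actually checks (a toy example of my own, not taken from Voevodsky's talk): in Lean, the kernel re-verifies every inference step, so a proof is accepted only if it is fully valid.

```lean
-- The kernel checks this appeal to a library lemma; an invalid step
-- would be rejected outright rather than waved through.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A proof by induction, also machine-verified step by step.
theorem zero_add_example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```

This is the sense of "nuking the site from orbit" above: nothing is taken on trust, not even a referee's intuition.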
-- Alfred Adler
ADDED: Source: http://en.wikiquote.org/wiki/Alfred_Adler
Quoted in: Phyllis Bottome, Alfred Adler: Apostle of Freedom (1939), ch. 5
Problems of Neurosis: A Book of Case Histories (1929)
Comedian Simon Munnery:
from The Last Samurai by Helen DeWitt
Nassim Taleb
By that standard, all academic disciplines are BS disciplines.
I believe that is the intended meaning, yes.
Can't be. You can't draw a distinction within a category by separating it into two subcategories one of which is empty.