Rationality Quotes November 2014
Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Comments (337)
Clive Crook on Bloomberg View
Napoleon Bonaparte, from Napoleon: In His Own Words (1916).
Technically true, although Mao managed to get remarkably close.
Tucker Max
Everything Is Problematic, an account of getting out of radical left wing politics.
I don't buy it. We have many existing laws and spending programs that make us worse off than not having them (or, equivalently, leaving it up to the market rather than the taxpayers to provide them). The free market is known to work well enough, and broadly enough, that demanding "What would you replace it with?" when someone proposes ending one of those laws or programs is uncalled for. (If anyone really does doubt that the market will do better, the thing to do is to try it and see, not to demand proof that can't exist because the change in question hasn't been tried recently.) After a few repetitions, I simply lump the asker in with the kind of troll whose reply to every comment is "Cite?" and add him to my spam filter.
An explicit argument that lack of regulation would produce better results than the current regulatory system is not the same thing as disliking and actively opposing the current system yet having no idea what to replace it with.
This strikes me as a common failing of rationality. Personally I've never really noticed it in politics though. People arguing politics from all corners of the spectrum usually know exactly what they want to happen instead, and will advocate for it in great detail.
However, in science it is extremely common for known broken theories to be espoused and taught because there's nothing (yet) better. There are many examples from the late 19th/early 20th centuries before quantum mechanics was figured out. For example, the prevailing theory of how the sun worked used a model of gravitational contraction that simply could not have powered the sun for anything like the known age of the earth. That model wasn't really discarded until the 1920s and 30s when Gamow and Teller figured out the nuclear reactions that really did power the sun.
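The mismatch is easy to check with a back-of-envelope Kelvin-Helmholtz estimate. The constants below are standard solar values; order-unity factors are ignored, so this is just a sketch of the order of magnitude:

```python
# Kelvin-Helmholtz timescale: roughly how long gravitational contraction
# alone could power the Sun at its current luminosity.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
R_SUN = 6.96e8      # solar radius, m
L_SUN = 3.828e26    # solar luminosity, W
SECONDS_PER_YEAR = 3.156e7

# Available energy ~ G * M^2 / R (ignoring order-unity factors),
# divided by the rate at which it is radiated away.
t_kh_seconds = G * M_SUN**2 / (R_SUN * L_SUN)
t_kh_years = t_kh_seconds / SECONDS_PER_YEAR
print(f"~{t_kh_years / 1e6:.0f} million years")
```

This comes out to a few tens of millions of years, versus the roughly 4.5-billion-year age of the Earth, which is why gravitational contraction could not be the answer.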
There are many examples today, in many fields, where the existing model simply cannot be accurate. Yet until a better model comes along scientists are loath to discard it.
This irrationality, this unwillingness to listen to someone who says "This idea is wrong" unless they can also say "and this alternative idea is right" is a major theme of Thomas Kuhn's The Structure of Scientific Revolutions.
I've asked SJs whether there was ever a time in their lives when they thought they were in a group that was satisfyingly inclusive, whether there was some experience they were trying to make more common. Admittedly, I only asked a few people (and with tact set on maximum). The only answer I got was no.
It's possible I was overgeneralizing in several ways, but I was asking because it seemed to me that what I'd read of anti-racism had a tone of "something hurts, it's urgent to stop the pain", but there was no positive vision.
This might have something to do with political choices (and maybe even choices inside businesses) which actually make life better vs. those that don't. There's always some sort of vision, but maybe there are issues related not just to whether pieces of the vision are accurate, but whether it's clear enough in appropriate ways. For example, was part of the problem with centralized economies that no one had a clear idea of how information would get transmitted? (This is a real question.)
That someone has never experienced some state X does not imply that they do not have a vision for the state X they wish to achieve in the future. If you want to know what someone's positive vision for the future is, ask them, "What is your vision for a better future?"; not "Have you experienced something better than this in the past?" These are two very different questions.
Most people grow up in some status quo.* That doesn't mean they can conceive of no alternative to that status quo.
SJs? Can you elaborate? I'm not sure what you're referring to.
I think in this context it refers to people who advocate for social justice.
Yes, that's it-- I think SJs is more polite than SJWs (Social Justice Warriors), but I'm guessing about that.
It's a rather confused area of terminology-- there's an older use of "social justice" (note lack of capitalization) which, so far as I know, consisted of advocating for various groups, but didn't include the ideas of privilege and calling out.
What do the people that people call SJWs call themselves?
Feminists, antiracists etc. Often something like intersectional something or other. They don't have a name that most of them are happy with, which is why a name that was just a joke about them 'fighting for social justice' stuck.
There are a lot of terms involved.
A person might say: I'm a third wave feminist. They also might say: I'm an ally.
Generally, progressives.
SJWs to progressives are like crusaders to Christians.
The trouble with "SJs" is that it looks like an abbreviation but there doesn't seem to be anything it stands for. "Social Justices"? (That would mean judges who like to party, I guess.) "Social Justicers"?
Maybe something longer is needed. "SJ people"? "SJ folks"? "The online Social Justice movement"?
Given that no revolution ever produced the system that the people who threw the revolution planned to introduce, I don't think it's an easy case to argue that you need to have a specific plan.
Waterfall is not a good design paradigm.
No plan survives contact with the enemy (or reality), but that doesn't mean you can just wing it. Of course you need a specific plan, but you also need the ability to change that plan as needed, in a controlled and sensible way. Realising the problems of advanced planning means you need to spend more time, not less, on working out what you are trying to do.
I'm reminded of Eisenhower:
-- From a speech to the National Defense Executive Reserve Conference in Washington, D.C. (November 14, 1957); in Public Papers of the Presidents of the United States, Dwight D. Eisenhower, 1957, National Archives and Records Service, Government Printing Office, p. 818; ISBN 0160588510, 9780160588518
Then why does every modern startup do agile development instead of spending more time on planning?
Well, probably mostly because it's trendy. But as for why people who choose to do agile development for sensible reasons do so, I suspect it's because doing planning and data collection in such a way that they inform one another has better results than planning in the absence of data or data collection in the absence of a plan.
Why do you ask?
If you ask people to give you a clear alternative to a political system, then the only way to give you what you're asking for is to give you something that might work in theory but that isn't based on empirical reality.
One of the big problems with Soviet-style communism was that a central planner made a plan that wasn't well based on empirical reality.
As a result, there are valid reasons for part of today's left to dislike the idea of central planning.
We do agile development where I work. That doesn't mean we don't plan. On the contrary. Agile development doesn't mean throwing a bunch of developers in a room and telling them "do whatever comes to mind" without any thought to what might come out of the process. It means constantly updating your plans, in an adaptive and iterative way.
This is trivially true if you mean that no revolution produced the desired result up to the end of time. But then, the same is true of anything any human being does.
If you interpret it in a narrow, nontrivial way such as "no revolution produced a result that was close to the desired result and took at least as long to become unrecognizable as the existing order would have taken to become unrecognizable", then there are several candidates, including the American Revolution and several post-Soviet states (if you count leaving the USSR as a revolution).
I'm not saying "result" but "system". The US Constitution got written after the US became independent, not before.
Some countries of the former USSR did copy the Western style of democracy and free markets. They could do that by letting other countries send people to tell them how to run their country; it wasn't that they themselves knew how to create a democratic state with free markets.
If my project is to lock my apartment with my key, then I can be quite certain that the result will look roughly like what I planned beforehand. The bigger the project, the harder it is to plan everything beforehand.
As a result, big software projects these days don't get fully planned in advance via waterfall but are created in an agile way. Creating a substantially new political system, as opposed to just copying an existing one, is much more complex than a software project and is therefore even less doable via waterfall.
Perhaps a more precise point is that the first American government failed. John Hanson and the other nine Presidents of the United States under the Articles of Confederation were operating the true government they threw the revolution for. It failed almost immediately -- you would be astonished at how hard it was to convince someone to run the country, hence the extremely high turnover of Presidents.
I, and many other people here on Less Wrong, live in a massive, surprisingly enduring Plan B of a government.
[It's worth pointing out I like this one better, because we can find appropriately qualified staff, which is, ya know, pretty good. But alas, I was not a father of the American Revolution.]
They wanted to create a government which was democratic, at least to a certain extent. They had a revolution. And they got one. It's true that some of the exact details weren't written down until after the Revolution, but they didn't have a revolution and then get a dictatorship, or something unsustainable, or find that all private property was abolished two years later--they got something which was clearly within the parameters they were trying to achieve.
That's taking a very narrow interpretation of "planned to introduce". If you had asked them "when you overthrow the Communists, do you plan to have a free market system", they would have said yes. I count that as "planning to introduce a free market system, and getting what they planned for".
The point of that sentence was to rule out saying "But if you look at the government over 200 years later, they clearly wouldn't have anticipated high tax rates and gay marriage, so they didn't get the system they wanted". If the system produced by the revolution is at least as stable as a non-revolutionary system, even if it has enough instability to show up after 200 years, it should count.
I think quite a few people on the left can tell you a few catchphrases about what their alternative system should look like that are as vague as "democratic".
I don't particularly agree with this quote, but the link it comes out of is excellent.
It's amazing!
Holographic Model of The Universe
I first came across this quote in a talk about quantum physics. Funny that it seems to come from an esoteric book. Crisis of faith, a drug withdrawal?
If you can't get people to take something seriously, sometimes it's because it's plainly wrong. The concept of "addicted to their beliefs" relieves you of having to listen to them. "Addiction" is no more an explanation of anything than "emergence".
This is in a context of wondering why "Western science" (an absurd concept) "has devoted several centuries to not believing in the paranormal." I shall resist the tu quoque against the author and just say that I think that book is made of wrong.
The YouTube link is to a German-language presentation. I have only a fragment of German, but with Google Translate I gather that the speaker was (d.2011) a management trainer and motivational speaker. Not good qualifications for talking about quantum physics. "'Alles ist mit allem verbunden' ...ja ja und die Erde ist eine Scheibe," as the first comment says.
You are arguing against a strawman. Saying someone acts like an addict is not the same thing as saying that he is an addict. It's especially not an explanation.
She probably has more formal qualifications than Eliezer and is more skeptical about her knowledge of quantum physics than Eliezer is.
That is the point I was intending. The author of that book seems to use "addiction" as an explanation. "Why do these people not pay me any attention?" he asks himself. "I know, it's because they're addicted to their beliefs!"
Does she have knowledge to be sceptical about? I'm not going to slog through two hours of video, even if it were in English. Her works listed at de.wikipedia.org are on other subjects. No, there is nothing here that suggests to me that looking further into it would be useful.
If you read that article you will find that Spektrum (the German version of Scientific American) wrote her a well-meaning obituary.
I wonder why. Via Google Translate, the obituary says only:
(I think "Autorin der ersten Stunde" actually means "founding author" in this context. G & G is "Brain and Mind", a section of the magazine. ETA: Which is a perfectly good reason for giving her an obituary.)
It then has links to a few of her articles, but the ones I sampled were on topics in training and personal development, sprinkled with neuroscience. No QM.
The related skill is communicating science to a broad public in a way the public understands. That's what she did at Spektrum and what she does in that video. The room in which she's holding that lecture is a proper university hall at the Technische Universität München.
The lecture doesn't say something that would damage her reputation among academics.
Your stereotypical patterns don't work well in her case.
I'm not accusing Birkenbihl of peddling woo. The original comment posted by Roho does come from a book of woo, and Roho associated her name with the idea.
As I say, I'm not going to search two hours of video in a language I hardly know to find out what Birkenbihl said on the subject; so I do not know if Roho's attribution to Birkenbihl is accurate. I can imagine something of the sort being said in a popular exposition of the reception of quantum mechanics. But whether she said anything like it or not, the idea expressed in the quote is a poor one, especially so in the context it was quoted from.
Well, I don't want to argue about this too much, so just to clarify:
Birkenbihl quoted Bernie Siegel with "If you want to change somebody's beliefs, he acts like an addict.", in the context of the famous Max Planck quote that new scientific ideas prevail not because they are accepted, but because those who oppose them die out. In this context, I found the idea interesting, therefore I placed the quote here.
She did not mention that esoteric book. But I searched for the quote in order to provide a source, found it in that book, was mildly amused by it, but thought too little about it.
As it reads in the book, Bernie Siegel sounds somewhat sulky, too, that people do not accept his ideas about medicine. Me, I have no idea what they are. But in this context, the quote is indeed rather unhelpful (to put it politely).
The talk about quantum physics was OK, although nothing to write home about. She happily declares that she knows next to nothing about it, then claims that nobody understands it, which is of course wrong. She did not mention some very important concepts (decoherence, Feynman paths). At least, there was "many worlds" and no "wave function collapse", which is not so bad for a talk from the 1990s.
Yes, and right after that he goes on:
...and seems not to notice that he himself has never questioned the beliefs with which he grew up?
The talk about quantum mechanics was nice for non-mathy laymen, although it barely scratches the surface. After reading the quantum physics sequence here, I sometimes like to try out stuff like this and compare them to it.
I would not try to use "addiction" as an explanation. I just liked the comparison between trying to get somebody to change a long-held belief and trying to get him to stop smoking.
--Seneca, on judging actions by expected value.
--Marcus Aurelius
Funny, I upvoted your other quote and downvoted this one.
Anyhow, if the object of your affections surprises you with a cuddle and your well-being is unimpacted, I wouldn't call that sane.
I think "tying it to" should be read in the sense of an anchor, not in the sense of "is impacted by."
James Hamilton on how Amazon speeds up AWS networking by implementing only part of the required networking tasks.
(emphasis mine)
Is there anything like a general theory of satisficing that tells you when it's a good idea? It's reasonably easy to decide in individual cases provided you've got a lot of specific quantitative information, but suppose you don't have a lot of specific information and you only know qualitative facts about how something works within a system.
If not, should people just default to satisficing unless there are obvious reasons it will fall short of optimality, or should they do the opposite? I'm inclined to favor the former, but am interested to hear other people's perspectives.
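In the absence of a general theory, one way to build intuition is simulation. Below is a hypothetical sketch (all numbers invented) where options have random values and each evaluation costs something; the satisficer stops at the first "good enough" option, while the optimizer examines everything:

```python
import random

def search(options, threshold=None, cost_per_look=0.05):
    """Pick an option: satisfice (stop at the first value >= threshold)
    or optimize (threshold=None: examine everything), net of costs."""
    total_cost, best = 0.0, float("-inf")
    for value in options:
        total_cost += cost_per_look
        best = max(best, value)
        if threshold is not None and value >= threshold:
            return value - total_cost  # good enough: stop early
    return best - total_cost           # optimizer: saw them all

random.seed(0)
trials = [[random.random() for _ in range(50)] for _ in range(2000)]
satisfice = sum(search(t, threshold=0.9) for t in trials) / len(trials)
optimize = sum(search(t) for t in trials) / len(trials)
print(satisfice, optimize)
```

With these assumed costs the satisficer comes out ahead on average; set `cost_per_look` to 0 and the optimizer wins, which matches the intuition that satisficing pays off exactly when search itself is expensive relative to the spread of outcomes.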
The latest in a continuing series on immediate Bayesian updating in response to information. (Also viable as an example of an "unknown known," since he knew the counter-example but had not thought to apply it.)
But but fictional evidence!
Excellent point, but his prior was even weaker.
Presenter: [Snipping 75 minutes of reading without eye contact.] "...so as you can see, I have reconceptualized and reconsidered and -icized and -atized until this problem I talk about is clearly both like and unlike what Hobbes, Locke, Rousseau, Plato, and Arendt implied by choosing one word instead of a universe of other words in these few sentences no one else has really talked much about."
Theory Search Committee Member: "Well, certainly, but since we have clear answers about this philosophical problem deriving from Augustine's flirtation with manichaeism [snipping 15 minutes of bibliographic citations] ... what could we turn to in order to understand why what you have presented improves our understanding of the problem at hand?"
Audience Member In the Back: "Data."*
*This totally happened.
source
I'm not really getting anything from this other than "Mainstream philosophy, boo! Empiricism, yeah!"
Is there anything more to this post?
If you read the comment thread on the source, you see that it isn't actually philosophy boo, empiricism yeah, but rather an internecine conflict within academic political science.
Honestly, I did read the source, and it's very difficult to get anything useful out of it. The closest interpretation I could come to is "Theory (in what? Political science?) has become removed from other fields (in political science? in science?)".
In general, if context is needed to interpret the quote (i.e., it doesn't stand on its own), it's good to mention that context in the post, rather than just linking to a source and expecting people to follow a comment thread to understand it.
Sorry if this is overly critical, that was not my intention. I just don't get what the "internecine conflict" you are referring to is.
-- Aristotle in The Nicomachean Ethics, pointing out entangled truths and contagious lies
"soon" can vary quite a bit, depending on what is false. Following the link, I'm skeptical of "From the study of that single pebble you could see the laws of physics and all they imply." Specifically, I'm skeptical that one can deduce the parts of the laws of physics that matter under extreme conditions (general relativity, physics at Planck-scale energies) by examining the behavior of matter under benchtop conditions, at achievable levels of accuracy. The motivation for building instruments like the LHC in the first place is that they allow probing parts of physical laws which would otherwise produce exceedingly small effects or exceedingly rare phenomena.
The tricky part is the "achievable levels of accuracy". It would probably be possible for, say, Galileo to invent general relativity using the orbit of Mercury. But from a pebble, you would need VERY precise measurements, to an absurd level.
Richard Feynman on the Challenger incident
Dupe (twice).
Ah, crap. So, how does this work, exactly? Should I remove my comment?
If you hit the 'retract' button (it's the circle with the diagonal line through it), then the post will have a strikethrough and the karma will be locked where it is now, and that's what people typically do. In the future, do a search for quotes before you post them (but keep posting quotes!).
All right, cool, thanks. (I did actually search through the site to see if there were any repeats, but I guess I wasn't thorough enough in my search!)
Teacher: So if you could live to be any age you like, what would it be?
Boy 2: Infinity.
Teacher: Infinity, you would live for ever? Why would you like to live for ever?
Boy 2: Because you just know a lot of people and make lots of new friends because you could travel to lots of countries and everything and meet loads of new animals and everything.
--Until (documentary)
http://mosaicscience.com/extra/until-transcript
From the same source:
:|
hate to break it to you, kid...
While this is on My Side, I still have to protest trying to sneak any side (or particular (group of) utility function(s)) into the idea of "rationality".
To be fair, while it is possible to have a coherent preference for death far more often people have a cached heuristic to refrain from exactly the kind of (bloody obvious) reasoning that Boy 2 is explaining. Coherent preferences are a 'rationality' issue.
Nothing in the quote prescribes the preference; it merely illustrates reasoning that happens to follow from having preferences like those of Boy 2. If Boy 2 were saying (or implying) that Boy 1 should want to live to infinity, then there would be a problem.
[in the context of creatively solving a programming problem]
"You will be wrong. You're going to think of better ideas. ... The facts change. ... When the facts change, do not dig in. Do it over again. See if your answer is still valid in light of the new requirements, the new facts. And if it isn't, change your mind, and don't apologize."
-- Rich Hickey
(note that, in context, he tries to differentiate between reasoning with incomplete information, which you don't need to apologize for - just change your mind and move on - and genuine mistakes or errors)
Nyan Sandwich
The truth of a proposition may not be a function of time or place, but which proposition a natural-language sentence states is.
I don't think I understand this quote. What is a "temporally contingent worldview"? It can't simply mean any worldview that wasn't widely held in the past, because that would mean distancing oneself from pretty much all science. What, then?
Also, while I agree that the truth of a statement (that doesn't include spatio-temporal indexicals, either implicit or explicit) is not a function of time or place, widespread knowledge of a true statement usually varies with time and place. Not saying this justifies adoption of a temporally contingent worldview (since I'm not sure what is), but there does seem to be a bit of a non-sequitur in that quote.
Without doing much in the way of research, which would spoil the game, I think the quote is urging people not to privilege the beliefs of the culture they live in. For example, many popular beliefs of the 1900's are clearly incorrect when viewed in hindsight; the logical conclusion is that, in a hundred years, many popular beliefs today will be seen as clearly incorrect by those future generations.
I can think of a few likely candidates off the top of my head. And sorry for the sesquipedalian loquaciousness. I keep trying to stop, but I can't!
Arthur Schopenhauer, letter to Johann Goethe, 1819.
If I may attempt to summarize:
"I remember reading of a competition for a paper on resolution of singularities of surfaces; Castelnuovo and Enriques were on the committee. Beppo Levi presented his famous paper on the resolution of singularities for surfaces.
Enriques asked him for a couple of examples and was convinced; Castelnuovo was not. The discussion got heated. Enriques exclaimed 'I am ready to cut off my head if this does not work', and Castelnuovo replied 'I don't think that would prove it either.'"
-- Angelo Vistoli, mathoverflow
Of course, if bad proofs lead to heads being cut off, then there would probably be fewer bad proofs. (I take it the point here is not that Castelnuovo had any doubts about whether Enriques was being honest about believing the result or had come to his belief on flimsy grounds (which is usually not something one can take for granted...), but that he understood this and was interested in finding an explicit formal proof of the result.)
I think adding the author name in addition to "mathoverflow" would make sense.
Baron, Thinking and Deciding
One of the major symptoms of my stroke was seriously truncated working memory, and I spent months both training it back and learning to work around the limitations of it.
So I agree that there are strategies that can overcome the limits of working memory, though I wouldn't describe them as "thinking more"... it was more like saving state externally on a regular basis, and developing useful habits of interacting with that saved state. More generally, it's not a question of doing the same thing for longer; it's a question of doing different things that end up taking longer. It's "thinking differently, for longer."
That being said, though, I cannot begin to describe how much smarter I felt (and seemed) when the damage began to heal and I could start doing stuff in my head again.
It's a bit unclear without the context, but what he means is that subjects should think more about the task and realize that they need to, e.g. use a pen and paper.
Calvin Coolidge, Inaugural Address, 1924.
-- Dirk Gently's Holistic Detective Agency, Douglas Adams
That seems like a failure of noticing confusion; some clear things are actually false.
No observation is false. Any explanation for a given observation may, with finite probability, be false; no matter how obvious and inarguable it may seem.
This is one of those things that seems like it ought to be true, if only humans weren't so human. I've seen enough attempted bug reports based on events which - upon going through the logs - never actually happened, to disabuse me of the notion. Certainly some class of "false observations" might be better thought of as "false explanations", but sometimes people are just plain wrong about what they saw.
That's trivially not true -- consider e.g. measurement error.
OK, let's consider measurement error.
I have, let's say, a weight.
It actually masses 50 g.
I put it on a scale and observe that the reading on the scale is "51g."
On your account, is my observation false?
If so, does your judgment change if it's a standard weight that I'm using to calibrate the scale?
Your observation of the reading on the scale is true, of course. Your observation that the weight is 51 grams is false.
The distinction between accuracy and precision is relevant here. I am assuming your scale is sufficiently precise.
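A quick sketch of that distinction (all numbers hypothetical): a scale with a systematic 1 g miscalibration is inaccurate, while small reading-to-reading scatter means it is still precise:

```python
import random

TRUE_MASS = 50.0   # grams: the calibration weight's actual mass (assumed)
BIAS = 1.0         # miscalibration: systematic error (an accuracy problem)
NOISE_SD = 0.01    # scatter between readings (a precision problem)

def read_scale():
    """One reading from a biased but precise scale."""
    return TRUE_MASS + BIAS + random.gauss(0.0, NOISE_SD)

random.seed(1)
readings = [read_scale() for _ in range(1000)]
mean = sum(readings) / len(readings)
spread = (sum((r - mean) ** 2 for r in readings) / len(readings)) ** 0.5
print(f"mean reading {mean:.2f} g, spread {spread:.3f} g")
```

Every individual reading here is a true observation of what the display shows; the false step is the inference from a precise-but-biased reading to "the weight masses 51 g".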
No, it does not. I am using "false" in the sense of the map not matching the territory. A miscalibrated scale doesn't help you with that.
"This weight masses 51 grams" is not an observation, it's a theory attempting to explain an observation. It just seems so immediate, so obvious and inarguable, that it feels like an observation.
I feel this leads into a rabbit hole where everything beyond photons striking the retina becomes a "theory".
Isn't that itself a theory to explain our qualia of vision? If, for example, some versions of the simulation hypothesis were true, even photons and eyes would be a false map, though a useful one.
I think this "rabbit hole" is basically reality. In other words "there is a physical world which we see and hear etc" is a theory which is extremely well supported by our observations. Berkeley's explanation that there is no physical world, but God exists and is directly causing all of our sensations is an alternate theory, although a rather unlikely one.
What evidence led you to this conclusion?
Yeah, this is basically why probability matters.
Hey hallucinations are totally a thing.
The confusion here has nothing to do with the meaning of "false," or the distinction between accuracy and precision.
If I'm using a known 50-g weight to calibrate a scale, and I look at the scale reading (which says "51g"), and thereby conclude that the scale is off by 1g, I don't think you're at all justified in concluding that I've observed that the weight is 51g.
I mean, I agree that if I had made such an observation, it would be a mistaken observation.
But I don't agree that I made any such observation in the first place. For example, if you asked me after weighing the weight "What is the mass of the weight?" I would most likely answer "50g," because being able to say that with confidence is the whole point of using standard-mass calibration weights in the first place.
I am confused. In your example what are you saying your observation is, and do you consider it true or false? Also, what do you consider "known" before the observation?
I observe that the reading on the scale is "51g," as I said in the first place.
Yes. True.
All kinds of things. In the case with a standard 50g calibration weight, that includes the mass of the weight.
This is getting stuck in the morass of trying to distinguish between observations and interpretations. I don't particularly want to discuss the philosophy of qualia.
My point is much simpler. It's quite common for data points which everyone calls "observations" to be false. Trying to fix that problem is called cleaning the data and can be a huge hassle. In practical terms, if you get a database of observations you cannot assume that all of them are true.
That may be, but if you label them 'impossible' and dismiss them, you won't gather more evidence to prove it. And if something you consider impossible has actually happened, you're missing an opportunity to improve your model significantly.
This is in fact what happens in-context. With a preposterously-detailed description of observable events (via magic hypnosis; I didn't say the novel made sense), Gently concludes that something has happened which could not have happened as described, and that the only explanation which would explain the results involves time travel; the other person says that it's impossible, to which Gently replies this.
Yeah, I feel like in real world situations, hypothesizing time travel when things don't make sense is not likely to be epistemically successful.
Wasn't there a proverb about generalizing from fictional evidence? Especially from fiction that intentionally doesn't make sense?
Generalization from fictional evidence
I don't think the quote is talking about "hypothesizing" anything; I read it more as "You have to update on evidence whether that evidence fits into your original model of the world or not". Instead of "hypothesizing time travel when things don't make sense", it'd be more like a stranger appears in front of you in a flash of light with futuristic-looking technology, proves that he is genetically human, and claims to be from the future. In that case it doesn't matter what your priors were for something like that happening; it already happened, and crying "Impossible!" is as illegal a move in Bayes as moving your king into check is in chess.
Not that such a thing is likely to happen, of course, but if it did happen, would you sit back and claim it didn't because it "doesn't make sense"?
Yes. And then I would go see a psychologist. Because I find it more likely that I'm losing my grip on my own sanity than that I've just witnessed time travel.
Alright, so you bring this alleged time traveler with you to visit two or three different psychologists, all of whom are appropriately surprised by the whole 'time travel' thing but agree that you seem to be perceiving and processing the facts of the situation accurately.
Furthermore you have a lot of expensive tests run on the health and functionality of your brain, and all of the results turn out within normal limits. Camera-phone videos of the initial arrival are posted to the internet and after millions of views nobody can credibly figure out how it could have been faked. To the extent that introspection provides any meaningful data, you feel fine. In short, by every available test, your sanity is either far beyond retrieval down an indistinguishably perfect fantasy hole, or completely unmarred apart from perhaps a circumstantially-normal level of existential anxiety.
Now what?
Then I accept that there's a time traveler. The evidence in this second situation is quite a bit stronger than a personal observation, and would probably be enough to convince me.
Well, the insanity defense is always a possibility, but then again, you have no proof that you're not insane right now, either, so it seems to be a fully general counterargument that can apply at any time to any situation. Ignoring the possibility of insanity, would you see any point in refusing to update, i.e. claiming that what you just saw didn't happen?
It's always a possibility that I'm insane, but normally a fairly unlikely one.
The baseline hypothesis is (say) p = 0.9999 that I'm sane, p = 0.0001 that I'm hallucinating. Let's further assume that if I'm hallucinating, there's a 2% chance that hallucination is about time travel. My prior is something like p = 0.000001 that time travel exists. If I assume those are the only two explanations of seeing a time traveler (i.e. we're ignoring pranks and similar), my estimate of the probability that time travel exists would shift up to about 33% instead of 0.0001% -- a huge increase. The smart money (2 to 1) is still on me hallucinating though.
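Taking the two hypotheses as exhaustive, the update is a one-line application of Bayes' theorem. A quick sketch with the numbers stated above (the likelihood of the observation given real time travel is my own assumption; set it lower and the posterior shrinks accordingly):

```python
# Bayes update for "I just saw what looks like a time traveler",
# treating hallucination and real time travel as the only explanations.
p_hallucinating = 1e-4
p_obs_given_hallucination = 0.02  # chance a hallucination is about time travel
p_time_travel = 1e-6
p_obs_given_time_travel = 1.0     # assumption: if time travel exists, you'd see this

joint_hallucination = p_hallucinating * p_obs_given_hallucination  # 2e-6
joint_time_travel = p_time_travel * p_obs_given_time_travel        # 1e-6

posterior_time_travel = joint_time_travel / (joint_time_travel + joint_hallucination)
print(round(posterior_time_travel, 3))  # 0.333 -- a huge jump from the 1e-6 prior
```

Note that almost all of the work is done by the rarity of hallucination, not by the plausibility of time travel: the posterior is just the ratio of the two tiny joint probabilities.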
If you screen out the insanity possibility, and any other possibility that gives better than 1 in a million chances of me seeing what appears to be a time traveler with what appears to be futuristic technology, yes, the time traveler hypothesis would dominate. However, the prior for that is quite low. There's a difference between "refusing to update" and "not updating far enough that one explanation is favored".
If I was abducted by aliens, my first inclination would likewise be to assume that I'm going insane -- this is despite the fact that nothing in the laws of physics precludes the existence of aliens. Are you saying that the average person who thinks they are abducted by aliens should trust their senses on that matter?
Ah. In that case, I think we're basically in agreement. To clarify: I only used the time travel as an example because that was the example that VAuroch used in his/her comment. I agree that even taking into account your observation of time travel, the posterior probability for your insanity is still much larger than the posterior probability for genuine time travel. You do agree, however, that even if you conclude that you are likely insane, the probability of time travel was still updated in a positive direction, right? It seems to me that Nominull (the person to whom I was originally replying) was implying that your probability estimate shouldn't change at all, because that's "clearly impossible"/"fictional evidence" or something along those lines. It is that implication which I disagree with; as long as you're not endorsing that implication, we're in agreement. (If Nominull is reading this and feels that I am mistaken in my reading of his/her comment, then he/she should feel free to clarify his/her meaning.)
Factually speaking, I think if you saw that happen, you would believe, regardless of your protestations now.
I don't think it's literally factually :-D
Realistically speaking?
Unfortunately this still suffers from the whole "Time Traveller visits you" part of the claim - our language doesn't handle it well. It's a realistic claim about counterfactual response of a real brain to unrealistic stimulus.
I think you're right. It's closer to, say... "serious counterfactually speaking".
I saw it as more of a warning about the limits of maps - when something happens that you think is impossible, then it is time to update your map, and not rail against the territory for failing to match it.
(Of course, it is possible that you have been fooled, somehow, into thinking that something has happened which has, in actual fact, not happened. This possibility should be considered and appropriately weighted (given whatever evidence you have of the thing actually happening) against the possibility that the map is simply wrong.)
If you've been fooled, there's still no point to calling it impossible, given that you're trying to find out what actually happened.
Hmmm.
If you were to tell me, for example, that you had seen a man flying through the air like Superman, then I think I could reasonably call that "impossible" and conclude that you were lying. (If I happened to be in Metropolis at the time, then I might soon be proven wrong - nonetheless, the conclusion that you are lying is significantly more probable than the conclusion that someone has suddenly developed the power of flight).
On the other hand... if you were to hold an object, and then let go, and that object were to fall up instead of down, then calling that "impossible" would be useless; I have seen the object fall up, I can see it there on the roof, I can walk under it. (And it is, indeed, not impossible; the object could be a helium balloon, or you might have concealed a powerful magnet in the roof and used a metal object).
...hmmm. I think the difference here is that in the first case, the thing has not clearly happened; I merely have an eyewitness report, which is easily forged, to say that it took place. In the second case, I have far more data to show that the object really did fall upwards, and I can even (perhaps with the aid of a ladder) retrieve the object and drop it myself, confirming that it continues to fall upwards; it has clearly happened, calling it "impossible" is indeed futile, and the only question is how.
I always feel a bit bad for Vizzini. His plan is very well thought-out and sensible; he's just in the entirely wrong genre for those qualities to be remotely relevant to its success.
It doesn't help that up to that point, the genre looks like one where it should work. From the characters' place in the timeline it might be clearer, but from ours it isn't.
Well, we can extract a life lesson from this: make sure that having well thought-out and sensible plans is actually relevant to success in the context you're operating in :-/ Go meta if needed :-)
But the map is the map...
Lampshading mysterious answers:
-- Dirk Gently's Holistic Detective Agency, Douglas Adams
Thomas W. Myers in Anatomy Trains - Page 3
-- Windmill, PartiallyClips
A good one, but a duplicate.
-- Science, Common Sense and Reality by Howard Sankey
For those curious, the paper uses Arthur Eddington's two-tables metaphor, which is also nice for illustrating this:
I find it entertaining that no matter how weird the deep scientific explanation is, that explanation can only be developed by scientists who have a naive sensory relationship with their instruments. They have to handle the instruments (or the computer controls) as though their hands and tools are made of solid stuff moving at easy-to-perceive speeds.
That did cause some problems with quantum physics, when they assumed that their measuring equipment along with the scientists themselves weren't getting stuck in quantum superposition.
Brandon Sanderson
Has anyone been keeping a reading list selecting exclusively for heroes with awesome schemes?
The outline of the Hero's journey calls for the story to begin with the hero in a mundane situation of normality.
Sounds to me like the cliche isn't always positive.
That's often true, but there are counter-examples, like my all-time favorite: the Foundation cycle. In it, especially at the beginning (the Foundation novel and the prequels), it's truly the heroes who are doing something awesome - the Foundation and everything associated with it - and the villains who try to stop them (and even that is more complicated/interesting than a simple "villain").
It's also often the case in Jules Verne's fiction, or in the rest of "hard sci-fi", whether it's about transhumanism (Permutation City, for example) or about planetary exploration.
The trope is Villains Act, Heroes React, and the Foundation stories don't actually defy this as far as I can recall.
It does at various points of the saga; some examples I can give easily, others are spoilers, so I'll ROT13 them.
In the first volume and the prequels, it's Hari Seldon who tries to develop psychohistory and set up the Foundation, and various "villains" react to that. It's true that afterwards the Foundation is mostly reacting to Seldon Crises, but those crises are part of Seldon's Plan (so, of the hero planning awesome things ahead).
In the last volume, Foundation and Earth, it's clearly the heroes who start their own quest to find Earth again.
Now the spoiler parts (ROT13):
Va gur cerdhryf vg'f pyrneyl Qnarry jub gevrf gb chfu Fryqba gb qrirybc cflpubuvfgbel, naq Qnarry vf gur erny "ureb" bs gur rkgraqrq Sbhaqngvba-Ebobg plpyr.
Va Sbhaqngvba'f Rqtr, juvyr gur znva ureb vf vaqrrq ernpgvba gb orvat chfurq ol inevbhf punenpgre, vg'f abg ivyynvaf jub ner cynaavat gur jubyr riragf, ohg Tnvn, jub vf n cebqhpg bs Qnarry, fb ntnva, bs gur erny "nepu ureb" bs gur fntn.
There are other similar examples in other parts of the cycle, but less obvious ones.
A counterexample to the initial claim, which is probably more true of epic fantasy than of fiction generally: In Ayn Rand's fiction, it is indeed the heroes who have great and awesome schemes; the villains just want to wet their beaks, or to stop people from doing great and awesome things, depending on how villainous they are.
It's not clear to me that this is a counterexample. Ayn Rand's fiction strikes me as mediocre in general, but what strength it has seems to flow from following this principle.
[edit]I seem to have misread the parent, and am agreeing with it.
At least one of us is misreading the other's comment: I was suggesting Rand's fiction as a counterexample to
which seems to agree with, not be contradicted by, your "flow[s] from following this principle".
Ah, yes. I missed the "initial claim" bit, and thought you meant this was a counterexample to Sanderson's whole claim.
It might be more accurate to say that Ayn Rand's heroes start with grand and awesome schemes. There's a lot of speechifying in between, but in terms of action they always seem to degenerate into some form of "screw you guys, I'm going home" by the end.
I haven't read it for a long time, but I remember thinking that the first third of Atlas Shrugged is a much better book than the whole thing -- because up to that point, it's a novel about building something great in the face of adversity, and after that the adversity wins and it becomes a novel about spite and destruction on all sides. Also because it's way too long for its plot, but never mind that.
The dialogues in the film versions of Atlas Shrugged always felt bland and lame to me until I realized that the "good ones" were saying their lines as "good ones." When I read the book, I felt instinctively drawn to imagining the "good ones" saying their lines as "villains." When you read Dagny as the villain, her dialogues feel much more potent.
I see this for John Galt and to a lesser extent d'Anconia, and basically not at all from Dagny.
"If you think that I need your men more than they need me, choose accordingly. If you know that I can run an engine, but they can't build a railroad, choose according to that. Now are you going to forbid your men to run that train?"
"I didn't say we'd forbid it. I haven't said anything about forbidding. But... but you can't force men to risk their lives on something nobody's ever tried before."
"I'm not going to force anyone to take that run."
That was the moment when I first sensed she was the villain. She knows the construction of the line involves uncertain and untested safety conditions, so she won't "force" anyone to work there, because she doesn't need to: she knows the workers need the job anyway, and they have actually very little choice. You can clearly feel the implied manipulation behind her statement.
I remember reading that section entirely differently, but it's been a few years and so my memory might be off. I got the feel that people were jockeying for the honor of being on the first run, because it was exciting, and so employing force to find workers is entirely unnecessary.
Really? Perhaps I should reread at least some of Atlas Shrugged from that angle, but I don't see how wanting to run a railroad competently can be read as villainous.
Pretend to be a radical environmentalist or something.
-- Bill James, American baseball writer and statistician.
That strikes me as really ... odd.
To whom is the advice addressed? If something is actually untrue, and one has determined it to be untrue, then the task of being skeptical about it is finished.
I could probably find a loophole in the preceding statement, but it couldn't possibly be what Bill James was referring to.
As for directing skepticism at [claims depending upon] things that are difficult to measure, well that seems like one step away from directing skepticism at claims depending on little evidence. Which is surely what we want to do. Again, there's a loophole, but clearly not something Bill James was trying to point out.
Scepticism is directed not at things, but at claims. And claims about things difficult to measure should face increased scepticism.
Interesting position! I can't speak for James, but I want to engage with this. Let's pretend, for the scope of this thread, that I made the statement about the proper role of skepticism.
I'm happy to endorse your wording. I agree it's more precise to talk about "claims" than "things" in this context.
Quick communication check. When you say "increased" you're implying at least two distinct levels of skepticism. From your assertion, I gather that difficult-to-measure claims like "there exist good leaders, people who can improve the performance of the rest of their team" will face your higher level of skepticism.
Could you give me an example of a claim that faces your lower level of skepticism?
Well, I'm actually treating scepticism as a continuous variable, let's say defined on non-negative real numbers for simplicity, where 0 means "I Believe!" and some sufficiently high number means "You're lying".
"It's raining outside"
"This thing weights five pounds"
"Free-falling objects start to accelerate by about 9.8 m/s/s"
-- a member of the scientific collaboration I'm in.
Also more than have died from UFAI. Clearly that's not worth worrying over either.
I'm not terrified of Ebola because it's been demonstrated to be controllable in fairly developed countries, but as a general rule this quote seems incredibly out of place on Less Wrong. People here discuss the dangers of things which have literally never happened before almost every day.
And for all the scaremongering over so-called 'dinosaur killer' asteroids, total casualties are very low (even including Chelyabinsk)!
Marriage to Kim Kardashian is not contagious. The danger of Ebola is not to be measured by how many it has killed, but how many it may kill.
If you prefer another comparison, here is one.
Drone strikes aren't contagious either. (Come to think of it, is the original quote actually true? One U.S. citizen notably died of Ebola in the U.S. How have those working with Ebola victims in Africa fared?)
The point being that the original quote and this one are nonsensical comparisons. The only way for people in the U.S. (whether they are citizens or not) to be safe from Ebola is for people with Ebola to be prevented from entering; if found to have entered, to be isolated; if found to have been contagious before isolation, for their contacts to be found. I gather from the news that this is, more or less, being done, in spite of people protesting, in effect, "we are safe, therefore precautions are unnecessary".
But when the people are safe, they do not see the use of the things that keep them safe.
(Sorry, coming to this thread rather late.) Is, or was, anyone actually saying anything that amounted to "we are safe, therefore precautions are unnecessary"? What I've heard people saying is more like "we are safe enough with our current level of precautions, therefore such-and-such an extra precaution is unnecessary". Or "... therefore there is no need for us to panic about the danger we face from Ebola".
Not that I can point to. I may just be pattern-matching.
Which pattern-matches to raise the question, do people saying that know what the current precautions are?
If there's good enough evidence that we're safe enough as we are, I think it's possible to say it without knowing what the current precautions are. (Just as someone can say "my computer is fast enough for what I use it for" even if they have no idea of its clock speed, memory latency, instruction set architecture, etc.)
I know what I would expect to observe if my computer weren't fast enough (even in the absence of looking at technical specs), but I don't know what I would expect to notice if I were safe in the absence of actually looking at the precautions that are being taken.
The closest thing I can come to that is observing that nothing disastrous has happened yet, but that's not especially well-correlated with actual safety.
So... what kind of evidence are you envisioning here?
I'm envisioning observing that nothing even very bad has happened yet, which I think is in fact pretty well correlated with actual safety.
It's not the same thing, for sure. But it's probably all we have, on any side of the debate, and it seems to me to support the "no need for panic" side better than the "lock down all the borders and quarantine everyone arriving from Africa" side.
OK. Thanks for clarifying.
The classic counter-example involves a turkey in the middle of November. Nothing very bad has happened to it ever -- must be very safe, then...
This has the Chesterton's fence problem. What do you mean by "our current level of precautions"? Do they include the existing provisions for quarantine in case of emergencies?
They include whatever is being done now. Which appears to be something like: don't try to block or delay entry from affected countries wholesale; get people arriving from places affected by Ebola to monitor themselves for a while after travelling and take appropriate action if they suspect infection; etc.
This all seems to be working OK.
Of course the situation could change in ways that justified large-scale quarantining, but I'm not aware of any reason to expect that it will.
Patrick Sawyer was a US citizen. Skimming http://en.wikipedia.org/wiki/Ebola_virus_epidemic_in_West_Africa doesn't reveal any others, but we only need one more to tie with Kim.
Edit: actually, it looks like Duncan wasn't a US citizen.
They kinda are :-D First by physical proximity at the moment of the strike (you get to be called "collateral damage"), and second, via the "association with suspicious persons" method.
Are you, by any chance, looking for absolute safety? It tends to be very expensive to achieve and even then fails often enough.
If we are talking about "driving the risks from Ebola to the general background risk level", well, at the moment it's well below that level.
IAWYC, but in general the right thing to do is to reduce the risk until the marginal cost of reducing it more exceeds the disutility of what one is risking: for example, if I can spend one cent to reduce the probability I'll die tomorrow by 1e-7 (e.g. by not being as much of a jackass while driving) I should do so, even though the general background risk level (according to actuarial tables for my gender, age and province) is more than an order of magnitude larger.
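The rule in this comment can be made explicit: take a precaution exactly when its cost is below the risk reduction times the value placed on what's at stake. A minimal sketch (the dollar value placed on a life here is purely illustrative, not from the comment):

```python
# Marginal decision rule: take a precaution iff its cost is below the
# expected loss it averts (risk reduction times value at stake).
def worth_taking(cost, risk_reduction, value_at_stake):
    """Return True if the expected loss averted exceeds the cost."""
    return risk_reduction * value_at_stake > cost

# The comment's example: one cent to cut tomorrow's death risk by 1e-7.
# The break-even point is cost / risk_reduction = $0.01 / 1e-7 = $100,000,
# so with any valuation above that, the cent is worth spending.
print(worth_taking(cost=0.01, risk_reduction=1e-7, value_at_stake=1_000_000))  # True
```

This also captures why the background risk level is irrelevant here: the rule compares the marginal cost of this reduction against the loss it averts, not against the total risk already being run.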
Not necessarily. The reduction may have positive value in absolute terms, but carry the opportunity cost of preventing you from devoting those resources to more valuable risk reductions.
I don't think you've just disagreed. When I say something has a marginal cost of $2.50, that doesn't mean I'm considering the sadness inherent in having fewer shiny metal discs and green pieces of paper, it means there's some opportunity cost which that money could have afforded which I would instead have to forgo.
Theoretically. In practice you're unlikely to be able to evaluate the risks with the necessary accuracy.
Are you saying that because precautions are in place, the risk is being kept below that level, or that because the risk is below that level, precautions need not be taken? The first is fine, the second is not.
It's a feedback loop: observe the current state and the dynamics, adjust as needed.
I hope not. Because Richard's proposal doesn't provide that. Especially when 'drone strikes' have already been brought up in conversation. Sure, most of the remaining risks would sound about as realistic as a plot from a season of 24 but as you say this is a threat well below general background risk level so implausibility is expected.
As far as we know! Perhaps it simply has a long incubation period, and transitive polyamory will be legally recognized some time in the 2020s.
Hmm. Will I become a Mormon before or after I am married to Kim Kardashian?
--Bogdan M (emphasis mine)