Rationality Quotes February 2013
Another monthly installment of the rationality quotes thread. The usual rules apply:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments or posts from Less Wrong itself or from Overcoming Bias.
- No more than 5 quotes per person per monthly thread, please.
Comments (563)
-- Scott Sumner (talking about Italian politicians when the EU controls their monetary policy, but it generalizes)
-- Chad Fowler (from The 4-Hour Body)
(Joseph Heath & Andrew Potter, The Rebel Sell)
Sun Tzu on establishing a causal chain from reality to your beliefs.
Dupe.
"We're even wrong about which mistakes we're making."
-Carl Winfeld
William Deresiewicz
The whole speech is worth reading as one giant rationality quote
Not bad, although it seems to equate originality with goodness a little too much.
-- From the final screen of Call of Cthulhu: The Wasted Land
...Hooray for the phygists?
Well, there are lots of cultists running around trying to summon an Elder God. This will almost certainly end in disaster. The options we have to fight this are: a) We can try to stop all Elder-God-summoning related program activities or b) We can try to get there first and summon a Friendly Elder God.
Both a) and b) are almost impossibly difficult and I find it hard to decide which is less impossible.
-- C. S. Lewis, Out of the Silent Planet
Linus Pauling
It's necessary, but not sufficient.
The example in the comic is not a good one. Of the choices on the board, E being proportional to mc^2 is the only option where the units match. You only need to have that one idea to save yourself the trouble of having lots of other ideas.
It's a joke, which I assume is intended for a mostly non-physicist audience.
We demand complete rigour from all forms of levity! The unexamined joke is not worth joking!
Yes, but also being able to tell which of those ideas are good is even better.
From the alt-text in the above-linked comic:
Karl Popper
There's a failure mode associated with this attitude worth watching out for, which is assuming that people who disagree with you are being irrational and so not bothering to check whether you have arguments against what they say.
Bryan Caplan
This sounds almost horrifically dystopian, in a sort of Friendship is Optimal way.
Ozy Frantz - Brain Chemicals are not Fucking Magic
-- Martin Fowler
--Jovah's Angel by Sharon Shinn
@slicknet
With apologies for double-commenting: "Don't assume others are ignorant" is likely to be read by a lot of people (including myself at first) as "Aim high and don't easily be convinced of an inferential gap". Posts on underconfidence may also be relevant.
I would somewhat agree with this if the phrase "making mistakes" was removed. People generally have poor reasoning skills and make non-optimal choices >99% of the time. (Yes, I am including myself and you, the reader, in this generalization.)
In most situations there are multiple people other than yourself who each think the others are dumb, ignorant and making mistakes. Don't assume that the one you happen to be interacting with at the moment is right by default.
If we are in the business of making assumptions, there is no dichotomy; you can just as well consider both hypotheticals. (Actually believing that either of these holds in general, or in any given case where you don't have sufficient information, would probably be dumb, ignorant, a mistake.)
This misses the point a bit due to an equivocation on "assume". In ordinary discourse, it usually means "assume for the purpose of action until you encounter contrary evidence". That's very different from the scientist's hypothetical assumptions that are made in order to figure out what follows from a hypothesis.
It's epistemically incorrect to adopt a belief "for the purpose of action", and permitting "contrary evidence" to correct the error doesn't make it a non-error.
I think what Creutzer means by "ordinary discourse" is everyday problems where you can't give a thought the time it deserves, when you don't even have five minutes by the clock to think about the problem rationally. In those cases it is better to rely on the heuristic "assume people are smart and some unknown context is causing problems" than on the heuristic "people who make mistakes are dumb". That said, heuristics are only good most of the time and may lead you into errors, as in this case.
In this case it is still technically an error, but merely attempting to be "less wrong" about a situation where you don't have time to be correct. Assuming the heuristic until you encounter contrary evidence (or until you have the time to think of better answers) follows closely the point of this website.
Using a heuristic doesn't require believing that it's flawless. You are in fact performing some action, but that is also possible in the absence of a careful understanding of its effect. There is no point in doing the additional damage of accepting a belief for reasons other than evidence of its correctness.
Exactly, thanks for the clarification.
I believe that this statement, while correct, misses the point of preemptive debiasing. Yvain said it better.
Also, consider the possibility that it is you who is dumb, ignorant, and making mistakes.
I don't consider it, I assume it.
But "dumb" and "ignorant" are not points on a line, they are relative positions.
To quote this bloke at a climbing gym I used to frequent "We all suck at our own level".
-Alex Tabarrok
One amusing aspect is that assuming the person is justified in their belief that their church/country is ethical, the above is a valid inference.
Not necessarily. You don't punish people based on their likelihood of being guilty but based on severity of their crime.
If torture is used as a tool to gain information instead of being used to punish, it's even more questionable whether the likelihood of being guilty correlates with the severity of the torture. The fact that someone decides to torture to get more information suggests that they have an insufficient amount of information.
If there's a 50% chance that a person has information that can prevent a nuclear explosion, you can argue that it's ethical to torture to get that information.
After the bomb has exploded and you know for certain who committed the crime, there's not much need to torture anyone.
An interrogator who tortures is more likely to get false confessions that implicate innocents. If he then goes and tortures those innocents, you see that people who torture are more likely to punish innocents than people who don't.
— Gaston Leroux
Only with very low probability.
and the human mind loves to find patterns even when the probabilities of the pattern being a rule are low. Coincidences are correlation.
[Footnote to: "This was a most disturbing result. Niels Bohr (not for the first time) was ready to abandon the law of conservation of energy". The disturbing result refers to the observations of electron energies in beta-decay prior to hypothesizing the existence of neutrinos.]
-David Griffiths, Introduction to Elementary Particles, 2008 page 24
Klingon proverb.
So it's true what they say! The opposite of a Klingon proverb is also a Klingon proverb...
-- Randall Munroe
Definitely a double, but I can't link the others right now.
I thought that unlikely, because it's from last week's XKCD What If?
Maybe Randall has said it before (or borrowed it from someone else).
Earlier posting
Faramir, from Lord of the Rings on lost purposes and the thing that he protects
Except that a non-overwhelming love of a useful art may help you become better in the art, even though you would switch to another if it helped you optimize more.
-- Noah Brand
I'd prefer if this quote ended with " ... and then I got done weeping and started working on my shoe budget," but oh wells.
Generally speaking, bigger problems tend to be cheaper to solve (i.e. solving them will yield more utilons per dollar); so if there is a painting in a museum that risks being sold, and there are people that risk dying from malaria, the existence of latter is a good indication that worrying about the former isn't the most effective use of a given amount of resources. (“Concentrate on the high-order bits” -- Umesh Vazirani.) But in this particular case, that heuristic doesn't seem to work (unless I'm overestimating the cost of prosthetics).
That's really the entire point of the original quote that this quote is making fun of. The difference between the original and this one is that the author of the second has not updated his baseline expectation that he should have shoes, and that something is wrong if he doesn't.
Our baseline expectations determine what we consider a "loss", in the prospect theory sense, so if seeing someone else's problem helps you reset your baseline, it actually is a way to help you stop weeping and start working on the budget, as it were. What we call "getting perspective" on a situation is basically a name for updating your baseline expectation for how reality "ought to be" at the present moment.
(That isn't a perfect phrasing, because English doesn't really have distinct-enough words for different sorts of "oughts" or "shoulds". The kind I mean is the kind where reality feels awful or crushingly disappointing if it's not the way it "ought" to be, not the kind where you say that ideally, in a perfect world, things ought to be in thus and such a way, but you don't experience a bad feeling about it right now. It's a "near" sort of ought, not a "far" one. Believing the future should be a certain way doesn't cause this sort of problem, until the future actually arrives.)
I agree that resetting your baseline is often important if you think that your lack of shoes is a soul-crushing awfulness. This quote is mainly arguing against the attitude that says "you have feet therefore your shoe problem is a non-problem, don't even bother feeling bad or working on it". It's comparatively very minor, but it should be fixed just like any other problem. This quote is arguing against resetting your baseline to the point where minor problems get no attention at all.
That may be, but the actual context of the quote it's arguing with is quite different, on a couple of fronts.
Harold Abbott, the author of the original 1934 couplet ("I had the blues because I had no shoes / Until upon the street, I met a man who had no feet"), wrote it to memorialize an encounter with a happy legless man, at a time when Abbott was dead broke and depressed. (Abbott was not actually lacking in shoes, nor the man only lacking in feet, but apparently in those days people took their couplet-writing seriously. ;-) )
Thing is, at the time he encountered the legless man (who smiled and said good morning), Abbott was actually walking to the bank in order to borrow money to go to Kansas City to look for a job. And not only did he not stop walking to the bank after the encounter, he decided to ask for twice as much money as he had originally intended to borrow. He had in fact raised his sights, rather than lowering them.
That is, the full story is not anything like, "other people have worse problems so STFU", but rather that your attitude is a choice, and there are probably people who have much worse circumstances than you, who nonetheless have a better attitude. Abbott wrote the couplet to put on his bathroom mirror, as an ongoing reminder to have a positive outlook and persist in the face of adversity.
Which is quite a different message than what Noah Brand's snarky quip would imply.
I think the problem that people are having with the quote is that it doesn't actually contain the full story, and when it is repeated outside that context, the meaning they get from parsing the words is "other people have worse problems so STFU", and it's not a good idea to go around repeating it if people are going to predictably lack the context and misinterpret it.
This. If only people realized that unpleasant facts do not cancel each other out, and pointing out one unpleasant fact in addition to another should never ever make us feel better, because it only leaves us in a worse world than we started out in. Compute the actual utilities. It's such a common and avoidable error.
I think both your comment and the quote are forgetting the instrumental purpose of crying and/or feeling bad.
I can't say I see your point. Mind explaining?
My guess: The purpose of crying is to make people around you more likely to help you.
So if you don't have shoes, there is a chance that crying in public will make someone give you money to buy the shoes. But if there is a person without feet nearby, your chances become smaller, because people will redirect their limited altruist budgets to that other person. Your crying becomes less profitable.
I think people just accidentally conflate keeping problems in perspective with the idea that the existence of bigger problems makes the small problems negligible and therefore equivalent to non-problems.
I've seen this happen with positive things too; sometimes you won't mind repeatedly doing small favors for someone and they start acting like you not minding means the favor is equivalent to doing nothing from your perspective, which is frustrating when your small but non-zero effort goes unacknowledged.
It's sort of like approximating sinθ as 0 for small angles. ^_^
Yep. Most people seem to behave as though the choice between spending $5 and spending $10 is a much bigger deal than the choice between spending $120 and spending $125, but if anything it's the other way round, because in the latter case you'll be left with less money. (That heuristic does have a point for acausal reasons analogous to these insofar as you'll have to make the first kind of choice much more often than the second, but people will still behave the same way in one-off situations.)
Another possible motivation for that heuristic: something that's a good buy for $5 might well be a bad buy for $10, but something that's a good buy for $120 is probably still a good buy for $125. If I find that a cheap item's twice the cost I thought it was, that's more likely to force me to re-do a utilitarian calculation than if I find an expensive item is 4% pricier than I thought it was.
Yes, but OTOH if I'm about to buy something for $125 it isn't that unlikely that if I looked more carefully I could find someone else selling the same thing for $120, whereas if I'm about to buy something for $10 it's somewhat unlikely that anyone else would sell the same thing for $5 (so looking around would most likely be a waste of time), and I'd guess these two effects would more-or-less cancel out.
I can often get a $10 good/service for $5 or less if I'm willing to delay consumption or find another seller (e.g. buying used books, not seeing films as soon as they come out, getting food at a canteen or fast food place instead of a pub or restaurant, using buses instead of trains). I might be atypical.
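The claim a few comments up, that the $120-vs-$125 choice is if anything the bigger deal because it leaves you with less money, falls out of any concave utility of wealth. A minimal sketch, assuming an illustrative starting wealth of $1000 and log utility (both assumptions, not from the thread):

```python
import math

W = 1000.0     # assumed starting wealth, for illustration only
u = math.log   # a standard concave utility-of-wealth function

# Utility at stake in each choice: final wealth after the cheaper
# option minus final wealth after the pricier option.
small = u(W - 5) - u(W - 10)     # the $5-vs-$10 choice
large = u(W - 120) - u(W - 125)  # the $120-vs-$125 choice

# Marginal utility is higher at lower wealth, so the same $5 gap
# matters slightly more in the expensive case.
print(small, large, large > small)
```

The effect is tiny at these numbers, which is consistent with the thread's point: the felt difference people report runs in the opposite direction and has to be explained by heuristics, not by the utility math.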
But if you look at it the other way around, then pointing out unpleasant facts about other people's condition (that don't apply to us) is equivalent to pointing out good facts about our condition, which should make us feel better, as it leaves us in a better world than we started out in.
"...And then I remembered status is positional, felt superior to the footless man, and stopped weeping."
Shoes aren't just about positional social status, are they? (I mean, the difference between a $20 pair of shoes and a $300 pair of shoes mostly is, but the difference between a $20 pair of shoes and no shoes at all isn't, is it?)
I've just come across a fascinatingly compact observation by I. J. Good:
This is a beautifully simple recipe for a conflict of interest:
Considering absolute losses assuming failure and absolute gains conditioned on success, an adviser is incentivized to give the wrong advice, precisely when:
You can see this reflected in a lot of cases because the gains to an advisor often don't scale anywhere near as fast as the gains to society or a firm. It's the Fearful Committee Formula.
Which is not nearly as common as the reverse, the Reckless Adviser Formula, when the personal loss to the adviser is so low and the potential personal gain is so high, they recommend adoption even when the expected gain for the company is negative.
In general, this is referred to as the principal-agent problem.
Note that the adviser's ethical problem also exists if L/V > p/(1-p) > l/v.
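Both directions of the inequality can be checked with a plain expected-value calculation: the firm should adopt when p/(1-p) > L/V, while the adviser personally prefers adoption when p/(1-p) > l/v. A minimal sketch with illustrative numbers (the specific values of p, V, L, v, l are assumptions, not from Good's original):

```python
def recommend(p, gain, loss):
    # Expected-value rule: adopt iff p*gain > (1-p)*loss,
    # i.e. iff p/(1-p) > loss/gain.
    return p * gain > (1 - p) * loss

p = 0.5           # assumed probability the invention succeeds
V, L = 10.0, 2.0  # firm's gain on success, loss on failure (l/v > p/(1-p) > L/V)
v, l = 1.0, 5.0   # adviser's personal gain and loss

firm_should_adopt = recommend(p, V, L)   # True: adoption has positive EV for the firm
adviser_prefers = recommend(p, v, l)     # False: the adviser is incentivized to reject
print(firm_should_adopt, adviser_prefers)
```

These numbers instantiate the Fearful Committee case (l/v > p/(1-p) > L/V); swapping which party bears the larger downside produces the Reckless Adviser case with the inequalities reversed.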
Is the order also inverted in the original?
Fixed.
I. J. Good's original, which I've somewhat abridged, explicitly specifies that there are no competitors who cause visible losses/gains after the invention is rejected.
To clarify, this is a summary of what you've excluded in your quote, not a response to the other case where the ethical problem exists, correct?
It's a summary of what I excluded - I had actually misinterpreted, hence my quote indeed was not a valid reply! The other case is indeed real, sorry.
Name three?
The success of Market-Based Management / Koch Industries appears to be due at least in part to their focus on NPV at the managerial level. You get stories like (from memory, and thus subject to fuzz) the manager of a refining plant selling the land the plant was on to a casino which was moving to the area, which he was rewarded for doing because the land the plant was on was more valuable to the casino than the company, even after factoring in the time lost because the plant was shut down and relocated. The corporate culture (and pay incentive structure) rewarded that sort of lateral use of resources, whereas a culture which compartmentalized people and departments would have balked at the lost time and disruption.
(Joseph Heath, The Efficient Society)
Heath is an excellent writer on economics/philosophy.
Francis Spufford, Red Plenty
(Sorry, I couldn't resist.)
Studies show that people who try to run behind a car frequently fail to keep up, while nobody who runs in front of a car fails more than once.
Randall Munroe, on updating on other people's beliefs.
The " every single person I know, many of them levelheaded and afraid of heights, abruptly went crazy at exactly the same time" scenario should be given some credence in human society; there is such a thing as puberty. The definition of puberty being " every single person I know abruptly went crazy at exactly the same time, including me".
Dilbert dunnit first!
(Seeing that strip again reminds me of an explanation for why teenagers in the US tend to take more risks than adults. It's not because the teenagers irrationally underestimate risks but because they see bigger benefits to taking risks.)
See also this Will_Newsome comment. (I incorrectly remembered that it said something like “If all your friends jumped off a bridge, would you jump too?” “If all of them survived, I probably would.”)
Let me just put the text string ‘xkcd’ in here, because I was going to add this if nobody else had, and it's lucky that I found it first.
Oh, and there's more text in the comic than what's quoted, and it's good too, so read the comic everybody!
Klingon proverb.
From this recent talk
Aubrey de Grey being an immortalist himself, I'm assuming the irony to be unintentional?
Haha, didn't occur to me until I read your comment, so there's one data point for you.
/clicks link, watches
... I can barely understand a single word this guy is saying. Is it just me or is the audio in that video really bad? I don't suppose it was transcribed anywhere?
I cannot express how true this is, at least not without a lot of swear words.
I'm confused. I thought that deathpigeon's quote was downvoted because it was anti-deathism and not rationality, but this quote is similar in that way and it has lots of upvotes. Was deathpigeon's quote actually downvoted because it incorrectly attributed a line to ASoIaF instead of Game of Thrones? Seriously?
Or perhaps there are more criteria (aesthetic, informational, other) by which these quotes may be judged than whether they are anti-death or not.
And that other quote is neither ASoIaF nor TV series, it's a misquotation.
I wouldn't think so, but I wasn't expecting five upvotes on my comment saying so, either. Maybe we really are that pedantic.
This is only incidentally anti-deathist, though; its substance has more to do with popular reactions to controversial ideas. Which doesn't seem all that shiningly rational to me either, but perhaps I'm missing something.
Or we all secretly love anti-deathist quotes, and only downvote them when they have no rationality content because we feel it's our duty, but when we see one that can be interpreted as slightly rationalist, we seize the excuse to upvote it. Or our liking for a quote based on its anti-deathism enhances our appreciation for its insight into rationality, via the affect heuristic.
-Joel Spolsky
And by the same author:
and
(because what counts after getting it out the door is how many people actually use it.)
That's Jeff Atwood. The quote is from Joel Spolsky. While the two both work together on Stack Exchange, they're different individuals.
-- Steve Jobs
(The Organization Formerly Known as SIAI had this problem until relatively recently. Eliezer worked, but he never published anything.)
And they ship the characters the fans want.
I would have quoted more, because on reading that out of context I was like “YOU DON'T SAY?”
Most people, when giving advice, don't optimize for maximal usefulness. They optimize for something like maximal apparent-insight or maximal signaling-wisdom or maximal mind-blowing, which are a priori all very different goals. So you shouldn't expect that incredibly useful advice sounds like incredibly insightful, wise, or mind-blowing advice in general. There's probably a lot of incredibly useful advice that no one gives because it sounds too obvious and you don't get to look cool by giving it. One such piece of advice I received recently was "plan things."
There's probably also a lot of useful advice that our minds filter out because it scans as obvious or trivial. Even when I'm trying to give maximally effective advice, I usually spend a lot of effort optimizing it for style; the better something sounds, the more people dwell on its implications and the likelier it is to stick. Fortunately, most messages leave plenty of latitude for presentation.
Alternately, you could try dressing simple advice up in enough cultural tinsel that it looks profound, as suggested here.
Well, a lot basic rationality literally seems to be about doing what is almost obvious but is hard to do because of bugs in your cognitive architecture. This reminds me of the following quote by Elon Musk in an interview where he was asked what he would say to new start-up founders:
If your service is down, it has no features.
And no bugs.
Well, there is one pretty major bug: That your service is not doing anything at all!
It has all the bugs. All of them.
(Well, not really. For instance, it doesn't have any security holes.)
If it bears any resemblance to a product at all, your own admin-level access constitutes a potential security hole.
It's a feature.
-- Screwtape, The Screwtape Letters by C.S. Lewis
I kind of wish people did use the future more, sometimes. For example, in Australia at the moment, neither major political party supports gay marriage. And beyond all the direct arguments for/against the concept, I can't help but wonder if they really expect, in 50 years' time, that we will live in a world of strictly heterosexual marriages. What are they possibly hoping to achieve? Maybe that reasoning isn't the best way to decide to actively do a thing, but it surely counts towards the cessation of resistance to a thing.
Being elected at some point in the next 3 years. They aren't trying to achieve anything related to homosexual marriages. They don't care.
Um, I know this is classic Hansonian "X is not about X" cynicism, but I doubt it's actually true of most politicians. Sure, the need to get elected skews their priorities, but they do have policy preferences, which they are willing to pursue at cost if necessary.
Here are a few things that have at one time or another been considered "obviously inevitable":
The spread of enlightened dictatorship on the Prussian model.
The spread of eugenics.
The control of the world economy by "rational" central planners.
My point is that you appear to be overestimating how well you can predict the future.
I don't think you really believe this argument. In particular if the success of something you opposed seemed inevitable, you'd still oppose it.
What I think is happening is that you support the "inevitable" outcome but are getting frustrated that the opposition just won't go away like they're "supposed" to.
FWIW, 20 years ago (when my now-husband and I first got together) I expected that I would live in a world of strictly heterosexual marriages all my life.
That didn't incline me to cease my opposition to that world.
So I can empathize with someone who expects to live in a world of increasing marriage equality but doesn't allow that expectation to alter their opposition to that world.
-Luc de Clapiers
—Yagyū Munenori, The Life-Giving Sword
Been making a game of looking for rationality quotes in the super bowl
"It's only weird if it doesn't work" --Bud Light Commercial
Only a rationality quote out of context, though, since the ad is about superstitious rituals among sports fans. My automatic mental reply is "well that doesn't work"
Well, but in the universe of the commercials, it clearly did, so long as you went to the appropriate expert.
Good observation. I will accept your correction: It's only weird if it doesn't work, and it doesn't work unless you're in Stevie Wonder's presence
Joke: a tourist was driving around lost in the countryside in Ireland among the 1 lane roads and hill farms divided by ancient stone fences, and he asks a sheep farmer how to get to Dublin, to which he replies:
"Well ... if I was going to Dublin, I wouldn't start from here."
Moral, as I see it anyway: While the heuristic "to get to Y, start from X instead of where you are" has some value (often cutting a hard problem into two simpler ones), ultimately we all must start from where we are.
S. T. Rev
Men in Black on guessing the teacher's password:
Zed: You're all here because you are the best of the best. Marines, air force, navy SEALs, army rangers, NYPD. And we're looking for one of you. Just one.
[...]
Edwards: Maybe you already answered this, but, why exactly are we here?
Zed: [noticing a recruit raising his hand] Son?
Jenson: Second Lieutenant, Jake Jenson. West Point. Graduate with honors. We're here because you are looking for the best of the best of the best, sir! [throws Edwards a contemptuous glance]
[Edwards laughs]
Zed: What's so funny, Edwards?
Edwards: Boy, Captain America over here! "The best of the best of the best, sir!" "With honors." Yeah, he's just really excited and he has no clue why we're here. That's just, that's very funny to me.
The scene in question.
That whole testing sequence is one of the best examples in film of how to distinguish what's expected of you from what's actually a good idea.
(Or in that specific case, what seems to be expected of you.)
-- Milton Friedman
This solution only works if you are in the special position of being able to make institutional design changes that can't be undone by potential future enemies. Otherwise, whose "right things" will happen depends on who is currently in charge of institutional design (think gerrymandering).
Then try to make it politically profitable to help sustain those changes you make. Make it so painfully obvious that the only reason to remove those changes would be for one's unethical gain that no politician would ever do so. The problem then though, is that people end up just not caring enough.
What you're describing is exactly the position of being able to make institutional design changes that can't be undone by potential future enemies. This position is "special" not only because the task is very difficult, but also because you have to be the first to think of it.
-- Bertold Brecht
(I'm always amused when people of opposite political views express similar thoughts on society.)
Also:
I think the Brecht quote is somewhat misleading. The problem is not that not enough people want/demand goodness, the problem is that it is too easy to profit by cheating without getting caught.
—Mike Sinnett, Boeing's 787 chief project engineer
Isn't the point of the article that Boeing may not have actually done at least the first two steps (design cell not to fail, prevent failure of a cell from causing battery problems)?
I am confused.
It's the point of the problem, anyway.
Sinnett is probably a very good designer, but the battery design was outsourced.
-- John C Wright
That reminds me of http://xkcd.com/690/.
Also:
-- Raymond Arritt
(Quoting this before dinner is making me hungry.)
Wikipedia may ultimately have to do one of two things, or both:
1) Provide better structure for alternate versions of contested ideas
2) Construct a practically effective demarcation between strictly factual domains, and anything more interpretive.
Such a demarcation will always be challenged; I don't see any way around that, but I'd also insist that it's necessary for our sanity. Suppose it were possible, maybe using a browser with links to a database, to try to "brand" (or give the underwriter's seal of approval to) those pages that provided straightforward factual assertions, unretouched photographs, and scans of original source texts (such as all newspapers of which a copy still exists), and to promote the idea that the respectability of any interpretive or ethical claim consists very largely in its groundedness, in showing links to the "smells like a fact" zone.
Several versions with explicit labeling of which viewpoint it represents would be a huge step in improving general information retrieval. Hypertext in general was obviously a huge leap, but the problem of presenting the evolution of a school of thought on a particular subject has not been solved satisfactorily IMO. Path dependence of various things is still among the information we regularly do not record/throw away. We should not be reliant upon brilliant synthesists taking interest in each subject and writing a well organized history.
-- Time Braid
Eckhart Tolle, as quoted by Owen Cook in The Blueprint Decoded
On scientists trying to photograph an atom's shadow:
Luke McKinney - 6 Microscopic Images That Will Blow Your Mind
Insultingly Stupid Movie Physics' review of The Core
See also the extra panel (hover over the red button) in yesterday's SMBC comic.
... I had not known about red buttons on SMBC.
roll d20... success on 'resist re-binge' check.
32 people in the same ten block radius simultaneously dying of malfunctioning pacemakers seems so tremendously unlikely, I can't imagine how one could even locate that as an explanation in a matter of seconds.
If I recall correctly, he also pointed out that the fact they had invited two experts on magnetic fields was also a strong clue.
Also from the review:
Unless the 32 people used the same, or very similar, pacemakers, and somebody forgot to say that.
Still sounds extremely unlikely. If a model of car has a particular design flaw, you'll expect to hear a lot of reports of that model suffering the same malfunction, but you wouldn't expect to hear that dozens of units within a certain radius suffered the same malfunction simultaneously. You'd need to subject them all to some sort of outside interference at the same time for that sort of occurrence to be plausible, and an event of that scale ought to leave evidence beyond its effect on all the pacemakers in the vicinity.
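The "extremely unlikely" intuition above is easy to make quantitative: independent failures concentrating in one narrow time window multiply together. A minimal sketch with assumed (purely illustrative) numbers:

```python
# Probability that 32 independently failing pacemakers all malfunction
# within the same one-minute window, under assumed illustrative numbers.
failure_rate_per_year = 0.01              # assumed per-device failure rate
minutes_per_year = 365 * 24 * 60
p_per_minute = failure_rate_per_year / minutes_per_year

# Independence makes simultaneous failure vanishingly improbable.
p_all_32_same_minute = p_per_minute ** 32
print(p_all_32_same_minute)
```

Any realistic per-device rate gives a probability far below anything that ever happens by chance, which is exactly why a common outside cause (interference hitting every device at once) is the only explanation worth locating.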
-- Magnificent Sasquatch
Devine and Cohen, Absolute Zero Gravity, p. 96.
So, uh, what's the explanation?
The story appears to be apocryphal. I've heard many versions of it associated with various famous scientists. The source quoted is a collection of jokes, with very low veracity. Additionally, there are no independent versions of the story anywhere on Google. By the way, the quoted date of Sommerfeld's death is also incorrect. I wonder if there even were (unpowered) ceiling fans in Munich's trolleys during that time.
Good point. Effects that don't exist don't need to be explained.
I'm not much of an engineer, but based on my understanding of their design from the description given, I can't see how they would even contribute to their alleged purpose.
It's an interesting story, but it might not be as silly as it sounds if one considers "ease of explanation" as a metric for how much credence one's model assigns to a given scenario. (Yes, I agree this is a hackneyed way of modeling stuff.)
Unfortunately, this seems to be the default way humans do things.
Well, the world is a complicated place and we have limited working memory, so our models can only be so good without the use of external tools. In practice, I think looking for reasons why something is true, then looking for reasons why it isn't true, has been a useful rationality technique for me. Maybe because I'm more motivated to think of creative, sometimes-valid arguments when I'm rationalizing one way or the other.
-Yevgeny Yevtushenko
I wonder if we'll ever learn to reconstruct people-shadows from other people's memories of them. Also, whether this is a worthwhile thing to be doing.
It's a little creepy the way Facebook keeps dead people's accounts around now.
Relevant: Greg Egan, "Steve Fever".
I imagine that depends on what we're willing to consider a "person-shadow".
Any thoughts on what your minimum standard for such a thing would be?
For example, I suspect that if we're ever able to construct artificial minds in a parameterized way at all (as opposed to merely replicating an existing mind as a black box), it won't prove too difficult thereafter, given access to all my writings and history and whatnot, to create a mind that identifies itself as "Dave" and acts in many of the same ways I would have acted in similar situations.
I don't know if that would be a worthwhile thing to do. If so, it would presumably only be worthwhile for what amount to entertainment purposes... people who enjoy interacting with me might enjoy interacting with such a mind in my absence.
I occasionally have dreams about people who have died in which they seem really real, where they're not saying stuff they've said when they were alive but stuff that sounds like something they would say. But it's not profound original thoughts or anything? So I think what I'm thinking is pretty close to what you're describing.
I guess if we can make one of these, then we could see how different people's mental models of that person were? Probably there is stuff in my mental model that I can't articulate! Stuff that's still useful information!
But maybe people will start using these instead of faking their deaths if they wanted to run away.
I've suspected -- though we're talking maybe p = 0.2 here -- for a while that our internal representations of people we know well might have some of the characteristics of sapience. Not enough to be fully realized persons, but enough that there's a sense in which they can be said to have their own thoughts or preferences, not fully dependent either on our default personae or on their prototypes. Accounts like your dreams seem like they might be weak evidence for that line of thought.
Authors commonly feel like the characters they write about are real, to various extents. On the mildest end of the spectrum, the characters will just surprise their creators, doing something completely contrary to the author's expectations when they're put in a specific scene and forcing a complete rewrite of the plot. ("These two characters were supposed to have a huge fight and hate each other for the rest of their lives, but then they actually ended up confessing their love for each other and now it looks like they'll be happily married. This book was supposed to be about their mutual feud, so what the heck do I do now?") Or they might just "refuse" to do something that the author wants them to do, and she'll feel miserable afterwards if she forces the characters to act in the wrong way nevertheless. On the other end of the spectrum, the author can actually have real conversations with them going on in her head.
I'm not much of an author, but I've had this happen.
My mental character-models generally have no fourth wall, which has on several occasions led to them fighting each other for my attention so as to not fade away. I'm reasonably sure I'm not insane.
That sounds mystical.
Nah, this doesn't require any magic; just code reuse or the equivalent. If the cognitive mechanisms that we use to simulate other people are similar enough to those we use to run our own minds, it seems logical that those simulations, once rich and coherent enough, could acquire some characteristics of our minds that we normally think of as privileged. It follows that they could then diverge from their prototypes if there's not some fairly sophisticated error correction built in.
This seems plausible to me because evolution's usually a pretty parsimonious process; I wouldn't expect it to develop an independent mechanism for representing other minds when it's got a perfectly good mechanism for representing the self. Or vice versa; with the mirror test in mind it's plausible that self-image is a consequence of sufficiently good other-modeling, not the other way around.
Of course, I don't have anything I'd consider strong evidence for this -- hence the lowish p-value.
Relevant smbc.
So, in a way Batman exists when you imagine yourself to be Batman? Do you still coexist then (since it is your cognitive architecture after all)?
I'd say that of course any high level process running on your mind has characteristics of your mind, after all, it is running on your mind. Those, however, would still be characteristics inherent to you, not to Batman.
If you were thinking of a nuclear detonation, running through the equations, would that bomb exist inside your mind?
Having a good mental model of someone and "consulting" it (apart from that model not matching the original anyway) seems to me more like your brain playing "what if", with the accompanying consciousness and assorted properties still belonging to you pretending what-if, not to the what-if itself.
My cached reply: "taboo exist".
This whole train of discussion started with
I'd argue that those characteristics of sapience still belong to the system that's playing "what-if", not to the what-if itself. There, no exist :-)
It might not be entirely off base to say that a Batman or at least part of a Batman exists under those circumstances, if your representation of Batman is sophisticated enough and if this line of thought about modeling is accurate. It might be quite different from someone else's Batman, though; fictional characters kind of muddy the waters here. Especially ones who've been interpreted that many different ways.
The line between playing what-if and harboring a divergent cognitive object -- I'm not sure I want to call it a mind -- seems pretty blurry to me; I wouldn't think there'd be a specific point at which your representation of a friend stops being a mere what-if scenario, just a gradually increasing independence and fidelity as your model gets better and thinking in that mode becomes more natural.
I think the best way to say it is to say that Batman-as-Batman does not exist, but Batman-as-your-internal-representation-of-Batman does exist. I most certainly agree though that the distinction can be extremely blurry.
--Sam Harris
If you can't appeal to reason to make reason appealing, you appeal to emotion and authority to make reason appealing.
You put them into a social environment where the high-status people value logic and evidence. You give them the plausible promise that they can increase their status in that environment by increasing the amount that they value logic and evidence.
How would this encourage them to actually value logic and evidence instead of just appearing to do so?
I think the most common human tactic for appearing to care is to lie to themselves about caring until they actually believe they care; once this is in place they keep up appearances by actually caring if anyone is looking, and if people look often enough this just becomes actually caring.
The subject's capacity for deception is finite, and will be needed elsewhere. Sooner or later it becomes more cost-effective for the sincere belief to change.
I generally agree with your point. The problem with the specific application is that the subject's capacity for thinking logically (especially if you want the logic to be correct) is even more limited.
If the subject is marginally capable of logical thought, the straightforward response is to try stupid random things until it becomes obvious that going along with what you want is the least exhausting option. Even fruit flies are capable of learning from personal experience.
In the event of total incapacity at logical thought... why are you going to all this trouble? What do you actually want?
That depends on how much effort you're willing to spend on each subject verifying that they're not faking.
That is breathtakingly both the most cynical and beautiful thing I have read all day :)
Postcynicism FTW!
People tend to conform to their peers' values.
And for that matter, to start believing what they behave as if they believe.
It's not a question of encouragement. Humans tend to want to be like the high-status folk that they look up to.
Want to be like or appear to be like? I'm not convinced people can be relied on to make the distinction, much less choose the "correct" one.
Or do they want to be like those folks appear to be like?
Maybe the idea could gain popularity from a survival-island type reality program in which contestants have to measure the height of trees without climbing them, calculate the diameter of the earth, or demonstrate the existence of electrons (in order of increasing difficulty).
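The first of those challenges is just right-triangle trigonometry: pace off a known distance from the trunk, sight the treetop, and measure the angle of elevation. A minimal sketch (the function name, the measurements, and the eye-height default are all illustrative, not from the comment):

```python
import math

def tree_height(distance_m, elevation_deg, eye_height_m=1.6):
    """Estimate a tree's height without climbing it.

    Stand distance_m metres from the trunk and measure the angle of
    elevation to the treetop (e.g. with a protractor and a weighted
    string). The height above eye level is distance * tan(angle);
    add the observer's eye height to get the total.
    """
    return distance_m * math.tan(math.radians(elevation_deg)) + eye_height_m

# 20 m from the trunk, treetop sighted 40 degrees above horizontal:
print(round(tree_height(20, 40), 1))  # → 18.4 (metres)
```

The same similar-triangles idea (comparing the tree's shadow to a metre stick's shadow) works with no angle measurement at all, and scaling the trick up to shadows at two distant latitudes is essentially Eratosthenes' method for the earth's circumference.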
This reminds me of
which I believe is a paraphrasing of something Jonathan Swift said, but I'm not sure. Anyone have the original?
I don't think this is empirically true, though. Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about "crime being on the rise" all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words.
Then you show me some statistics, and I change my mind.
In general, I think a supermajority of our starting opinions (priors, essentially) are held for reasons that would not pass muster as 'rational,' even if we were being generous with that word. This is partly because we have to internalize a lot of things in our youth and we can't afford to vet everything our parents/friends/culture say to us. But the epistemic justification for the starting opinions may be terrible, and yet that doesn't mean we're incapable of having our minds changed.
The chance of this working depends greatly on how significant the contested fact is to your identity. You may be willing to believe abstractly that crime rates are down and public safety is up after being shown statistics to that effect -- but I predict that (for example) a parent who'd previously been worried about child abductions after hearing several highly publicized news stories, and who'd already adopted and vigorously defended childrearing policies consistent with this fear, would be much less likely to update their policies after seeing an analogous set of statistics.
I agree, but I think part of the process of having your mind changed is the understanding that you came to believe those internalized things in a haphazard way. And you might be resisting that understanding for the reasons @Nornagest mentions -- you've invested in them or incorporated them into your identity, for example. I think I'm more inclined to change the quote to
to make it slightly more useful in practice, because often changing the person's mind will require not only knowing the more accurate facts or proper reasoning, but also knowing why the person is attached to his old position -- and people generally don't reveal that until they're ready to change their mind on their own.
Oops, I guess I wasn't sure where to put this comment.