Rationality Quotes November 2013
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
Comments (388)
-- Marvin Minsky
And your experiences to date, which is also a thing about reality.
-Paul Halmos
Math with Bad Drawings
From an article about Obamacare.
James Franklin, The Science of Conjecture: Evidence and Probability before Pascal
-- Peter Drucker
"It is far better to improve the [quality] of testing first than to improve the efficiency of poor testing. Automating chaos just gives faster chaos." -- Mark Fewster & Dorothy Graham, Software Test Automation
Economist and likely future chairperson of the Federal Reserve Board Janet Yellen shows the key rationality trait of being able to admit you were wrong.
Alternatively, she thought that kind of a lie would be well received. It's a widely used social skill to admit you were wrong even though you think you weren't.
Robert Burton, from On Being Certain: Believing You’re Right Even When You’re Not, reminding me of Epiphany Addictions
It looked like nonsense to me. I stopped reading after a few sentences.
I'm not saying I'm immune to epiphany addiction, but I want the good stuff.
I thought it was a puzzle or riddle, so I went back and looked at it again. My first guess was that it was something to do with running, then paper airplanes (which can be made from newspaper, but not a magazine). The rock as anchor made me realize there needed to be something attached, which made me realize it was a kite.
On the other hand, I don't have any trouble seeing alternative interpretations; perhaps it's because I already tried several and came to the conclusion myself. (Or maybe it's just that I'm more used to looking at things with multiple interpretations; it's a pretty core skill to changing one's self.)
Then again, I also don't see the paragraph as infused with irreversible knowing. I read the words literally every time, and have to add words like, "for flying a kite" to the sentences in order to make the link. I could just as easily add "in bed", though, at which point the paragraph actually becomes pretty hilarious -- much like a strung-together collage of fortune cookie quotes... in bed. ;-)
:-D
The reason I posted the link to epiphany addiction was that this quote is an example of how confusion doesn't feel good (it prompted you to stop reading...), and that "sense of knowing" feels pleasant. The danger being that we have very little control over when we feel either, so the feeling of knowing is no substitute for rationality.
Thanks. I had no idea that was what you had in mind.
"The feeling of knowing" is probably worth examining in detail.
Sorry no cite, but I heard about a prisoner whose jailers talked nonsense to him for a week. When they finally asked him a straight question it was such a relief he blurted out the answer.
-- Reid Hastie & Robyn Dawes (Rational Choice in an Uncertain World)
The "Ulysses" reference is to the famous Ulysses pact in the Odyssey.
On not doing the impossible:
-Article at The Atlantic
Will Burns
Scott Adams, in How to Fail at Almost Everything and Still Win Big
-- Peter Drucker
-- David Chapman
Saying that something is ineffable and saying that nothing we can say is meaningful without the exact same shared experience are rather different things. To use your own example, comparison is possible - so we can imperfectly describe chocolate in terms of sugar and (depending on the type) bitterness, even if our audience has never heard of chocolate.
Conveniently, this allows us to roughly fathom experiences that nobody has ever had. Playwrights, for example, set out to create an experience that does not yet exist and prompt actors to react to situations they have never lived through, and through their capability to generalize they can imperfectly communicate their ideas.
-- Ian Hacking, Images of Science: Essays on Realism and Empiricism
Sean Carroll
Sometimes it's disturbing how good Sean Carroll is at articulating my thoughts, especially when it pertains to, as above, the philosophy of science. Here's another:
Nate Silver
(h/t Rob Wiblin)
-- Sam Starfall, FreeFall #1516
--John Ciardi
-- Peter Drucker, The Effective Executive
--Richard Feynman
Analects of Confucius
-- Mother Goose
Presumably, a wise implementation of this quote would consider a continuum of remedies, ranging from mild treatment of symptoms to vaccination against the possibility of ever contracting the ailment. Even if there is no cure for an ailment, there is still value in mitigating its negative effects.
--Paul Waldman, "Why Isn't Everyone More Worried about Me?", November 11, 2013
Oh, Death was never enemy of ours!
We laughed at him, we leagued with him, old chum.
No soldier's paid to kick against His powers.
We laughed, knowing that better men would come,
And greater wars: when each proud fighter brags
He wars on Death, for lives; not men, for flags.
-Wilfred Owen
Julien Smith
Corollary 1: Always try to be that person.
Corollary 2: If they are on offense or defense, check with yourself what you expect to gain from continuing with the debate.
Disputed. Some people are naturally on the defensive even when debating true propositions. Defensiveness, though, is more often a bad sign, since somebody defending a false proposition that they know on some level to be false is more likely to try to hold territory and block their opponent's progress. Many advocating true propositions very commonly go on the offensive, nor is it clear to me that this is always wrong in human practice.
Nitpicking, but the quote stated that people who are on neither offensive nor defensive are people you can learn from - it didn't say that people who are on the offensive or defensive are necessarily wrong to do so.
I'm not sure that's just a nitpick. It's a mistake so common that it should probably be listed under biases. It might be a variation on availability bias-- what's actually mentioned fills in the mental space so that the cases which aren't mentioned get ignored.
One reading: "offense" as "trying to lower another's status" and "defense" as "trying to preserve one's own status". The people you can learn from are the ones whose brains focus on facts rather than status.
I'm not sure if this is relevant.
In a technical sense, of course you can learn from people on offense/defense, since they are giving you information.
--Stuart Diamond, Getting More, 2010, pp. 51-52
I was once at a meetup, and there were some people there new to LessWrong. After listening to a philosophical argument between two long-time meetup group members, where they agreed on a conclusion that was somewhere between their original positions, a newcomer said "sounds like a good compromise," to which one of the old-comers (?) said "but that has nothing to do with whether it's true... in fact now that you point that out I'm suspicious of it."
Later in the meetup, an argument ended with another conclusion that sounded like a compromise. I pointed it out. One of the arguers was horrified to agree with me that compromising was exactly what he was doing.
Is this actually a failure mode though, if you only "compromise" with people you respect intellectually? In retrospect, this sounds kind of like an approximation to Aumann agreement.
Each side should update on the other's arguments and data, and on the fact that the other side believes what it does (insofar as we can't perfectly trust our own reasoning processes). This often means they update towards the other's position. But it certainly doesn't mean they're going to update so much as to agree on a common position.
You don't need to try to approximate Aumann agreement because you don't believe that either yourself or the other party is perfectly rational, so you can't treat your or the other's beliefs as having that kind of weight.
Also, people who start out looking for a compromise might be led to compromise in a bad way: A's theory predicts ball will fall down, B's theory predicts ball will fall up, compromise theory predicts it will stay in place, even though both A and B have evidence against that.
Part of intellectual debate is that you judge arguments on their merits instead of negotiating what's true. Compromising suggests that you are involved in a negotiation over what's true instead of searching for the real truth.
-Goro Shimura on Yutaka Taniyama
Jonathan Safran Foer, Extremely Loud and Incredibly Close (emphasis mine)
The "23 Enigma" is the Discordian belief that all events are connected to the number 23, given enough ingenuity on the part of the interpreter.
Apophenia.
Opportunity costs of time?
Source.
--Benjamin Franklin
--Paul Graham
Dupe.
I can't help but wonder if he's overcompensating due to a certain incident.
West Hunter
The article contains the line:
What's wrong here? Four digits of accuracy for brain size and no error bars? That's a sign of someone being either intentionally or unintentionally dishonest.
Quick Googling shows that there's a published paper stating that Europeans' average cranial capacity is 1347 cm³.
Rather than describing the facts as they are, he paints things as more certain than they are. I think that people who do that in an area where false beliefs lead to people being descrimited are in no position to complain when they receive some social scorn.
How meaningful are figures on brain size without figures on overall body size?
Well, he did say “about”.
That's close enough not to affect his point, or even the ordering. I think you're engaging in motivated continuing to avoid having to acknowledge conclusions you find uncomfortable.
Do you also apply the same criticism to the (much larger number of) people who make (much larger) errors in the direction of no difference? Also, could you taboo what you mean by "descrimited"? Steelmanning suggests you mean "judged according to inaccurate priors", yet you also seem to be implying that inaccurately egalitarian priors aren't a problem.
Whatever the problem with non-factually-based equality may be, it is not a problem of discrimination, so the same criticism does not apply.
This gets back to the issue that neither you nor Christian have defined what you mean by "discrimination". I gave one definition: "judged according to inaccurate priors", according to which your comment is false. If you want to use some other definition, please state it.
Why would you think we are not using it in the standard sense? "Discrimination is the prejudicial and/or distinguishing treatment of an individual based on their actual or perceived membership in a certain group or category"
By that reasoning, refusing to hire someone who doesn't have good recommendations is discrimination, because you're giving him distinguishing treatment (refusing to hire him) based on membership in a category (people who lack good recommendations).
I think you have some assumptions that you need to make explicit, after thinking them through first. (For instance, one obvious change is to replace "category" with "irrelevant category", but that won't work.)
First, sorry for the typo.
Claiming four digits of accuracy means understating the uncertainty about the difference by a factor of more than ten.
Understanding the uncertainty that exists is vital for reasoning effectively about what's true.
Different people have different goals. If your goal is the search for truth, then it matters greatly whether what you are saying is true.
If your goal is to spread memes that produce social change, then it makes sense to use different criteria.
What does discrimination mean? If a job application with a name that's common among black people gets rejected while an identical one with a name that's common among white people gets accepted, that would be an example of bad discrimination.
Does it matter if having said name is in fact correlated with job performance?
Only if it's still correlated when you control for anything else on the CV and cover letter, incl. the fact that the candidate is not currently employed by anyone else.
Being correlated isn't very valuable in itself. Even if you do believe that blacks on average have a lower IQ, scores on standardized tests tell you a lot more about someone's IQ.
The question would be whether the name is a better predictor of job performance than grades for distinguishing people in the pool of applicants, or whether the information that comes from the name adds additional predictive value.
But even if various proxies of social status would perform well as predictors, I still value high social mobility. Policies that increase it might not be in the interest of the particular employer but of interest to society as a whole.
Emphasis mine. I don't think this is the question at all, because you also have the grade information; the only question is if grades screen off evidence from names, which is your second option. It seems to me that the odds that the name provides no additional information are very low.
To the best of my knowledge, no studies have been done which submit applications where the obviously black names have higher qualifications in an attempt to determine how many GPA points an obviously black name costs an applicant. (Such an experiment seems much more difficult to carry out, and doesn't have the same media appeal.)
So, this "only question" formulation is a little awkward and I'm not really sure what it means. For my part I endorse correctly using (grades + name) as evidence, and I doubt that doing so is at all common when it comes to socially marked names... that is, I expect that most people evaluate each source of information in isolation, failing to consider to what extent they actually overlap (aka, screen one another off).
ChristianKl brought up the proposition "(name)>(grades)", where > means that the prediction accuracy is higher, but the truth or falsity of that proposition is irrelevant to whether or not it's epistemically legitimate to include name in a decision, which is determined by "(name+grades)>(grades)".
Doing things correctly is, in general, uncommon. But the shift implied by moving from 'current' to 'correct' is not always obvious. For example, both nonsmokers and smokers overestimate the health costs of smoking, which suggests that if their estimates became more accurate, we might see more smokers, not less. It's possible that hiring departments are actually less biased against people with obviously black names than they should be.
It's even possible that if the costs of smoking are overestimated, more people should be smoking-- part of the campaign against smoking is to underestimate the pleasures and social benefits of smoking.
...insofar as their current and future estimates of health costs are well calibrated with their actual smoking behavior, at least. Sure.
Well, it's odd to use "bias" to describe using observations as evidence in ways that reliably allow more accurate predictions, but leaving the language aside, yes, I agree that it's possible that hiring departments are not weighting names as much as they should be for maximum accuracy in isolation... in other words, that names are more reliable evidence than they are given credit for being.
That said, if I'm right that there is a significant overlap between the actual information provided by grades and by names, then evaluating each source of information in isolation without considering the overlap is nevertheless a significant error.
Now, it might be that the evidential weight of names is so great that the error due to not granting it enough weight overshadows the error due to double-counting, and it may be that the signs are such that double-counting leads to more accurate results than not double-counting. Here again, I agree that this is possible.
But even if that's true, continuing to erroneously double-count in the hopes that our errors keep cancelling each other out isn't as reliable a long-term strategy as starting to correctly use all the evidence we have.
That in no way implies that it would be a good choice for people to smoke more. People don't make those decisions through rational analysis.
If you combine a low-noise signal with a high-noise signal, the combined signal can be of medium noise. Combining information isn't always useful if you want to use both signals as proxies for the same thing.
For combining information in such a way, you would have to believe that the average black person with an IQ of 120 will get a higher GPA score than the average white person of the same IQ.
I think there is little reason to believe that's true.
Without actually running a factor analysis on the outcomes of hiring decisions, it will be very difficult to know in which direction it would correct the decision.
Even if you do run a factor analysis, integrating additional variables costs you degrees of freedom, so it's not always a good choice to integrate as many variables as possible into your model. Simple models often outperform more complicated ones.
Humans are also not good at combining multiple sources of information.
Agreed that if you have P(A|B) and P(A|C), then you don't have enough to get P(A|BC).
But if you have the right objects and they're well-calibrated, then adding in a new measurement always improves your estimate. (You might not be sure that they're well-calibrated, in which case it might make sense to not include them, and that can obviously include trying to estimate P(A|BC) from P(A|C) and P(A|B).)
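A minimal counterexample (my construction, not from the thread): two joint distributions over (A, B, C) that agree on P(A|B) and P(A|C) but disagree on P(A|BC), showing why the pairwise conditionals alone don't pin down the joint conditional.

```python
from itertools import product

def cond(joint, event, given):
    """P(event | given) under a joint distribution {(a, b, c): prob}."""
    num = sum(p for w, p in joint.items() if event(w) and given(w))
    den = sum(p for w, p in joint.items() if given(w))
    return num / den

# B and C are fair independent coins in both joints.
# Joint 1: A = B xor C.  Joint 2: A is a third independent fair coin.
joint1 = {(b ^ c, b, c): 0.25 for b, c in product([0, 1], repeat=2)}
joint2 = {(a, b, c): 0.125 for a, b, c in product([0, 1], repeat=3)}

for j in (joint1, joint2):
    p_ab = cond(j, lambda w: w[0] == 1, lambda w: w[1] == 1)   # P(A|B)
    p_ac = cond(j, lambda w: w[0] == 1, lambda w: w[2] == 1)   # P(A|C)
    p_abc = cond(j, lambda w: w[0] == 1,
                 lambda w: w[1] == 1 and w[2] == 1)            # P(A|BC)
    print(p_ab, p_ac, p_abc)
# joint1: 0.5 0.5 0.0 -- joint2: 0.5 0.5 0.5, same pairwise conditionals
```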
Not quite. Regression to the mean implies that you should apply shrinkage which is as specific as possible, but this shrinkage should obviously be applied to all applicants. (Regressing black scores to the mean, and not regressing white scores, for example, is obviously epistemic malfeasance, but regressing black scores to the black mean and white scores to the white mean makes sense, even if the IQ-grades relationship is the same for blacks and whites.)
It could also be that the GPA-job performance link is different for whites and blacks, even if the IQ-GPA link is the same for whites and blacks. (And, of course, race could impact job performance directly, but it seems likely the effects should be indirect for almost all jobs.)
If you're just comparing GPAs, rather than GPAs weighted by course difficulty, there could be a systematic difference in the difficulty of classes that applicants take by race. I've had a hard time getting numerical data on this, for obvious reasons, but there are rumors that some institutions may have a grade bias in favor of blacks. (Obviously, you can't fit a parameter to a rumor, but this is reason to not discount an effect that you do see in your data.)
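For concreteness, a toy version of the group-specific shrinkage described above (my illustration; the group means and the reliability r are made-up numbers, not estimates from any data):

```python
def shrink(score, group_mean, r=0.6):
    # r is an assumed reliability: 1 means trust the raw score fully,
    # 0 means fall back entirely on the group mean.
    return r * score + (1 - r) * group_mean

# Two hypothetical applicants with the same raw score of 120,
# drawn from groups with different means:
print(shrink(120, group_mean=100))  # 112.0
print(shrink(120, group_mean=90))   # 108.0 -- same score, shrunk further
```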
Yes, but... motivated cognition alert. If you're building models correctly, you take this into account by default, and so there's no point in bringing it up for any particular input because you should already be checking it for every input.
Could you explain your reasoning here?
IQ is a strong predictor of academic performance, and a 1.5 sd gap is a fairly significant difference. The only thing I could think of to counterbalance it so that the average white would get a higher GPA would be through fairly severe racial biases in grading policies in their favor, which seems at odds with the legally-enforced racial biases in admissions / graduation operating in the opposite direction. Not to mention that black African immigrants, legal ones anyway, seem to be the prototype of high-IQ blacks who outperform average whites.
I am a little puzzled by the claim, which leads me to believe I've misunderstood you somehow or overlooked something fairly important.
I missed the qualification of speaking of whites with the same IQ. I added it via an edit.
Source is here. SD for Asians and Europeans is 35, SD for Africans was 85. N=20,000.
...no? Why in the world would he present error bars? The numbers are in line with other studies, without massive uncertainty, and irrelevant to his actual, stated and quoted, point.
His stated point is about telling things that everybody is supposed to know.
If you have an SD of 35 for an average of 1362, you have no idea whether the last digit should be a 2. That means you either state an error interval or you round to 1360.
Human height changed quite a bit over the last century: http://www.voxeu.org/article/reaching-new-heights-how-have-europeans-grown-so-tall . Taking data about human brain size with four-digit accuracy and assuming that it hasn't changed over the last 30 years is wrong.
Europeans gained a lot of body mass over the last 100 years due to better nutrition. Claiming that brain size is static to four digits, in a way where you could use 30-year-old data to describe today's situation, gives the impression that human brain size is something relatively fixed.
The difference in brain size between Africans and Europeans in that study is roughly comparable to the difference in height between today's Europeans and Europeans 100 years ago.
Given that background, taking a three-decades-old average from one sample population and claiming with four digits of accuracy that it's the average that exists today is wrong.
If individual datapoints have an SD of 35, and you have 20000 datapoints, then the SD of studies like this is 35/sqrt(20000)≈0.24. So giving a one's digit for the average is perfectly reasonable.
According to the paper, the total mean brain size for males is 1,427 while for females it's 1,272. Given around half women and half men, the per-datapoint SD should be higher than 35.
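Both calculations in this exchange can be checked directly (a sketch using the figures quoted above; the mixture-variance identity for an equal-weight mix of two groups is standard, while the within-sex SD of 35 is the number under dispute):

```python
import math

# Standard error of a mean of 20,000 datapoints with per-point SD 35
# (the computation two comments up):
print(35 / math.sqrt(20000))              # ~0.25

# If males average 1,427 and females 1,272, and each sex had a
# within-group SD of 35, an even mix of the sexes would have
# SD = sqrt(sigma_within^2 + (delta/2)^2):
delta = 1427 - 1272
print(math.sqrt(35**2 + (delta / 2)**2))  # ~85 -- well above 35
```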
No, that was absolutely not his point. I don't understand how you could have come away thinking that- literally the entire next paragraph directly stated the exact opposite:
More generally, that was not a tightly reasoned book/paper about brain size. That line was a throwaway point in support of a minor example ("For example, average brain size is not the same in all human populations") on a short blog post. Arguments about the number of significant figures presented, when you don't even disagree about the overall example or the conclusion, are about as good an example of bad disagreement as I can imagine.
SMBC comics on the relative proximity of excretory and reproductive outlets in humans.
Evo-devo (that is to say, actual real science) gives an even better account of that accident of evolutionary history. For simple sessile animals, reproduction often involves dumping quantities of spores or gametes into the environment. And what other system already dumps quantities of stuff into the environment...?
Okay, but why should the reproductive outlets be there too?
I agree connotationally, but the comic only answers half of the question.
I am a fan of SMBC, but the entire explanation is wrong. The events that led to the integration of reproductive and digestive systems happened long before the terrestrial existence of vertebrates, and certainly long before hands. To get a start on a real explanation you have to go back to early bilaterians:
http://www.leeds.ac.uk/chb/lectures/anatomy9.html
As near as I can tell it was about pipe reuse. But you can't make a funny comic about that (or maybe you can?). Zach is a "bard", not a "wizard." He entertains.
Natural selection also led us to breathe and eat through the same hole. Seriously???? This causes so many problems. Well, not enough problems for natural selection to change it, I guess.
Having two (three, technically) holes you can breathe through has its advantages. Ever had a nasty head cold that clogs your sinuses so badly you can't breathe?
You still have just one pharynx, though.
Being able to smell what you're chewing is a huge advantage. I suppose achieving that some other way could get pretty convoluted.
" a morally blind, fickle, and tightly shackled tinkerer" (1) who "should be in jail for child abuse and murder"(2)
(1) Russell Powell & Allen Buchanan, "Breaking evolution's chains: the prospect of deliberate genetic modification in humans," in J. Savulescu & Ruud ter Meulen (eds.), Enhancing Human Capacities, Wiley-Blackwell, 2011.
(2) Nick Bostrom, "In defense of posthuman dignity," Bioethics, vol. 19, no. 3, pp. 202-214, 2005.
There is no escape from evolution (variation and selection).
Deliberate genetic selection is just more complicated evolution.
Sure there is. Organisms could, in theory, create perfect replicas without variation for selection to act on. Contrariwise, they could create new organisms depending on what they needed that would bear no relation to themselves and would not reproduce in kind (or at all).
If I could write an AI, the last thing I'd want is to make it reproduce with random variations. If I could genetically engineer myself or my children, I'd want to introduce deliberate changes and eliminate random ones. (Apart from some temporary exceptions like the random element in our current immune systems.)
I think you're overusing the term "evolution". If you let it include any kind of variation (deliberate design) and any kind of selection (deliberate intelligent selection), you can't make any predictions that would hold for all "evolving" systems.
In which theory? I don't think this is true if temperatures are above absolute zero, for example.
I suspect that you're being too restrictive- it doesn't seem like variation has to be blind, and selection done by replication, for 'evolution' to be meaningful. Now, blind biological evolution and engineering design evolution will look different, but it seems reasonable to see an underlying connection between them.
Skitter the bug girl on morality, consequentialism and metaethics in Worm, the online serial recommended by Eliezer for HPMoR withdrawal symptoms.
We're also biased toward believing we're in one of those circumstances when we're not.
Yep, and the part after the quote alludes to that.
James A. Donald
Exact same argument. Does it sound equally persuasive to you?
I'd extend Eugene's reply and point out that both the original and modified version of the sentence are observations. As such, it doesn't matter that the two sentences are grammatically similar; it's entirely possible that one is observed and the other is not. History has plenty of examples of people who are willing to do harm for a good cause and end up just doing harm; history does not have plenty of examples of people who are willing to cut people open to remove cancer and end up just cutting people open.
Also, the phrasing "to end malaria" isn't analogous to "to remove cancer" because while the surgery only has a certain probability of working, the uncertainty in that probability is limited. We know the risks of surgery, we know how well surgery works to treat cancer, and so we can weigh those probabilities. When ending malaria (in this example), the claim that the experiment has so-and-so chance of ending malaria involves a lot more human judgment than the claim that surgery has so-and-so chance of removing cancer.
Yes, but keep in mind the danger of availability bias; when people are willing to do harm for a good cause, and end up doing more good than harm, we're not so likely to hear about it. Knut Haukelid and his partners caused the death of eighteen civilians, and may thereby have saved several orders of magnitude more. How many people have heard of him? But failed acts of pragmatism become scandals.
Also, some people (such as Hitler and Stalin) are conventionally held up as examples of the evils of believing that ends justify means, but in fact disavowed utilitarianism just as strongly as their critics. To quote Yvain on the subject, "If we're going to play the "pretend historical figures were utilitarian" game, it's unfair to only apply it to the historical figures whose policies ended in disaster."
We already have a situation where we can cause harm to innocent people for the general good. It's called taxes.
Since I got modded down for that before, here's a hopefully less controversial example: the penal system. If you decide that your society is going to have a penal system, you know (since the system isn't perfect) that your system will inevitably punish innocent people. You can try to take measures to reduce that, but there's no way you can eliminate it. Nobody would say we shouldn't put a penal system into effect because it is wrong to harm innocent people for the greater good--even though harming innocent people for the greater good is exactly what it will do.
I don't think anyone really objects to hurting innocent people for the greater good. The kind of scenarios that most people object to have other characteristics than just that and it may be worth figuring out what those are and why.
It seems to me that utilitarianism decides how to act based on what course of action benefits people the most; deciding who counts as people is not itself utilitarian or non-utilitarian.
And even ignoring that, Hitler and Stalin may be valuable as examples because they don't resemble strict utilitarianism, but they do resemble utilitarianism as done by fallible humans. Actual humans who claim that the ends justify the means also try to downplay exactly how bad the end is, and their methods of downplaying that do resemble ideas of Hitler and Stalin.
Can you provide examples of this? In my experience, while utilitarianism done by fallible humans may be less desirable than utilitarianism as performed by ideal rationalists, the worst failures of judgment on an "ends justify the means" basis tend not to come from people actually proposing policies on a utilitarian basis, but from people who were not utilitarians whose policies are later held up as examples of what utilitarians would do, or from people who are not utilitarians proposing hypotheticals of their own as what policies utilitarianism would lead to.
Non utilitarians in my experience generally point to dangers of a hypothetical "utilitarianism as implemented by someone much dumber or more discriminatory than I am," which is why for example in Yvain's Consequentialism FAQ, the objections he answered tended to be from people believing that utilitarians would engage in actions that those posing the objections could see would lead to bad consequences.
Utilitarianism as practiced by fallible humans would certainly have its failings, but there are also points of policy where it probably offers some very substantial benefits relative to our current norms, and it's disingenuous to focus only on the negative or pretend that humans are dumber than they actually are when it comes to making utilitarian judgments.
Another way that a penal system is extremely likely to harm innocents is that the imprisoned person may have been supplying a net benefit to their associates in non-criminal ways, and they can't continue to supply those benefits while in prison. This is especially likely for some of the children of prisoners, even if the prisoners were guilty.
I am unsure how to map decisions under uncertainty to evidence about values as you do here.
A still-less-controversial illustration: I am shown two envelopes, and I have very high confidence that there's a $100 bill in exactly one of those envelopes. I am offered the chance to pay $10 for one of those envelopes, chosen at random; I estimate the EV of that chance at $50, so I buy it. I am then (before "my" envelope is chosen) offered the chance to pay another $10 for the other envelope, this chance to be revoked once the first envelope is selected. For similar reasons I buy that too.
I am now extremely confident that I've spent $10 for an empty envelope... and I endorse that choice even under reflection. But it seems ridiculous to conclude from this that I endorse spending $10 for an empty envelope. Something like that is true, yes, but whatever it is needs to be stated much more precisely to avoid being actively deceptive.
It seems to me that if I punish a hundred people who have been convicted of a crime, even though I'm confident that at least some of those people are innocent, I'm in a somewhat analogous situation to paying $10 for an empty envelope... and concluding that I endorse punishing innocent people seems equally ridiculous. Something like that is true, yes, but whatever it is needs to be stated much more precisely to avoid being actively deceptive.
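The envelope arithmetic, spelled out (a sketch of the numbers already given above):

```python
prize, price = 100, 10

ev_single = 0.5 * prize       # EV of one random envelope: $50, so pay $10
net_both = prize - 2 * price  # owning both: a guaranteed $100 for $20
print(ev_single - price)      # 40 -- each purchase is +EV in expectation
print(net_both)               # 80 -- and jointly a sure gain, even though
# it is certain that one of the two $10 payments bought an empty envelope.
```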
In your example, you are presenting "I think you should spend $10 for an empty envelope" as a separate activity, and you are being misleading because you are not putting it into context and saying "I think you should spend $10 for an empty envelope, if this means you can get a full one".
With the justice system example, I am presenting the example in context--that is, I am not just saying "I think you should harm innocent people", I am saying "I think you should harm innocent people, if other people are helped more". It's the in-context version of the statement that I am presenting, not the out-of-context version.
I (and James Donald) agree. Remember that the traditional ethical laws this is based on also have traditional exceptions, e.g., for punishment and war, and additional laws governing when and how those exceptions apply. The thing to remember is that you are not allowed to add to the list of exceptions as you see fit, nor are you allowed to play semantic games to expand them. In particular, no "war on poverty", or "war on cancer", even "war on terror" is pushing it.
I think you're misunderstanding me. We all know that most ethical systems think it's okay to punish criminals. I'm not referring to the fact that criminals are punished, but the fact that when we try to punish criminals we will, since no system is perfect, inevitably end up punishing some innocent people as well. Those people did nothing wrong, yet we are hurting them, and for the greater good.
This is no different from the fact that it's okay to fly planes even though some of them will inevitably crash.
Note that if a judge punishes someone who turns out to be innocent, we believe he should feel guilty about this rather than simply shrugging and saying "mistakes will happen". Similarly if an engineer makes a mistake that causes a plane to crash.
Just like not all people punished are guilty, not all innocent people punished are discovered; there's always going to be a certain residue of innocent people who are punished, but not discovered, with no guilty judges or anything else to make up for it. Hurting such innocent people is nevertheless an accepted part of having a penal system.
The second sentence is an empirical observation that is clearly false in your example.
Utilitarianism isn't a description of human moral processing, it's a proposal for how to improve it.
One problem is that if we, say, start admiring people for acting in "more utilitarian" ways, what we may actually be selecting for is psychopathy.
Agreed. Squicky dilemmas designed to showcase utilitarianism are not generally found in real life (as far as I know). And a human probably couldn't be trusted to make a sound judgement call even if one were found. Running on untrusted hardware and such.
Ah- and this is the point of the quote. Oh, I like that.
Our nature is not purely utilitarian, but I wouldn't go so far as to say that utilitarianism is not in our nature. There are things we avoid doing regardless of how they advance our goals, but most of what we do is to accomplish goals. If you can't understand that there are things you need to do to eat, then you won't eat.
Strawman. Does any moral system anyone's ever proposed say we should never attempt to accomplish goals?
I agree that utilitarianism is "not in our nature," but what has this to do with rationality?
Utilitarianism is pretty fundamental around here. Not everyone here agrees with it, but pretty much all ethical discussions here take it as a precondition for even having a discussion. The assertion that we are not, cannot be, and never will be utilitarians is therefore very relevant.
If you are suggesting by that emphasis on "nature" that we might act to change our nature and remake ourselves into better utilitarians, I would ask, if we are in fact not utilitarians, why should we make ourselves so? Infatuation with the tidiness of the VNM theorem?
We us::should try to be as utilitarian as we can because our intuitive morality is kind of consequentialist, so we care about how the world actually ends up, and utilitarianism helps us win.
If we ever pass up a chance to literally hold one child's face to a fire and end malaria, we have screwed up. We are not getting what we care about most.
It's not the "tidiness" in any aesthetic sense of VNM axioms that are important, it's the not-getting-money-pumped. Not being able to be money pumped is important not because getting money pumped is stupid and we can't be stupid, but because we need to use our money on useful stuff.
In another comment James A. Donald suggests a way torturing children could actually help cure malaria:
Would you be willing to endorse this proposal? If not, why not?
If I'm not fighting the hypothetical, yes I would.
If I encountered someone claiming that in the messy real world, then I'd run the numbers VERY carefully and most likely conclude that the probability of him actually telling the truth and being sane is infinitesimal. Specifically, of those claims, the one that it'd be easier to kidnap someone than to find a volunteer (say, an adult willing to do it in exchange for giving their family a large sum of money) sounds highly implausible.
What's your opinion of doing it Tuskegee-style, rather than kidnapping them or getting volunteers? (One could believe that there might be a systematic difference between people who volunteer and the general population, for example.)
That disease already had a treatment, hence the experiment was not going to save the millions suffering, since they were already saved. Also, those scientists didn't have good enough methodology to have gotten anything useful out of it in either case. There's a general air of incompetence surrounding the whole thing that worries me more than the morality.
As I said: before doing anything like this you have to run your numbers VERY carefully. The probability of any given study solving a disease on its own is extremely small, and there are all sorts of other practical problems. That's the thing; utilitarianism is correct, and not answering according to it is fighting the hypothetical, but in cases like this perhaps you should fight the hypothetical, since you're using specific historical examples that very clearly did NOT have positive utility and did NOT run the numbers.
It's a fact that a specific type of utilitarianism is the only thing that makes sense if you know the math. It's also a fact that there are many ifs and buts that make non-utilitarian human moral intuition a heuristic far more reliable for actually achieving the greatest utility than trying to run the numbers yourself in the vast majority of real-world cases. Finally, it's a fact that most things done in the name of ANY moral system are actually bullshit excuses.
http://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment
What do you think of that utilitarian calculation? I'm not sure what I think of it.
It seems like either (1) Rivers was deceived, or (2) she was in some other way unaware that there was already an effective cure for syphilis which was not going to be given to the experimental subjects, or (3) the other options available to these people were so wretched that they were worse than having syphilis left untreated.
In cases 1 and 2, it doesn't really matter what we think of her calculations; if you're fed sufficiently wrong information then correct algorithms can lead you to terrible decisions. In case 3, maybe Rivers really didn't have anything better to do -- but only because other circumstances left the victims of this thing in an extraordinarily terrible position to begin with. (In much the same way as sawing off your own healthy left arm can be the best thing to do -- if someone is pointing a gun at your head and will definitely kill you if you don't. That doesn't say much about the merits of self-amputation in less ridiculous situations.)
I find #3 very implausible, for what it's worth.
(Now, if the statement were that Rivers believed that the benefits to the community outweighed the risks, and indeed the overt harm, to the subjects of the experiment, that would be more directly to the point. But that's not what the article says.)
In general, given ethical norms as they currently exist, rather than in a hypothetical universe where everyone is a strict utilitarian, I think the expected returns on such an experiment are unlikely to be worth the reputational costs.
The Tuskegee experiment may have produced some useful data, but it certainly didn't produce returns on the scale of reducing global syphilis incidence to zero. Likewise, even extensive experimentation on abducted children is unlikely to do so for malaria. The Tuskegee experiment though, is still seen as a black mark on the reputation of medical researchers and the government; I've encountered people who, having heard of it, genuinely believed that it, rather than the extremely stringent standards that currently exist for publishable studies, was a more accurate description of the behavior of present researchers. That sort of thing isn't easy to escape.
Any effective utilitarian must account for the fact that we're operating in a world which is extremely unforgiving of behavior such as cutting up a healthy hospital visitor to save several in need of organ transplants, and condition their behavior on that knowledge.
Here's one with actual information gained: Imperial Japanese experimentation about frostbite
The cost of this scientific breakthrough was borne by those seized for medical experiments. They were taken outside and left with exposed arms, periodically drenched with water, until a guard decided that frostbite had set in. Testimony from a Japanese officer said this was determined after the "frozen arms, when struck with a short stick, emitted a sound resembling that which a board gives when it is struck."
I don't get the impression that those experiments destroyed a lot of trust-- nothing compared to the rape of Nanking or Japanese treatment of American prisoners of war.
However, it might be worth noting that that sort of experimentation doesn't seem to happen to people who are affiliated with the scientists or the government.
Logically, people could volunteer for such experiments and get the same respect that soldiers do, but I don't know of any real-world examples.
It's hard for experiments to destroy trust when those doing the experiments aren't trusted anyway because they do other things that are as bad (and often on a larger scale).
Actual medical conspiracies, such as the Tuskegee syphilis experiment, probably contribute to public credence in medical conspiracy theories, such as anti-vax or HIV-AIDS denialism, which have a directly detrimental effect on public health.
Probably.
In a culture of ideal rationalists, you might be better off having a government-run lottery where people were randomly selected for participation in medical experiments, with participation upon selection being mandatory for any experiment, whatever its effects on the participants, and all experiments being approved only if their expected returns were more valuable than any negative effect (including loss of time) imposed on the participants. But we're a species which is instinctively more afraid of sharks than stairs, so for human beings this probably isn't a good recipe for social harmony.
The question is not "would this be a net benefit" (and it probably would, as much as I cringe from it). The question is, are there no better options?
Such as? Experimenting on animals? That will probably cause progress to be slower and think about all the people who would die from malaria in the meantime.
Yes. How many more? Would experimenting on little girls actually help that much? Also consider that many people consider a child's life more valuable than an adult's; that even in a world where you would not have to kidnap girls, evade legal problems, and deal with psychological costs on the scientists, caring for little humans is significantly more expensive than caring for little mice; that said kidnapping, legal, and psychological costs do exist; and that you could instead spend that money on mosquito nets and the like and save lives that way...
The answer is not obviously biased towards "experiment on little girls." In fact, I'd say it's still biased towards "experiment on mice." Morality isn't like physics; the answer doesn't always add up to normality, but a whole lot of the time it does.
...
So your answer is that in fact it would not work. That is a reasonable response to an outrageous hypothetical. Yet James A. Donald suggested a realistic scenario, and beside it, the arguments you come up with look rather weak.
Given the millions killed by malaria and at most thousands of experimental subjects, it takes a heavy thumb on the scales of this argument to make the utilitarian calculation come out against.
This is a get-out-of-utilitarianism-free card. A real utilitarian simply chooses the action of maximum utility. He would only pay a psychological cost for not doing that. When all are utilitarians the laws will also be utilitarian, and an evaluation of utility will be the sole criterion applied by the courts.
You are not a utilitarian. Neither is anyone else. This is why there would be psychological costs and why there are legal obstacles. You feel obliged to pretend to be a utilitarian, so you justify your non-utilitarian repugnance by putting it into the utilitarian scales.
But not any more expensive than caring for chimpanzees. Where, of course, "care for" does not mean "care for", but means "keep sufficiently alive for experimental purposes".
This looks like motivated reasoning. The motivation, to not torture little children, is admirable. But it is misapplied.
Can you expand on what you see as the differences?
Oh wait, we're talking about an entire society that's utilitarian and rational. In that case I'm (coordinating with everyone else via Aumann agreement) just dedicating the entire global population to a monstrous machine for maximally efficient FAI research, where 99% of people are suffering beyond comprehension with no regard for their own well-being in order to support a few elite researchers as they dedicate literally every second of their lives to thinking at maximal efficiency while pumped full of nootropics that'll kill them in a few years.
Endorse? You mean, publicly, not on LessWrong, where doing so will get me much more than downvotes, and still have zero chance of making it actually happen? Of course not, but that has nothing to do with whether it's a good idea.
I meant "endorse" in the sense that, unlike the Milgram experiment, there is no authority figure to take responsibility on your behalf.
Do you think it's a good idea?
If it will actually work, and there are no significant bad consequences we're missing (significant as in at least the size of malaria being cured faster), or there are significant bad consequences but they're balanced out by significant good consequences we're missing, then yes.
Vox Day
I upvoted this comment, but I want to add an important caveat. Whether, and how much, you trust your own judgment over that of an expert should depend at least in part on the degree to which you think your situation is unusual.
The IT guy wants you to shut up and go away, but (if in fact he is an expert and not a trained monkey reading a script) he's not going to spout random nonsense at you just to get you to leave. He's going to tell you things relevant to what is, in his experience, the usual situation.
Consider well whether you're sure your problem is some special snowflake. The IT guy has seen a lot of issues. Sometimes he can, before you finish your first sentence, know exactly what your problem is and how to fix it, and if he sounds bored when he tells you "just reboot it", that doesn't mean that he's wrong. If it costs you little, try his advice first.
The expert also is better equipped to discern whether a situation is unusual, because the expert has seen more.
To the non-expert, something really mysterious and weird must be going on to explain these puzzling symptoms. Computer A can ping computer B, but B can't ping A? That's so strange! After all, ping is supposed to test whether two computers can talk to each other on the network, right? How could it possibly work one way but not the other? Is something wrong with the switch? Is one of the network cards broken? Is it a virus?!
To the expert, that's not unusual at all. One computer has the wrong subnet mask set. Almost every time. Like, that's 20 to 100 times more likely than a hardware problem or something broken in the network infrastructure, and it can be checked in seconds. And while the machine may have a virus too, that's not what causes these symptoms.
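For instance, here is a sketch of how the wrong-subnet-mask diagnosis plays out (the addresses are made up; the point is just that the two hosts disagree about what counts as the local network, so one side ARPs directly while the other hands its replies to a gateway that was never needed):

```python
import ipaddress

# Host A is configured correctly (/24); host B, on the same LAN,
# was given the wrong mask (/25).
a = ipaddress.ip_interface("192.168.1.10/24")
b = ipaddress.ip_interface("192.168.1.200/25")

print(b.ip in a.network)  # True  -- A thinks B is on-link, ARPs directly
print(a.ip in b.network)  # False -- B thinks A is off-link and routes via
                          # its gateway, so traffic works one way only
```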
Very true as well, though I will add the counter-caveat that the expert is usually biased toward concluding that your situation is not unusual. This is why many "tech support horror stories" have a bit where the narrator goes "... and then, when they finally got it through their heads that yes, I had tried restarting it five times, and no, I didn't have the wrong settings ..."
I suspect there are a couple of things going on there.
One, it's important to distinguish consulting an expert from consulting a tech support script. Most of the time when you call up tech support, you're talking to a human being, but not an expert. You're talking to a person whose job it is to execute a script in order to relieve the experts from dealing with the common cases.
(And yes, it's in the interest of a consumer tech-support department to spend as little money on expensive experts as they can get away with — which is why when a Windows box has gotten laggy, they say "reboot it" and not "pop open the task manager and see what's using 100% of your CPU". They don't want to diagnose the long-term problem (your Scrabble game that you left running in the background has a bug that makes it busy-wait if it's back there for 26 hours); they want to make your computer work now and get you off the line. That's a different case from, for instance, an institutional IT department (at, say, a university) that has to maintain a passable reputation with the faculty who actually care about getting their research done.)
Two, there's narrative bias. The much-more-numerous cases where the simple fix works don't make for good "horror stories", so you don't hear them retold. Especially the ones where the poor user is now embarrassed because they have to admit they were outguessed by a tech-support script after giving the support tech a hard time.
(Yeah, I like good tech support too; that's part of why I use the local awesome option (Sonic.net) for my ISP instead of Comcast. I can call them up and talk to someone who actually knows what ARP means. But sometimes the problem does go away for months when you power-cycle the damn modem.)
Well, we don't know that they're actually biased in this direction until we know how their assessment of the probability that the usual thing is going on compares to the actual probability that the usual thing is going on.
Yes, there are plenty of "tech support horror stories" where the consultant has a hard time catching on to the fact that the complainant is not dealing with a usual or trivial problem, but for every one of those, there tends to be a slew of horror stories from the other end, of people getting completely wound up over something that the consultant can solve trivially, and failing to follow the simple advice needed to do so.
The consultants could be very well calibrated, and still occasionally be dramatically wrong. Beware availability bias.
This brings up another related problem, namely how often supposed "experts" actually aren't.
snip
Dupe.
GK Chesterton
I don't really like quotes like this. It's not that it isn't true, and it's not that no one commits the error it warns against.
It's that no one who is blind to fallacies due to popularity is going to notice their mistake and change - it's too easy to agree with the quote without firing up the process that would lead you to making the mistake.
Good quotes will make it easy to put yourself in either position so that you can mentally bridge the two. If you're thinking "I can't imagine how they might make that mistake!", then you won't recognize that thought process when you go through it yourself.
--Paul
Found here.
Eric Raymond
I don't get it.
Eric Raymond
Google Is My Friend.
Nassim Taleb
What's the difference between "based on computation of the odds" and "based on some model"?
Taleb is doing some handwaving here.
"Some model" in this context is just the assumption of a specific probability distribution. So if, for example, you believe that the observation values are normally distributed with the mean of 0 and the standard deviation of 1, the chance of seeing a value greater than 3 (a "three-sigma value") is 0.13%. The chance of seeing a value greater than 6 (a "six-sigma value") is 9.87e-10. E.g. if your observations are financial daily returns, you effectively should never ever see a six-sigma value. The issue is that in practice you do see such values, pretty often, too.
The problem with Taleb's statement is that estimating the probabilities of seeing certain values in the future necessarily requires some model, even if an implicit one. Without one you cannot do the "computation of the odds", unless you are happy with the conclusion that the probability of seeing a value you've never seen before is zero.
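To make the "assumption of a specific probability distribution" concrete, here is that arithmetic (a sketch; the one-observation-per-trading-day rate is my assumption):

```python
from scipy.stats import norm

print(norm.sf(3))  # ~1.35e-03 -- a "three-sigma" daily return, 0.13%
print(norm.sf(6))  # ~9.87e-10 -- a "six-sigma" daily return

# Under the normal model, at ~252 trading days a year, a six-sigma
# day "should" occur about once every four million years:
print(1 / (norm.sf(6) * 252))  # ~4.0e+06
```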
Taleb's criticism of the default assumption of normality in much of financial analysis is well-founded. But when he starts to rail against models and assumptions in general, he's being silly.
So, this.
Hmm. But, if you multiply "once in every ten thousand years" by all the different kinds of things that could be said to happen once every ten thousand years, don't you get something closer to "many times a day"?
The computation is not relevant, because when you make a prediction that, say, some excursion in the stock market will happen only once in ten thousand years, you are making a prediction about that specific thing, not ten thousand things. It will be a thing you have never seen, because if you had seen it happen, you could not claim it would only happen once in ten thousand years—the observation would be a refutation of that claim. Since you have not seen it, you are deriving it from a theory, and moreover a theory applied at an extreme it has never been tested at. For such a prediction to be reliable, you need to know that your theory actually grasps the basic mechanism of the phenomenon, so that the observations that you have been able to make justify placing confidence in its extremes. This is a very high bar to reach. Here are a few examples of theories where extremes turned out to differ from reality:
Newtonian gravity --> precession of Mercury
Ideal gas laws --> non-ideal gases
Daltonian atomic theory --> multiple isotopes of the same element
The computation is directly relevant, given that Taleb is talking about how often he sees "should only happen every N years" in newspapers and faculty news. Doesn't he realise how many things newspapers report on? Astronomy faculties are pretty good for this too, since they watch ridiculous numbers of stars at once.
You can't just ignore the multiple comparisons problem by saying you're only making a prediction about "one specific thing". What about all the other predictions about the stock market you made, that you didn't notice because they turned out to be boringly correct?
Intuition pump: my theory says that the sequence of coinflips HHHTHHTHTT-THHTHHHTT-TTHTHTTTTH-HTTTHTHHHTT, which I just observed, should happen about once every 7 million years.
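The arithmetic behind that "7 million years" figure (a reconstruction; the assumed rate of one 40-flip trial every few minutes is my guess at what was intended):

```python
p = 0.5 ** 40          # probability of one specific 40-flip sequence
trials_needed = 1 / p  # ~1.1e12 trials to expect one occurrence

minutes_per_trial = 3  # assumed: a fresh 40-flip trial every 3 minutes
years = trials_needed * minutes_per_trial / (60 * 24 * 365)
print(f"{years:.1e}")  # ~6.3e+06 -- the right order of magnitude
```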
Intuition pump: if I choose an interesting sequence of coinflips in advance, I will never see it actually happen if the coinflips are honest. There aren't enough interesting sequences of 40 coinflips to ever see one. Most of them look completely random, and in terms of Kolmogorov complexity, most of them are: they cannot be described much more compactly than by just writing them out.
Now, we have a good enough understanding of the dynamics of tossed coins to be fairly confident that only deliberate artifice would produce a sequence of, say, 40 consecutive heads. We do not have such an understanding of the sort of things that appear in the news as "should only happen every N years".
Feynman on the same theme.
Every sequence of 40 coin flips is interesting. Proof: make a 1-to-1 correspondence between sequences of 40 coin flips and a subset of the natural numbers by setting H=1 and T=0 and reading the sequence as a binary representation. Proceed by showing that every natural number is interesting.
-Tyrion Lannister, Game of Thrones
I'm always eager to upvote a Game of Thrones quote, but unfortunately I don't see the rationality insight here beyond an ordinary quid pro quo.
Tyrion is frequently put into situations where he relies on his family's reputation for paying debts.
It's a real-life Newcomb-like problem - specifically a case of Parfit's Hitchhiker - illustrating the practical benefits of being seen as the sort of agent who keeps promises. It's not an ordinary quid-pro-quo because there is, in fact, no incentive for Tyrion to keep his end of the bargain once he gets what he wants other than to be seen as the sort of person who keeps his bargain.
Think it's a stretch?
Ahem.
Er...right. Realistic, I should have said!
We often construct such ridiculous scenarios to illustrate this sort of thing ..."You're in a desert and a selfish pseudo-psychic drives by"? Really?
I enjoyed the fact that Parfit's Hitchhiker came up as a pop-culture reference, in a situation that arose organically.
The point of these scenarios is make the issue as "clean" as possible, to strip away all the unnecessary embellishments which usually only cause people to fight the hypothetical.
I guess what's inside the screenwriter's skull is organic... :-)
But really, since the invention of writing pretty much every writer who addressed the issue pointed out the importance of one's reputation of keeping promises. There are outright commands (e.g. Numbers 30:2 If a man ... swears an oath to bind himself by a pledge, he shall not break his word. He shall do according to all that proceeds out of his mouth.) and innumerable stories and fables about good things which happen to those who keep their promises and bad things which happen to those who don't.
I don't disagree with what you say, but I do disagree with the connotation that things which are not original or counter intuitive are not worth pointing out.
The last time this show was quoted, it basically amounted to "try hard to win, give it everything", which is also something that people have been saying since the beginning of writing. All quote threads are filled with things that have been said again and again in slightly different ways. Even outside of quote threads, it's worth rephrasing things. Pretty much every Lesswrong post has been conceptually written before by someone, with a few rare exceptions.
Yes, but usually it's a punishment or reward issued directly from the other party, or by forces of nature...not about the practical value of going out of your way to establish reputation.
Something about the opposite of Parfit's hitchhiker? Developing a reputation for following through on promises one could renege on.
-- Bas van Fraassen, Laws and Symmetry
Publilius Syrus
I'm not sure that's true in general. I can think of situations where the prudent course of action is to act as fast as possible. For instance, if you accidentally set yourself on fire on the cooker, if you are acting prudently, you will stop, drop and roll, and do it hastily.
The more I look at this, the less sure I am what "hastily" means.
More precisely... if I understand "hastily" to mean, roughly, "more rapidly/sloppily than prudence dictates", then this statement is trivially true. If I assume the statement is nontrivial, I'm not sure how to test whether something is being done hastily.
Trivial statements are often useful as reminders of facts, particularly when those facts are tradeoffs we would rather not have to face.
Harrap's First Law
"The enemy of my enemy has their own relationship with me."
Maxim 29
Guilded Age
Edit: misspelling of "write" corrected.
Write, not right.
Sorry if you feel this is nitpicky; it broke up my concentration.