Rationality Quotes August 2013
Another month has passed and here is a new rationality quotes thread. The usual rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
St. Francis of Assisi (allegedly)
Robert Wright, The Moral Animal
Josh Billings
(h/t Robin Hanson)
Alfred, Lord Tennyson, Ulysses
More of an anti-death quote, but:
"Must I accept the barren Gift?
-learn death, and lose my Mastery?
Then let them know whose blood and breath
will take the Gift and set them free:
whose is the voice and whose the mind
to set at naught the well-sung Game-
when finned Finality arrives
and calls me by my secret Name.
Not old enough to love as yet,
but old enough to die, indeed-
-the death-fear bites my throat and heart,
fanged cousin to the Pale One's breed.
But past the fear lies life for all-
perhaps for me: and, past my dread,
past loss of Mastery and life,
the Sea shall yet give up Her dead!
.....
So rage, proud Power! Fail again,
and see my blood teach Death to die!"
-- The Silent Lord, Deep Wizardry, Diane Duane
-- Daniel Dennett, Consciousness Explained
David Chapman
See also: "Figuring out what should be your top priority" vs. "Actually working on your current best guess".
The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.
Fred P. Brooks, No Silver Bullet
This is true, but the connotations need to be applied cautiously. Complexity is necessary, but it is still something to be minimised wherever practical. Things should be as simple as possible but not simpler.
More concretely, sometimes software can be simplified and improved at the same time.
This isn't necessarily true if the complexity is very intuitive. If it takes ten thousand lines of code to accurately describe the action "jump three feet in the air", then those ten thousand lines of code are describing what a jump is, what to do while in mid-air, what it means to land, and other things that humans may grasp intuitively (assuming that the actor is constructed in a manner similar to a human).
Additionally, there are some complex features which are not specific to the software. We don't need to describe how a particular program receives feedback from the motor and sensors, how it translates the input of its devices, if these features are common to most similar programs - the description of those processes is part of the default, part of the background that we assume along with everything else we don't need to derive from fundamental physics.
In other words, the complexity of software may correspond to a feature which humans may be able to understand as simple - because we have the prior knowledge necessary, courtesy of common nature and nurture. A full description of complexity is necessary if and only if it is surprising to our intuition.
That is, in some sense, his point - a phrase like "jump three feet in the air" does abstract away most of the computational essence, making it seem like a trivial problem when it really, really isn't.
I've always had misgivings about this quote. In my experience about 90% of the code on a large project is an artifact of a poor requirement analysis/architecture/design/implementation. (Sendmail comes to mind.) I have seen 10,000-line packages melting away when a feature is redesigned with more functionality and improved reliability and maintainability.
A luxury, once sampled, becomes a necessity. Pace yourself.
Andrew Tobias, My Vast Fortune
When a concept is inherently approximate, it is a waste of time to try to give it a precise definition.
-- John McCarthy
Thus, whenever you look in a computer science textbook for an algorithm which only gives approximate results, you will find that the algorithm itself is very vaguely specified, since the result is just an approximation anyway.
(I would have said: "When a concept is inherently fuzzy, it is a waste of time to give it a definition with a sharp membership boundary.")
Thus we merely require citizens to "be responsible adults" before they can vote rather than give a sharp boundary such as 18 years old, college applications tell you "don't write a long, rambling essay" rather than enforce a 500-word limit, and food packaging specifies "sometime in September" for the expiration date.
Sharp membership boundaries are useful to make it easy to test for the concept. Even if the concept is fuzzy and the test is imperfect, this doesn't need to be a waste of time.
Though sometimes it's even more useful to acknowledge that the sharp-boundaried concept we're testing for is different from, though perhaps expected to be correlated with in some way, the fuzzy concept we were initially interested in.
That helps us avoid the trap of believing that 17-year-olds aren't responsible adults but 18-year-olds are, or that 550-word essays are long and rambling but 450-word essays aren't, or that food is safe to eat on September 25 but not on September 29. None of that is true, but that's OK; we aren't actually testing for whether voters are responsible adults, essays are long and rambling, or food is expired.
Just because humans do it doesn't mean it's a good idea.
To clarify, I also think all of these are good ideas; not necessarily the best possible, but definitely useful.
It doesn't prove it's a good idea, but it's evidence in its favour.
Well, sure. But that doesn't mean it's very strong evidence: I'd expect to see an average human (or nation) do something stupid almost as often as they do something intelligent.
We are obviously starting from very different premises. To me, the fact that lots of people do something is very strong evidence that the behaviour is, at least, not maladaptive, and the burden of proof is very much on the person suggesting that it is. And the more widespread the behaviour, the stronger the burden.
Alternatively, you could just look at the evidence. When legal systems have replaced bright-line rules with 15-factor balancing tests, has that led to better outcomes for society as a whole? Consider in particular the criteria for the Rule of Law. In the mid-20th century, coincident with high modernism and utilitarianism, these multi-part, multi-factor balancing tests were all the rage. Why are they now held in such disdain?
Unfortunately, the fact that lots of people do something may merely be an indication of a very successful meme: consider major religions.
I will certainly grant that having a sharp restriction is better than a 15-factor balancing test, but I'm not arguing for 15-factor balancing tests.
I'd go further, but I've just noticed that I don't really have much evidence for this belief, and I should probably go see how accomplished Chinese universities (which judge purely off the gaokao) are versus American universities first.
Sharp membership boundaries, however, often result in people forgetting the fuzziness of the concept - there are some people who vote without being responsible adults, because they can; an essay can be boring and rambling at 450 words or impressive and concise at 600; and food can be good a bit past its expiration date (it doesn't usually go in the other direction in my experience, presumably because the risk of eating spoiled food vastly outweighs the risk of mistakenly tossing out good food, so expiration dates are the very early estimates).
The opposite intellectual sin to wanting to derive everything from fundamental physics is holism which makes too much of the fact that everything is ultimately connected to everything else. Sure, but scientific progress is made by finding where the connections are weak enough to allow separate theories.
-- John McCarthy
Hazrat Inayat Khan
ibid.
But Naaman was wroth, and went away...And his servants came near, and spake unto him, and said, My father, if the prophet had bid thee do some great thing, wouldest thou not have done it? how much rather then, when he saith to thee, Wash, and be clean?
2 Kings 5: 11-13
Micah 6: 7-8
--multiple sutras
"In theory, there is no difference between theory and practice. In practice, there is."
Dupe.
Scott Adams
Aka http://demotivators.despair.com/demotivational/stupiditydemotivator.jpg
"Quitters never win, winners never quit, but those who never win AND never quit are idiots"
From the same website, another piece of LessWrongian wisdom:
This is an incredibly important life skill.
Richard Hamming, The Art of Doing Science and Engineering (1997, PDF)
Le Bovier de Fontenelle
This explains all those urges I get to burn witches, my talent at farming, all my knowledge at hunting and tracking and my outstanding knack for feudal political intrigue.
(Composition is not the relationship to previous minds that education entails. Can someone think of a better one?)
We rest upon the frontal lobes of giants.
Derivation.
Much better.
Is that a praise of educated minds, or a caution against too readily classifying a mind as educated?
(Possibly related: http://lesswrong.com/lw/1ul/for_progress_to_be_by_accumulation_and_not_by/)
From the description of him on Wikipedia, I am certain it is the former, although the bone wedrifid picks with "composed" is symptomatic of where he falls short of his contemporary, Voltaire. He was a most refined, civilised, intelligent, and educated writer, very popular among the intellectual class, and achieved memberships of distinguished academic societies, but his strength, a great one indeed, was in writing well on what was already known, and he created little that was new. Voltaire's name lives to this day, but Fontenelle's, while important in his time, does not.
Scholarship is indeed a virtue, but Fontenelle's was not in service of a higher goal.
I read it as expressing the same view as The Neglected Virtue of Scholarship.
David Deutsch, The Beginning of Infinity
Hmm, this point seems more Kuhnian than Popperian. Maybe Deutsch got the two confused.
Another view.
Did Karl Popper populate his class with particularly unimaginative students ? If someone asked me to "observe", I'd fill an entire notebook with observations in less than an hour -- and that's even without getting up from my chair.
I'm pretty sure I had this very exercise in a creative-writing class somewhere in school.
And, while you were writing, someone would provide the wanted answer ;)
That's an interesting prediction. Have you tried it? Can you predict what you'd do after filling the notebook?
In my imagination, I'd probably wind up in one of two states:
I have never tried it myself in a structured setting, such as a classroom; but I do sometimes notice things, and then ask myself, "What is going on here? Why does this thing behave the way it does?" Sometimes I think about it for a while, figure out what sounds like a good answer, then go on with my day. Sometimes I shrug and forget about it. Sometimes -- very rarely -- I'm interested enough to launch a more thorough investigation. I imagine that if I set myself an actual goal to "observe" stuff, I'd notice a lot more stuff, and spend much more time investigating it.
You say that, in such a situation, you could end up "feeling tricked", but this assumes that the teacher who told you to "observe" is being dishonest: he's not interested in your observations, he's just interested in pushing his favorite philosophy onto you. This may or may not be the case with Karl Popper, but observations are valuable (and, IMO, fun) regardless.
Ernest Rutherford
That sounds like a ridiculous thing to say and I can't really steelman it.
Do you have a reliable source for this quote? The Wikipedia talk page for the Rutherford article contains this exchange:
The quote itself, while still on the page, references this site which is an unsourced quote collection.
OK, maybe the quote isn't legit, but after all quite a lot of our favorite quotes are misquotations—that's not the point. It's an interesting thought even if no Nobel laureate ever said it. Is it ridiculous? It makes a lot of sense to me.
In addition to gwern's reply, if you read it as 10-to-1 to 12-to-1 odds, or even 1012-to-1 odds, and not 10^12-to-1 odds, then obviously there are lots of physical theories that deal with events that are less likely than 1/1012. And lots of experiments whose outcome people are more than 1012-to-1 sure about, and they are right to be so sure.
You quoted the most ridiculous figure, that of 10-to-1 or 12-to-1. I'm quite legitimately more than 12-to-1 sure about some things in physics, and I'm not even a physicist! The Wikipedia talk quote makes the point that all three possible quotes are to be found on the internet.
It's ridiculous if taken literally as a universal prior or bound, because it's very easy to contrive situations in which refusing to give probabilities below 1/10^12 lets you be dutch-booked or otherwise screw up - for example,
log2(10^12) is about 40, so if I flip a fair coin 50 times, say, and ask you to bet on every possible sequence... (Or simply consider how many operations your CPU does every minute, and consider being asked "what are the odds your CPU will screw up an operation this minute?" You would be in the strange situation of believing that your computer is doomed even as it continues to run fine.) But it's much more reasonable if you consider it as applying only to high-level theories or conclusions of long arguments which have not been highly mechanized; I discuss this in http://www.gwern.net/The%20Existential%20Risk%20of%20Mathematical%20Error and see particularly the link to "Probing the Improbable".
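The arithmetic in this comment is easy to check directly; a minimal Python sketch (my own, just illustrating the comment's numbers):

```python
import math

# The quote's bound, read literally: odds of 10^12 to 1.
bound = 1 / 10**12

# log2(10^12) is just under 40, so ~40 fair coin flips already reach the bound...
print(math.log2(10**12))  # ~39.86

# ...and any *specific* sequence of 50 fair flips is well below it,
# yet you clearly should accept long odds against each particular sequence.
p_sequence = 0.5**50
print(p_sequence < bound)  # True: 2^-50 ≈ 8.9e-16 < 1e-12
```

So refusing to ever quote probabilities below 10^-12 forces incoherent bets on even this trivial setup.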
Yes, that's how I read it. Obviously it doesn't literally mean you can't be very sure about anything; the message is that science is wrong very often and you shouldn't bet too much on the latest theory. So even if it's a complete misquote, it's a nice thought.
--Delmore Schwartz, "Calmly We Walk Through This April's Day"; quoted by Mike Darwin on the GRG ML
-- Alfred Korzybski, Science and Sanity, p. 376 (1933)
Interesting, if indeed it is true. I'm not sure how this is supposed to be a rationality quote though.
It's a quote about thinking about how to think. It's not the standard way of thinking around here, but thinking interesting thoughts about thinking encourages rationality.
In the sense that you should be searching for the truth in both directions.
-- B. F. Skinner, Beyond Freedom and Dignity
Very close. I'd perhaps suggest that a person is less dignified when desperately seeking a reward that certainly isn't going to come.
Rudyard Kipling, The Jungle Book
I haven't studied quantum mechanics in any depth at all. The meaning I, as a layman, derive from this statement is: in the formal QM system a particle has no property labelled "position". There is perhaps an emergent property called position, but it is not fundamental and is not always well defined, just like there are no ice-cream atoms. Is this wrong?
Yes, it's wrong. In the QM formalism position is a fundamental property. However, the way physical properties work is very different from classical mechanics (CM). In CM, a property is basically a function that maps physical states to real numbers. So the x-component of momentum, for instance, is a function that takes a state as input and spits out a number as output, and that number is the value of the property for that state. Same state, same number, always. This is what it means for a property to have a well-defined value for every state.
In QM, physical properties are more complicated -- they're linear operators, if you want a mathematically exact treatment. But here's an attempt at an intuitive explanation: There are some special quantum states (called eigenstates) for which physical properties behave pretty much like they do in CM. If the particle is in one of those states, then the property takes the state as input and basically just spits out a number. Whenever the particle is in that state, you get the same number. For those states, the property does have a well-defined value.
But the problem in QM is that those are not the only states there are. There are other states as well. These states are linear combinations of the eigenstates, i.e. they correspond to sums of eigenstates (states in QM are basically just vectors, so you can sum them together). These linear combinations are not themselves eigenstates. When you input them into the property, it spits out multiple numbers, not just one. In fact it spits out all the numbers corresponding to each of the eigenstates that are summed together to form our linear combination state. So if A and B are eigenstates for which the property in question spits out numbers a and b respectively, then for the combined state A + B, the property will spit out both a and b -- two numbers, not just one.
So the property isn't just a simple function from states to numbers; for some states you end up with more than one number. And which of those numbers do you see when you make a measurement? Well, that depends on your interpretation. In collapse theories, for instance, you see one of the numbers chosen at random. In MWI, the world branches and each one of those numbers is seen on a separate branch. So there's the sense in which properties aren't well-defined in QM -- properties don't associate a unique number with every physical state. This is all pretty hand-wavey, I realize, but Griffiths is right. If you really want an understanding of what's going on, then you need to study QM in some depth.
Also, I should say that in MWI there is something to your claim that the position of a particle is emergent and not fundamental, but this is not so much because of the nature of the property. It's because particles themselves are emergent and non-fundamental in MWI. The universal wavefunction is fundamental.
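The "two numbers, not just one" behaviour above can be sketched numerically. This is my own illustration (not from Griffiths or the thread), using the Pauli-Z observable on a two-state system, with the Born rule supplying the probabilities a collapse interpretation would use:

```python
# Eigenstates of Pauli-Z are up = (1, 0) and down = (0, 1),
# with eigenvalues +1 and -1 respectively.

def apply_z(state):
    """Apply the Pauli-Z operator, Z = [[1, 0], [0, -1]], to a 2-vector."""
    a, b = state
    return (a, -b)

def born_probabilities(state):
    """Probability of seeing each eigenvalue (+1 or -1), via the Born rule."""
    a, b = state
    norm = abs(a)**2 + abs(b)**2
    return {+1: abs(a)**2 / norm, -1: abs(b)**2 / norm}

up = (1, 0)                    # eigenstate: Z just hands back the same state
superpos = (2**-0.5, 2**-0.5)  # (up + down)/sqrt(2): NOT an eigenstate

print(apply_z(up))                   # (1, 0): eigenvalue +1, every time
print(born_probabilities(up))        # {1: 1.0, -1: 0.0}: one well-defined value
print(born_probabilities(superpos))  # ~0.5 each: "two numbers, not just one"
```

For the eigenstate the property behaves classically (one number, always); for the superposition the same operator is associated with two possible outcomes, which is the hand-wavey sense in which the property has no single well-defined value.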
Thanks for the detailed explanation! Now I have more fun words to remember without actually understanding :-)
Seriously, thanks for taking the time to explain that.
Sarah Hoyt
I found this to be slightly unsettling when I realized it, though we may be talking about different things.
David Chapman thinks that using LW-style Bayesianism as a theory of epistemology (as opposed to just probability) lumps together too many types of uncertainty; to wit:
I think he is correct, and LWers are overselling Bayesianism as a solution to too many problems (at the very least, without having shown it to be).
I do not see why any of Chapman's examples cannot be given appropriate distributions and modeled in a Bayesian analysis just like anything else:
Dynamical chaos? Very statistically modelable, in fact, you can't really deal with it at all without statistics, in areas like weather forecasting.
Inaccessibility? Very modelable; just a case of missing data & imputation. (I'm told that handling issues like censoring, truncation, rounding, or intervaling are considered one of the strengths of fully Bayesian methods and a good reason for using stuff like JAGS; in contrast, whenever I've tried to deal with one of those issues using regular maximum-likelihood approaches it has been... painful.)
Time-varying? Well, there's only a huge section of statistics devoted to the topic of time-series and forecasts...
Sensing/measurement error? Trivial, in fact, one of the best cases for statistical adjustment (see psychometrics) and arguably dealing with measurement error is the origin of modern statistics (the first instances of least-squared coming from Gauss and other astronomers dealing with errors in astronomical measurement, and of course Laplace applied Bayesian methods to astronomy as well).
Model/abstraction error? See everything under the heading of 'model checking' and things like model-averaging; local favorite Bayesian statistician Andrew Gelman is very active in this area, no doubt he would be quite surprised to learn that he is misapplying Bayesian methods in that area.
One’s own cognitive/computational limitations? Not just beautifully handled by Bayesian methods + decision theory, but the former is actually offering insight into the former, for example "Burn-in, bias, and the rationality of anchoring".
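As a concrete illustration of the measurement-error bullet, here is a minimal conjugate Normal-Normal update (my own sketch, not an example from Chapman or this comment): assume a Normal prior on the true quantity and known Gaussian noise on each sensor reading; the posterior then both tracks the true value and quantifies the remaining uncertainty.

```python
def update(prior_mean, prior_var, measurement, noise_var):
    """Posterior over the true value after one noisy Gaussian measurement."""
    precision = 1 / prior_var + 1 / noise_var
    post_var = 1 / precision
    post_mean = post_var * (prior_mean / prior_var + measurement / noise_var)
    return post_mean, post_var

mean, var = 0.0, 100.0  # vague prior on the true value
for reading in [9.7, 10.4, 9.9, 10.2]:  # noisy sensor readings
    mean, var = update(mean, var, reading, noise_var=0.25)

print(round(mean, 2))  # posterior mean near 10, where the readings cluster
print(var < 0.25)      # True: posterior variance shrinks below the sensor noise
```

Each update simply adds precisions and takes a precision-weighted average, which is why measurement error is such a clean case for Bayesian methods.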
Expanding further on my previous reply, I believe that the claimed (by Gelman and Shalizi) non-Bayesian nature of model-checking is wrong: the truth is that everything that goes under the name of model-checking works, to the extent that it does, so far as it approximates the underlying Bayesian structure. It is not called Bayesian, because it is not an actual, numerical use of Bayes theorem, and the reason we are not doing that is because we do not know how: in practice we cannot work with universal priors.
So Bayesian ideas are applicable to the problem of model/abstraction error, but we cannot apply them numerically. In fact, that is pretty much what model/abstraction error means -- if we did have numbers, they would be part of the model. Model checking is what we do when we cannot calculate any further with numerical probabilities.
Cf. my analogy here with understanding thermodynamics.
I believe that would be Eliezer's response to Gelman and Shalizi. I would not expect them to be convinced though. Shalizi would probably dismiss the idea as moonshine and absurdity.
ETA: Eliezer on the subject:
ETA: Why is the grandparent at -4? David Chapman and simplicio may be wrong about this, but neither are saying anything stupid, or so much thrashed out in the past as to not merit further words.
Judging by the abstract I assume you meant to write, the latter is offering insight into the former?
Agreed about chaos, missing data, time series, and noise, but I think the next is off the mark:
He might be surprised to be described as applying Bayesian methods at all in that area. Model checking, in his view, is an essential part of "Bayesian data analysis", but it is not itself carried out by Bayesian methods. The strictly Bayesian part -- that is, the application of Bayes' theorem -- ends with the computation of the posterior distribution of the model parameters given the priors and the data. Model-checking must (he says) be undertaken by other means because the truth may not be in the support of the prior, a situation in which the strict Bayesian is lost. From "Philosophy and the practice of Bayesian statistics", by Gelman and Shalizi (my emphasis):
...
If anyone's itching to say "what about universal priors?", Gelman and Shalizi say that in practice there is no such thing. The idealised picture of Bayesian practice, in which the prior density is non-zero everywhere, and successive models come into favour or pass out of favour by nothing more than updating from data by Bayes theorem, is, they say, unworkable.
They liken the process to Kuhnian paradigm-shifting:
but find Popperian hypothetico-deductivism a closer fit:
For Gelman and Shalizi, model checking is an essential part of Bayesian practice, not because it is a Bayesian process but because it is a necessarily non-Bayesian supplement to the strictly Bayesian part: Bayesian data analysis cannot proceed by Bayes alone. Bayes proposes; model-checking disposes.
I'm not a statistician and do not wish to take a view on this. But I believe I have accurately stated their view. The paper contains some references to other statisticians who, they say, are more in favour of universal Bayesianism, but I have not read them.
Loath as I am to disagree with Gelman & Shalizi, I'm not convinced that the sort of model-checking they advocate such as posterior p-values are fundamentally and in principle non-Bayesian, rather than practical problems. I mostly agree with "Posterior predictive checks can and should be Bayesian: Comment on Gelman and Shalizi,'Philosophy and the practice of Bayesian statistics'", Kruschke 2013 - I don't see why that sort of procedure cannot be subsumed with more flexible and general models in an ensemble approach, and poor fits of particular parametric models found automatically and posterior shifted to more complex but better fitting models. If we fit one model and find that it is a bad model, then the root problem was that we were only looking at one model when we knew that there were many other models but out of laziness or limited computations we discarded them all. You might say that when we do an informal posterior predictive check, what we are doing is a Bayesian model comparison of one or two explicit models with the models generated by a large multi-layer network of sigmoids (specifically <80 billion of them)... If you're running into problems because your model-space is too narrow - expand it! Models should be able to grow (this is a common feature of Bayesian nonparametrics).
This may be hard in practice, but then it's just another example of how we must compromise our ideals because of our limits, not a fundamental limitation on a theory or paradigm.
Unless there's been an enormous breakthrough in the past 2 years, I believe this is still a major unsolved problem. Also decision theory is about cooperating with other agents, not overcoming cognitive limitations.
Note that I was speaking of "Bayesianism" as practiced on LW, not of Bayesian statistics the academic field. I do not believe these are the same.
I believe Chapman is writing a more detailed critique of what he sees here; I will be sure to link you to it when it comes.
I think that's absurd if that's what he really means. Just because we are not daily posting new research papers employing model-averaging or non-parametric Bayesian statistics does not mean that we do not think those techniques are useful and incorporated in our epistemology or that we would consider the standard answers correct, and this argument can be applied to any area of knowledge that LWers might draw upon or consider correct. If we criticize p-values as a form of building knowledge, is that not a part of 'Bayesian epistemology' because we are drawing arguments from Jaynes or Ioannidis and did not invent them ab initio?
'Your physics can't deal with modeling subatomic interactions, and so sadly your entire epistemology is erroneous.' '??? There's a huge and extremely successful area of physics devoted to that, and I have no freaking idea what you are talking about. Are you really as ignorant and superficial as you sound like, in listing as a weakness something which is actually a major strength of the physics viewpoint?' 'Oh, but I meant physics as practiced on LessWrong! Clearly that other physics is simply not relevant. Come back when LW has built its own LHC and replicated all the standard results in the field, and then I'll admit that particle physics as practiced on LW is the same thing as particle physics the academic field, because otherwise I refuse to believe they can be the same.'
I think you're not being charitable again. Consider the difference between physics as practiced by quantum woo mystics, and physics as practiced by physicists or even engineers. I think that simplicio is referring to a similar (though less striking) tendency for the representative LWer to quasi-religiously misapply and oversell probability theory (which may or may not be the case, but should be argued with something other than uncharitable ridicule).
I think you may be extrapolating much too far from the quote I posted. Also, my statistics level is well below both yours and Chapman's so I am not a good interlocutor for you.
I don't think I am. It's a very simple quote: "here is a list of n items Bayesian statistics and hence epistemology cannot handle; therefore, it cannot be right." And it's dead wrong because all n items are handled just fine.
I think you are being uncharitable. The list was of different types of uncertainty that Bayesians treat as the same, with a side of skepticism that they should be handled the same, not things you can't model with bayesian epistemology.
The question is not whether Bayes can handle those different types of uncertainty, it's whether they should be handled by a unified probability theory.
I think the position that we shouldn't (or don't yet) have a unified uncertainty model is wrong, but I don't think it's so stupid as to be worth getting heated about and being uncivil.
Did somebody solve the problem of logical uncertainty while I wasn't looking?
I disagree that Gwern is being uncivil. I don't think Chapman has any ground to criticize LW-style epistemology when he's made it abundantly clear he has no idea what it is supposed to be. (Indeed, that's his principal criticism: the people he's talked to about it tell him different things.)
It'd be like if Berkeley asked a bunch of Weierstrass' first students about their "supposed" fix for infinitesimals. Because the students hadn't completely grasped it yet, they gave Berkeley a rope, a rubber hose, and a burlap sack instead of giving him the elephant. Then Berkeley goes and writes a sequel to the Analyst disparaging this "new Calculus" for being incoherent.
In that world, I think Berkeley's the one being uncivil.
gwern, I am curious. You do a lot of practical data analysis. How often do you use non-Bayesian methods?
Pretty frequently (if you'll pardon the pun). Almost all papers are written using non-Bayesian methods, people expect results in non-Bayesian terms, etc.
Besides that: I decided years ago (~2009) that as appealing as Bayesian approaches were to me, I should study 'normal' statistics & data analysis first - so I understood them and why I didn't want to use them before I began studying Bayesian statistics. I didn't want to wind up in a situation where I was some sort of Bayesian fanatic who could tell you how to do a Bayesian analysis but couldn't explain what was wrong with the regular approach or why Bayesian approaches were better!
(I think I'm going to be switching gears relatively soon, though: I'm working with a track coach on modeling triple-jumping performance, and the smallness of the data suggests it'll be a natural fit for a multilevel model using informative priors, which I'll want to read Gelman's textbook on, and that should be a good jumping off point.)
Random question - if you were to recommend a textbook or two, from frequentist and Bayesian analysis both, to a random interested undergraduate...
(As you might guess, not a hypothetical, unfortunately.)
I believe you are posting this in the wrong thread.
-The Great Learning, one of the Four Books and Five Classics of Confucian thought.
I see small examples everywhere I look; they're just too specific to point the way to a general solution.
James Portnow/Daniel Floyd
-- Will Wildman, analysis of Ender's Game
-- Graduate student of our group, recognising a level above his own in a weekly progress report
Now I'm curious about the context...
It wasn't very interesting - some issue of how to make one piece of software talk to the code you'd just written and then store the output somewhere else. Not physics, just infrastructure. But the recognition of the levels was interesting, I thought. Although I do believe "literally five seconds" is likely an exaggeration.
This is good, although when I read the comic I find myself interpreting Eye as valuing curiosity for curiosity's sake alone, in direct opposition to valuing truth, which I can't really get behind and leads to me siding with the old man.
George Bernard Shaw
"Life is about creating yourself" still might be problematic because the emphasis is still on what sort of person you are.
As opposed to what? I would guess maybe a better concept is what you're able to get done...
I think the implied contrast is between "creating yourself" and "what you do" or the less pretty but more precise "doing your actions." The first implies a smaller, more rigid set than the last, which is perhaps not the correct way to perceive life.
I agree with the thought, but I find the attribution implausible. "Finding yourself" sounds like modern pop-psych, not a phrase that GBS would ever have written. Google doesn't turn up a source.
Google Ngram suggests that "finding yourself" wasn't a phrase that was really in use before the 1960s, albeit with a short uptick around 1940. Given that you need some time for criticism, and that Shaw died in 1950, I think it's quite clear that this quote is too modern for him. Although maybe post-modern is a more fitting word?
The timeframe seems to correspond with the rise of post-modern thought. If you suddenly start deconstructing everything, you need to find yourself again ;)
I think you are right that it is difficult to find the exact source. I came upon this quotation in the book Up, where the author quoted Bernard Shaw. Google gave me http://www.goodreads.com/author/quotes/5217.George_Bernard_Shaw, but no article or play was indicated as the source of this quote.
-- Iain M. Banks
This seems like a poor strategy by simply considering temper tantrums, let alone all of the other holes in this. (The first half of the comment though, I can at least appreciate.)
I, too, support the cause of opposing every such cause.
Is that both, or either/or? Because if it is either/or, it may include such atrocities as going to bed on time and eating vegetables. If it is both, it seems to imply that killing those not so beloved by children may be acceptable.
I wonder if people here realize how anti-utilitarian this quote is :-)
You seem to be implying that people here should care about things being anti-utilitarian. They shouldn't. Utilitarianism refers to a group of largely abhorrent and arbitrary value systems.
It is also contrary to virtually all consequentialist value systems of the kind actually held by people here or extrapolatable from humans. All consequentialist systems that match the quote's criteria for not being 'Fucked' are abhorrent.
It is not. "Murder and children crying" here are not means to an end, they are consequences as well. Maybe not intended consequences, maybe side effects ("collateral damage"), but still consequences.
I see no self-contradiction in a consequentialist approach which simply declares certain consequences (e.g. "murder and children crying") to be unacceptable.
Your point is perfectly valid, I think. Every action-guiding set of principles is ultimately all about consequences. Deontologies can be "consequentialized", i.e. expressed only through a maximization (or minimization) rule of some goal-function, by a mere semantic transformation. The reason why this is rarely done is, I suspect, because people get confused by words, and perhaps also because consequentializing some deontologies makes it more obvious that the goals are arbitrary or silly.
The traditional distinction between consequentialism and non-consequentialism does not come down to the former only counting consequences -- both do! The difference is rather about what sort of consequences count. Deontology also counts how consequences are brought about, that becomes part of the "consequences" that matter, part of whatever you're trying to minimize. "Me murdering someone" gets a different weight than "someone else murdering someone", which in turn gets a different weight from "letting someone else die through 'natural causes' when it could be easily prevented".
And sometimes it gets even weirder, the doctrine of double effect for instance draws a morally significant line between a harmful consequence being necessary for the execution of your (well-intended) aim, or a "mere" foreseen -- but still necessary(!) -- side-effect of it. So sometimes certain intentions, when acted upon, are flagged with negative value as well.
And as you note below, deontologies sometimes attribute infinite negative value to certain consequences.
Pardon me. I left off the technical qualifier for the sake of terseness. I have previously observed that all deontologial value systems can be emulated by (suitably contrived) consequentialist value systems and vice-versa so I certainly don't intend to imply that it is impossible to construct a consequentialist morality implementing this particular injunction. Edited to fix.
There is nothing about consequentialism which distinguishes means from ends. Anything that happens is an "end" of the series of actions which produced it, even if it is not a terminal step, even if it is not intended.
When wedrifid says that the quote is "anti-consequentialism", they are saying that it refuses to weigh all of the consequences - including the good ones. The negativity of children made to cry does not obliterate the positivity of children prevented from crying, but rather must be weighed against it, to produce a sum which can be negative or positive.
To declare a consequence "unacceptable" is to say that you refuse to be consequentialist where that particular outcome is involved; you are saying that such a consequence crashes your computation of value, as if it were infinitely negative and demanded some other method of valuation, which did not use such finicky things as numbers.
But even if there is a value which is negative, and 3^^^3 times greater in magnitude than any other value, positive or negative, its negation will always be of equal and opposite value, allowing things to be weighed against each other once again. In this example, a murder might be worth -3^^^3 utilons - but preventing two murders by committing one results in a net sum of +3^^^3 utilons.
The only possible world in which one could reject every possible cause which ends in murder or children crying is one in which it is conveniently impossible for such a cause to lead to positive consequences which outweigh the negative ones. And frankly, the world we live in is not so convenient as to divide itself perfectly into positive and negative acts in such a way.
Wikipedia: Consequentialism is the class of normative ethical theories holding that the consequences of one's conduct are the ultimate basis for any judgment about the rightness of that conduct. ... Consequentialism is usually distinguished from deontological ethics (or deontology), in that deontology derives the rightness or wrongness of one's conduct from the character of the behaviour itself rather than the outcomes of the conduct.
The "character of the behaviour" is means.
Consequentialism does not demand "computation of value". It only says that what matters is outcomes; it does not require that the outcomes be comparable or summable. I don't see that saying certain outcomes are unacceptable, full stop (= have negative infinity value), contradicts consequentialism.
You have a point, there are means and ends. I was using the term "means" as synonymous with "methods used to achieve instrumental ends", which I realize was vague and misleading. I suppose it would be better to say that consequentialism does not concern itself with means at all, and rather considers every outcome, including those which are the result of means, to be an end.
As for your other point, I'm afraid that I find it rather odd. Consequentialism does not need to be implemented as having implicitly summable values, much as rational assessment does not require the computation of exact probabilities, but any moral system must be able to implement comparisons of some kind. Even the simplest deontologies must be able to distinguish "good" from "bad" moral actions, even if all "good" actions are equal, and all "bad" actions likewise.
Without the ability to compare outcomes, there is no way to compare the goodness of choices and select a good plan of action, regardless of how one defines "good". And if a given outcome has infinitely negative value, then its negation must have infinitely positive value - which means that the negation is just as desirable as the original outcome is undesirable.
"Murder and children crying" aren't allowed to have negative weight in a utility function?
It's not about weight, it's about an absolute, discontinuous, hard limit -- regardless of how many utilons you can pile up on the other end of the scale.
Well, no. It's against the promise of how many utilons you can pile up on the other arm of the scale, which may well not pay off at all. I'm reminded of a post here at some point whose gist was "if your model tells you that your chances of being wrong are 3^^^3:1 against, it is more likely that your model is wrong than that you are right."
Yes, but the quote in no way concerns itself with the probability that such a plan will go wrong; rather, it explicitly includes even those with a wide margin of error, including "every" plan which ends in murder and children crying.
If your plan ends in murder and children crying, what happens if your plan goes wrong?
The murder and children crying fail to occur in the intended quantity?
If your plan requires you to get into a car with your family, what happens if you crash?
Well, getting into a car with your family is not inherently bad, so it's not a very good parallel... but if your overall point is that "expected value calculations do not retroactively lose mathematical validity because the world turned out a certain way", then that's definitely true.
I think that the "what if it all goes wrong" sort of comment is meant to trigger the response of "oh god... it was all for nothing! Nothing!!!". Which is silly, of course. We murdered all those people and made those children cry for the expected value of the plan. Complaining that the expected value of an action is not equal to the actual value of the outcome is a pretty elementary mistake.
The features of my plan which mitigate the result of the plan going wrong kick in, and the damage is mitigated. I don't go on vacation, despite the nonrefundable expenses incurred. The plan didn't end in death and sadness, even if a particular implementation did.
When the plan ends in murder and children crying, every failure of the plan results in a worse outcome.
This does not seem to follow. Failure of the plan could easily involve failure to cause the murder or crying to happen for a start. Then there is the consideration that an unspecified failure has completely undefined behaviour. Anything could happen, from extinction or species-wide endless torture to the outright creation of a utopia.
It's not a matter of "the plan might go wrong", it's a matter of "the plan might be wrong", and the universal part comes from "no, really, yours too, because you aren't remotely special."
Seems like one of those rules that apply to humans but not to a perfect rationalist, then.
Sounds about right to me.
As much as I love Banks, this sounds like a massive set of applause lights, complete with sparkling Catherine wheels. Sometimes, you have to do shitty things to improve the world, and sometimes the shitty things are really shitty, because we're not smart enough to find a better option fast enough to avoid the awful things resulting from not improving at all. "The perfect must not be the enemy of the good" and so on.
And sometimes you do shitty things because you think they will improve the world, but hey, even though the road to hell is very well-paved already, there's always a place for another cobblestone...
The heuristic of this quote is that it is a firewall against a runaway utility function. If you convince yourself that something will generate gazillions of utilons, you'd be willing to pay a very high price to reach this, even though your estimates might be in error. This heuristic puts a cap on the price.
The problem is that there are better heuristics out there. Look up "just war theory" for starters.
It's good as an exhortation to build a Schelling fence, but without that sentiment, it's pretty hollow. Reading the context, though, I agree with you: it's a reminder that feeling really sure about something and being willing to sacrifice a lot of you and other (possibly unwilling) people to create a putative utopia probably means you're wrong.
"Sorrow be damned, and all your plans. Fuck the faithful, fuck the committed, the dedicated, the true believers; fuck all the sure and certain people prepared to maim and kill whoever got in their way; fuck every cause that ended in murder and a child screaming. She turned and ran..."
(As an aside, I now have the perfect line for if I ever become an evil mastermind and someone quotes that at me: "But you see, murder and children screaming is only the beginning!")
That's kind of a good point, but I seriously doubt that the quote would be that effective in making people get it who don't already.
This seems better-suited for MoreEmotional than LessWrong.
I think this is a useful heuristic because humans are just not good at calculating this stuff. Ethical Injunctions suggests that you do in fact check with your emotions when the numbers say something novel. (This is why I'm sceptical about deciding on numbers pulled out of your arse rather than pulling the decision directly out of your arse.)
I don't think Banks even believed that, though. Several of his books certainly seem to be evidence to the contrary.
I suppose I somewhat appreciate the sentiment. I note that labelling the killing 'murder' has already amounted to significant discretion. Killings that are approved of get to be labelled something nicer sounding.
Does this pay rent in policy changes? It seems probable that existing policy positions will already determine the contexts in which we might choose to apply this quote, so the quote will only generate the appearance of additional evidential weight. In fact it will result in double-counting if we use its applicability as evidence for or against a proposal, because we already chose to invoke the quote because we disagreed with the proposal. For example: 'This imperialist intervention is wrong -- fuck every cause that ends in murder and children crying.' Is the latter clause doing any work?
(First version of this comment:
Does this pay rent in suggested policies? It feels like under all plausible interpretations, it's at best 'I'm so righteous!' and possibly other things.)
Yes. It rules out all sorts of policies, including good ones. It likely rules out murdering Hitler to prevent a war, especially if that requires killing guards in order to get to him.
Upvoted; wording was bad. Edited.
I agree entirely with your new wording. This quote seems to be the sort of claim to bring out conditionally against causes we oppose but conveniently ignore when we support the cause.
Fixed, thanks.
Harry Potter and the Confirmed Critical, Chapter 6
Can you give a link to this story? It is surprisingly difficult to find.
If you put the quote into quotation marks and search Google, it's the fifth hit.
Thank you. This was a 'duh!' moment; I hadn't realized it was the second book of the Natural 20.
It is the second book in the series Harry Potter and the Natural 20, which can be found here.
'Then he posed a question that, obvious as it seems, had not really occurred to me: “What makes you think that UFOs are a scientific problem?”
I replied with something to the effect that a problem was only scientific in the way it was approached, but he would have none of that, and he began lecturing me. First, he said, science had certain rules. For example, it has to assume that the phenomena it is observing is natural in origin rather than artificial and possibly biased. Now the UFO phenomenon could be controlled by alien beings. “If it is,” added the Major, “then the study of it doesn’t belong to science. It belongs to Intelligence.” Meaning counterespionage. And that, he pointed out, was his domain.
“Now, in the field of counterespionage, the rules are completely different.” He drew a simple diagram in my notebook. “You are a scientist. In science there is no concept of the ‘price’ of information. Suppose I gave you 95 per cent of the data concerning a phenomenon. You’re happy because you know 95 per cent of the phenomenon. Not so in intelligence. If I get 95 per cent of the data, I know that this is the ‘cheap’ part of the information. I still need the other 5 percent, but I will have to pay a much higher price to get it. You see, Hitler had 95 per cent of the information about the landing in Normandy. But he had the wrong 95 percent!”
“Are you saying that the UFO data we use to compile statistics and to find patterns with computers are useless?” I asked. “Might we be spinning our magnetic tapes endlessly discovering spurious laws?”
“It all depends on how the team on the other side thinks. If they know what they’re doing, there will be so many cutouts between you and them that you won’t have the slightest chance of tracing your way to the truth. Not by following up sightings and throwing them into a computer. They will keep feeding you the information they want you to process. What is the only source of data about the UFO phenomenon? It is the UFOs themselves!”
Some things were beginning to make a lot of sense. “If you’re right, what can I do? It seems that research on the phenomenon is hopeless, then. I might as well dump my computer into a river.”
“Not necessarily, but you should try a different approach. First you should work entirely outside of the organized UFO groups; they are infiltrated by the same official agencies they are trying to influence, and they propagate any rumour anyone wants to have circulated. In Intelligence circles, people like that are historical necessities. We call them ‘useful idiots’. When you’ve worked long enough for Uncle Sam, you know he is involved in a lot of strange things. The data these groups get is biased at the source, but they play a useful role.
“Second, you should look for the irrational, the bizarre, the elements that do not fit...Have you ever felt that you were getting close to something that didn’t seem to fit any rational pattern yet gave you a strong impression that it was significant?”'
If UFOs are controlled by a non-human intelligence, assuming they'll behave like human schemes is as pointless as assuming they'll behave like natural phenomena. But of course the premise is false and the Major's approach is correct.
A creature that can build a spaceship is probably closer to one that can build a plane than it is to a rock; at least, you have to start somewhere.
Gregory (Scotland Yard detective): “Is there any other point to which you would wish to draw my attention?”
Holmes: “To the curious incident of the dog in the night-time.”
Gregory: “The dog did nothing in the night-time.”
Holmes: “That was the curious incident.”
I like it when I hear philosophy in rap songs (or any kind of music, really) that I can actually fully agree with:
-- Vince Staples, "Versace Rap"
It's quite sad that Tupac Shakur is the focus of so many conspiracy theories, because he was quite the sceptic about wasting your time on this stuff when there was real work to do making the world better.
I always thought it was interesting that Tupac got all the conspiracy theories while Biggie got none, despite the fact that Biggie released an album called Ready to Die, died, then two weeks later released an album called Life After Death. It's probably because Tupac's music appeals more to hippie types who are into this kind of stuff.
Turkish proverb
http://pbfcomics.com/72/
Stephen Jay Gould
There was only one Ramanujan; and we are all well-aware of Gould's views on intelligence here, I presume.
In what reference class?
I chose Ramanujan as my example because mathematics is extremely meritocratic, as proven by how he went from poor/middle-class Indian on the verge of starving to England on the strength of his correspondence & papers. If there really were countless such people, we would see many many examples of starving farmers banging out some impressive proofs and achieving levels of fame somewhat comparable to Einstein; hence the reference class of peasant-Einsteins must be very small since we see so few people using sheer brainpower to become famous like Ramanujan.
(Or we could simply point out that with average IQs in the 70s and 80s, average mathematician IQs closer to 140s - or 4 standard deviations away, even in a population of billions we still would only expect a small handful of Ramanujans - consistent with the evidence. Gould, of course, being a Marxist who denies any intelligence, would not agree.)
It would naively seem that an IQ of 160 or more is 5 SDs from a mean of 85 but only 4 SDs from a mean of 100, so the rarity would be 1/3,483,046 vs 1/31,560: a huge ratio of roughly 110 times the prevalence of extreme genius between the two populations.
Except that this is not how it works when the IQ-100 population has been selected from the other and consequently has lower variance. Nor is it how the Flynn effect worked, because, of course, the standard deviation is not going to remain constant.
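The naive tail arithmetic above can be checked directly. This is a minimal sketch assuming two normal distributions with the same SD of 15 (exactly the assumption the comment then criticises), using only the standard library's complementary error function:

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# An IQ-160 threshold is 4 SDs above a mean of 100 and 5 SDs above a mean of 85,
# assuming (naively) SD = 15 in both populations.
p_mean_100 = upper_tail(4)  # ~3.2e-5, i.e. about 1 in 31,600
p_mean_85 = upper_tail(5)   # ~2.9e-7, i.e. about 1 in 3.5 million

print(f"1 in {1 / p_mean_100:,.0f} at mean 100")
print(f"1 in {1 / p_mean_85:,.0f} at mean 85")
print(f"ratio: {p_mean_100 / p_mean_85:.0f}x")
```

The exact denominators differ slightly from the figures quoted above depending on which normal-table approximation is used, but the roughly 110x ratio is robust.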
pg169-171, Kanigel's 1991 The Man Who Knew Infinity:
Personally, having finished reading the book, I think Kanigel is wrong to think there is so much contingency here. He paints a vivid picture of why Ramanujan had failed out of school, lost his scholarships, and had difficulties publishing, and why two Cambridge mathematicians might mostly ignore his letter: Ramanujan's stubborn refusal to study non-mathematical topics and refusal to provide reasonably rigorous proofs. His life could have been much easier if he had been less eccentric and prideful. That despite all his self-inflicted problems he was brought to Cambridge anyway is a testimony to how talent will out.
Isn't the average IQ 100 by definition?
Yes - but whose average?
Presumably the people who write the IQ test, based on whatever population sample they use to calibrate it. Is the point that the average IQ in India is 70-80, as opposed to the average in the US? (This could be technically true on an IQ test written in the US, without being meaningful, or it could be actually true because of nutrition or whatever). What data does the number 70-80 actually come from?
Presumably from this list.
I think it can be illustrative, as a counter to the spotlight effect, to look at the personalities of math/science outliers who come from privileged backgrounds, and imagine them being born into poverty. Oppenheimer's conjugate was jailed or executed for attempted murder, instead of being threatened with academic probation. Gödel's conjugate added a postscript to his proof warning that the British Royal Family were possible Nazi collaborators, which got it binned, which convinced him that all British mathematicians were in on the conspiracy. Newton and Turing's conjugates were murdered as teenagers on suspicion of homosexuality. I have to make these stories up because if you're poor and at all weird, flawed, or unlucky your story is rarely recorded.
A gross exaggeration; execution was never in the cards for a poisoned apple which was never eaten.
Likewise. Goedel didn't go crazy until long after he was famous, and so your conjugate is in no way showing 'privilege'.
Likewise. You have some strange Whiggish conception of history where all periods were ones where gays would be lynched; Turing would not have been lynched any more than President Buchanan would have, because so many upper-class Englishmen were notorious practicing gays and their boarding schools Sodoms and Gomorrahs. To remember the context of Turing's homosexuality conviction, this was the same period in which highly-placed gay Englishman after gay Englishman was turning out to be a Soviet mole (see the Cambridge Five and how the bisexual Kim Philby nearly became head of MI6!) EDIT: pg137-144 of the Ramanujan book I've been quoting discusses the extensive homosexuality at Cambridge and among its elite, and how tolerance of homosexuality ebbed and flowed, with the close of the Victorian age being particularly intolerant.
The right conjugate for Newton, by the way, reads 'and his heretical Christian views were discovered, he was fired from Cambridge - like his successor as Lucasian Professor - and died a martyr'.
The problem is, we have these stories. We have Ramanujan who by his own testimony was on the verge of starvation - and if that is not poor, then you are not using the word as I understand it - and we have William Shakespeare (no aristocrat he), and we have Epicurus who was a slave. There is no censorship of poor and middle-class Einsteins. And this is exactly what we would expect when we consider what it takes to be a genius like Einstein, to be gifted in multiple ways, to be far out on multiple distributions (giving us a highly skewed distribution of accomplishment, see the Lotka curve): we would expect a handful of outliers who come from populations with low means, and otherwise our lists to be dominated by outliers from populations with higher means, without any appeal to Marxian oppression or discrimination necessary.
Do you really think the existence of oppression is a figment of Marxist ideology? If being poor didn't make it harder to become a famous mathematician given innate ability, I'm not sure "poverty" would be a coherent concept. If you're poor, you don't just have to be far out on multiple distributions, you also have to be at the mean or above in several more (health, willpower, various kinds of luck). Ramanujan barely made it over the finish line before dying of malnutrition.
Even if the mean mathematical ability in Indians were innately low (I'm quite skeptical there), that would itself imply a context containing more censoring factors for any potential Einsteins...to become a mathematician, you have to, at minimum, be aware that higher math exists, that you're unusually good at it by world standards, and being a mathematician at that level is a viable way to support your family.
On your specific objections to my conjugates...I'm fairly confident that confessing to poisoning someone else's food usually gets you incarcerated, and occasionally gets you killed (think feudal society or mob-ridden areas), and is at least a career-limiting move if you don't start from a privileged position. Hardly a gross exaggeration. Goedel didn't become clinically paranoid until later, but he was always the sort of person who would thoughtlessly insult an important gatekeeper's government, which is part of what I was getting at; Ramanujan was more politic than your average mathematician. I actually was thinking of making Newton's conjugate be into Hindu mysticism instead of Christian but that seemed too elaborate.
The specific oppressions you led off with: yes.
I thought we were talking about Oppenheimer and Cambridge? It looks like if Oppenheimer hadn't had rich parents who lobbied on his behalf, he might have gotten probation instead of not. Given his instability, that might have pushed him into a self-destructive spiral, or maybe he just would have progressed a little slower through the system. So, yes, jumping from "the university is unhappy" to "the state hangs you" is a gross exaggeration. (Universities are used to graduate students being under a ton of stress, and so do cut them slack; the response to Oppenheimer of "we think you need to go on vacation, for everyone's safety" was 'normal'.)
<snark>"Oppenheimer wasn't privileged, he was only treated slightly better than the average Cambridge student."</snark>
I'm sorry, I never really rigorously defined the counter-factuals we were playing with, but the fact that Oppenheimer was in a context where attempted murder didn't sink his career is surely relevant to the overall question of whether there are Einsteins in sweatshops.
I don't see the relevance, because to me "Einsteins in sweatshops" means "Einsteins that don't make it to <Cambridge>", for some Cambridge equivalent. If Ramanujan had died three years earlier, and thus not completed his PhD, he would still be in the history books. I mean, take Galois as an example: repeatedly imprisoned for political radicalism under a monarchy, and dies in a duel at age 20. Certainly someone ruined by circumstances--and yet we still know about him and his mathematical work.
In general, these counterfactuals are useful for exhibiting your theory but not proving your theory. Either we have the same background assumptions- and so the counterfactuals look reasonable to both of us- or we disagree on background assumptions, and the counterfactual is only weakly useful at identifying where the disagreement is.
I'm perfectly happy to accept the existence of oppression, but I see no need to make up ways in which the oppression might be even more awful than one had previously thought. Isn't it enough that peasants live shorter lives, are deprived of stuff, can be abused by the wealthy, etc? Why do we need to make up additional ways in which they might be oppressed? Gould comes off here as engaging in a horns effect: not only is oppression bad in the obvious concrete well-verified ways, it's the Worst Thing In The World and so it's also oppressing Einsteins!
Not what Gould hyperbolically claimed. He didn't say that 'at the margin, there may be someone who was slightly better than your average mathematician but who failed to get tenure thanks to some lingering disadvantages from his childhood'. He claimed that there were outright historic geniuses laboring in the fields. I regard this as completely ludicrous due both to the effects of poverty & oppression on means & tails and due to the pretty effective meritocratic mechanisms in even a backwater like India.
It absolutely is. Don't confuse the fact that there are quite a few brilliant Indians in absolute numbers with a statement about the mean - with a population of ~1.3 billion people, that's just proving the point.
The talent can manifest as early as arithmetic, which is taught to a great many poor people, I am given to understand.
Really? Then I'm sure you could name three examples.
Sorry, I can only read what you wrote. If you meant he lacked tact, you shouldn't have brought up insanity.
Really? Because his mathematician peers were completely exasperated at him. What, exactly, was he politic about?
And this part seems entirely plausible. American slaves had no opportunity to become famous mathematicians unless they escaped, or chanced to have an implausibly benevolent Dumbledore of an owner.
Gould makes a much stronger claim, and I attach little probability to the part about the present day. But even there, you're ignoring one or two good points about the actions of famous mathematicians. Demanding citations for 'trying to kill people can ruin your life' seems frankly bizarre.
Wait, what are you saying here? That there aren't any Einsteins in sweatshops in part because their innate mathematical ability got stunted by malnutrition and lack of education? That seems like basically conceding the point, unless we're arguing about whether there should be a program to give a battery of genius tests to every poor adult in India.
Not all of them, I don't think. And then you have to have a talent that manifests early, have someone in your community who knows that a kid with a talent for arithmetic might have a talent for higher math, knows that a talent for higher math can lead to a way to support your family, expects that you'll be given a chance to prove yourself, gives a shit, has a way of getting you tested...
Just going off Google, here: People being incarcerated for unsuccessful attempts to poison someone: http://digitaljournal.com/article/346684 http://charlotte.news14.com/content/headlines/628564/teen-arrested-for-trying-to-poison-mother-s-coffee/ http://www.ksl.com/?nid=148&sid=85968
Person being killed for suspected unsuccessful attempt to poison someone: http://zeenews.india.com/news/bihar/man-lynched-for-trying-to-poison-hand-pump_869197.html
I was trying to elegantly combine the Incident with the Debilitating Paranoia and the Incident with the Telling The Citizenship Judge That Nazis Could Easily Take Over The United States. Clearly didn't completely come across.
He was politic enough to overcome Vast Cultural Differences enough to get somewhat integrated into an insular community. I hang out with mathematicians a lot; my stereotype of them is that they tend not to be good at that.
I don't think Epicurus was a slave. He did admit slaves to his school though, which is not something that was typical for his time. Perhaps you are referring to the Stoic, Epictetus, who definitely was a slave (although, white-collar).
Whups, you're right. Some of the Greek philosophers' names are so easy to confuse (I still confuse Xenophanes and Xenophon). Well, Epictetus was still important, if not as important as Epicurus.