I'd like to see book reviews of books of interest to LW.  Some suggestions:

  • Dan Ariely (2010).  The Upside of Irrationality: The unexpected benefits of defying logic at work and at home.
  • Sam Harris (2010).  The Moral Landscape: How science can determine human values.
  • Dan Ariely (2009).  Predictably Irrational: The Hidden Forces That Shape Our Decisions.
  • Timothy Ferris (2010).  The Science of Liberty: Democracy, Reason, and the Laws of Nature.
  • Joel Garreau (2005).  Radical Evolution.  Book about genetic mods, intelligence enhancement, and the singularity.

ADDED:  I don't mean I'd like to see reviews in this thread.  I'd like each review to have its own thread.  In discussion or on the "new" page is up to you.

Larks:

Would people be interested in a review of Portfolios of the Poor: How the World's Poor Live on $2 a Day? It basically gives a qualitative account of how really poor people manage their incomes; useful for anyone considering philanthropy. I've just read it, and have to write a summary of it anyway.

Well, yeah!

I plan to produce a review for Less Wrong of the recent book Selfish Reasons To Have More Kids, which summarizes the statistical literature on heredity vs. parenting on life outcomes. It argues that, contrary to intuition, typical differences in parenting have only very small effects on child life outcomes (income, education, life satisfaction, personality, etc.), and that this implies that parents should spend less effort trying to affect child life outcomes and more effort making parents' and children's current lives more pleasant.

Ooh. I recently read The Black Swan, by Nassim Nicholas Taleb (originally published in 2007); I read the 2nd ed. (2010).

Thesis: the world tends to be defined by high-impact events which are very hard to predict. They happen a lot more than people give them credit for.

A few examples are trends in finance, bestsellers in publishing, and the impact of the internet. Taleb observes that people tend to have ridiculous hindsight (and other) biases with respect to these sorts of events.

The only real recommendation in the book is to categorize situations you're trying to predict into four "quadrants": binary outcomes vs. outcomes with magnitude, and normally-distributed probabilities vs. non-normally-distributed probabilities. Taleb terms the normally-distributed region "Mediocristan" and the non-normally-distributed region "Extremistan". When you have non-normal distribution, and outcomes are based on magnitude, that is the "fourth quadrant" -- where you have to beware of black swans, and know that you're going to have trouble making good predictions.

He talks about the probabilistic model in Mediocristan, where the Gaussian / normal distribution applies: it tails off exponentially, rapidly reducing the probability of outliers to minuscule levels, so it's much easier to rely on the absence of outliers. And he gives dire warnings against applying the normal distribution to the Fourth Quadrant, though many do anyway -- for example, the Black-Scholes option valuation equation models stock prices as varying according to a Gaussian distribution, and (in the 2nd ed.) Taleb makes the case that this was the reason many derivatives-based financial firms collapsed in 2008.
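The Mediocristan/Extremistan contrast is easy to see numerically. Here's a minimal simulation (my sketch, not from the book) comparing how much the single largest observation contributes to a sample total under a Gaussian versus a heavy-tailed Pareto distribution:

```python
import random

random.seed(0)
N = 100_000

# Mediocristan: Gaussian samples. Outliers beyond a few standard
# deviations are astronomically rare, so no single draw matters much.
gauss = [random.gauss(0, 1) for _ in range(N)]

# Extremistan: Pareto (power-law) samples with a heavy tail. A single
# draw can dominate the sum -- the signature of a black-swan domain.
pareto = [random.paretovariate(1.5) for _ in range(N)]

def max_share(xs):
    """Fraction of the total (absolute) mass contributed by the single
    largest observation."""
    abs_xs = [abs(x) for x in xs]
    return max(abs_xs) / sum(abs_xs)

print(f"Gaussian: largest sample is {max_share(gauss):.4%} of the total")
print(f"Pareto:   largest sample is {max_share(pareto):.4%} of the total")
```

In the Gaussian case the largest of 100,000 draws is a vanishing sliver of the total; in the Pareto case it is orders of magnitude larger, which is why averaging and normal-theory confidence intervals mislead in the Fourth Quadrant.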

It's a fun read for students of rationality because he talks a lot about empiricism and biases. One idea he emphasized is the "ludic fallacy" -- "ludic" meaning "game-like", and referring to people relying on excessively simplified models to make predictions, or thinking too much inside the box. Wikipedia has a better summary of this idea. It's something I don't see talked about much here, though I think it applies to rationality.

I agree with what you wrote; but I don't think you singled out what's going wrong in The Moral Landscape.

Sam Harris has an argument against absolute moral relativism. There really are absolute moral relativists out there, who say that any moral code is as good as any other, and no one should think poorly of Jeffrey Dahmer because he likes to murder men and screw their corpses. I think there are even people reading LW who think they think that. And Sam says, That kind of talk should not be admitted into the discussion. If you can't pass the bar of saying "hurting people is bad", then you shouldn't be allowed to help work out the social contract. We should all be able to agree that hurting people is bad.

Furthermore, the people who are hurting other people badly and systematically really do believe that hurting people is bad; they just have demonstrably false beliefs, religious or political, that cause them to think their actions are helping people in the long run. This is largely true, though I don't think Sam understands the mentality of believers as well as he thinks he does.

So, much pain and suffering could be prevented if we said, Hey, you say we need to do X because Y, so let's figure out whether Y is true... with SCIENCE!

The problem is, this is not enough material to write a book. So Harris makes his claim much more sweeping than it is. He throws people who say that different societies can have different values in with absolute relativists, to make it seem like Harris is a lone voice crying in the wilderness. He argues, erroneously, that his one simple principle is enough to build a moral code upon. When I got to the part where Harris takes the correct objection that minimizing total harm may be unjust - and instead of agreeing, argues that minimizing total harm will happily work out to be perfectly just because deep inside everyone is unselfish and wants other people to be happy - I gave up and stopped reading.

Sam Harris has an argument against absolute moral relativism. There really are absolute moral relativists out there, who say that any moral code is as good as any other, and no one should think poorly of Jeffrey Dahmer because he likes to murder men and screw their corpses.

The way you put it obscures one extremely important difference, namely that between individuals who behave in ways that could never be a general norm in a stable and functional society and societies that are functional and stable even though their norms are extremely different from ours. As far as I see, the supposed relativists who wish to excuse Jeffrey Dahmer are just a conveniently ridiculous strawman for cultural relativism that applies only to other functional and stable societies distant in space or time, which is much more difficult (if at all possible) to refute.

Now you say:

If you can't pass the bar of saying "hurting people is bad", then you shouldn't be allowed to help work out the social contract. We should all be able to agree that hurting people is bad.

But in fact, there's going to be plenty of hurting in any realistic human society. Attempts to argue in favor of an ideology because it has a vision for minimizing (or even eliminating) hurting get into all the usual problems with utilitarianism and social engineering schemes, both theoretical and practical.

But in fact, there's going to be plenty of hurting in any realistic human society. Attempts to argue in favor of an ideology because it has a vision for minimizing (or even eliminating) hurting get into all the usual problems with utilitarianism and social engineering schemes, both theoretical and practical.

This is an invalid objection. Hurting people is bad; therefore, we want to minimize hurting people. Saying "but you can't bring hurt down to zero" is an invalid objection because it is irrelevant, and a pernicious one, because people use that form of objection routinely to defend their special interests at the cost of social welfare.

Also, referring to the "usual problems with utilitarianism and social engineering" literally says that there are problems with utilitarianism and social engineering (which is true), but falsely implies that (a) utilitarianism has more or even as many problems as any other approach, and (b) that attempting to optimize for something is more like "social engineering" than other alternatives.

Saying "but you can't bring hurt down to zero" is an invalid objection because it is irrelevant, and a pernicious one, because people use that form of objection routinely to defend their special interests at the cost of social welfare.

You speak of "social welfare" as if it were an objectively measurable property of the real world. In reality, there is no such thing as an objective social welfare function, and ideologically convenient definitions of it are a dime a dozen. (And even if such a definition could be agreed upon, there is still almost unlimited leeway to argue over how it could be best maximized, since we lack central planners with godlike powers.)

If we're going to discuss reworking of the social contract, I prefer straight talk about who gets to have power and status, rather than attempts to obscure this question by talking in terms of some supposedly objective, but in fact entirely ghostlike, aggregate utilities at the level of the whole society.

Also, referring to the "usual problems with utilitarianism and social engineering" literally says that there are problems with utilitarianism and social engineering (which is true), but falsely implies that (a) utilitarianism has more or even as many problems as any other approach, and (b) that attempting to optimize for something is more like "social engineering" than other alternatives.

I'd probably word it a bit differently myself, but I think (a) and (b) are in fact true.

Saying "but you can't bring hurt down to zero" is an invalid objection because it is irrelevant, and a pernicious one, because people use that form of objection routinely to defend their special interests at the cost of social welfare.

You speak of "social welfare" as if it were an objectively measurable property of the real world. In reality, there is no such thing as an objective social welfare function, and ideologically convenient definitions of it are dime a dozen.

Note the position of "social welfare" in that sentence. It's in a subordinate clause, describing a common behavior that I use as justification for taking special exception to something you said. So it's two steps removed from what we're arguing about. The important part of my sentence is the first part, "Saying 'you can't bring hurt down to zero' is an invalid objection." "Hurting people is bad" is not very controversial. You're taking a minor, tangential subordinate clause, which is unimportant and not worth defending in this context, and replying as if you were objecting to my point.

I don't mean that you're trying to do this, but this is a classic Dark Arts technique - if your goal is to say "hurting people is bad" is controversial, you instead pick out something else in the same sentence that is controversial, and point that out.

I also didn't mean to say that you are pernicious or have ill-intent - just that the objection I was replying to is one that upsets me because it is commonly used in a Dark Arts way.

I'd probably word it a bit differently myself, but I think (a) and (b) are in fact true.

Fair enough - it implies (a) and (b), whether true or false.

I say it isn't theoretically possible for utilitarianism to have more problems than any other approach, because any other approach can be recast in a utilitarian framework, and then improved by making it handle more cases. A "non-utilitarian" approach just means an incomplete approach that leaves a mostly random set of possible cases unhandled, because it doesn't produce a complete ordering of values over possible worlds. It's like having a ruler that's missing most of the markings.

I say it isn't theoretically possible for utilitarianism to have more problems than any other approach, because any other approach can be recast in a utilitarian framework, and then improved by making it handle more cases.

"Improved" is a tricky word here. If you're discussing the position of an almighty god contemplating the universe, then yes, I agree. But when it comes to practical questions of human social order and coordination and arbitration of human interactions, the idea that such questions can be answered in practice by contemplating and maximizing some sort of universal welfare function, i.e. some global aggregate utility, is awful hubris that is guaranteed to backfire in complete disaster -- Hayek's "fatal conceit," if you will.

To a decent first approximation, you're not allowed to use the words "hubris" and "guaranteed" in the same sentence.

A fair point, but given the facts of the matter, I'd say that the qualification "guaranteed" needs to be toned down only slightly to make the utterance reasonably modest. (And since I'm writing on LW, I should perhaps be explicit that I'm not considering the hypothetical future appearance of some superhuman intelligence, but the regular human social life and organization.)

I think what's going on is you're getting annoyed by naive applications of utilitarian reasoning such as Yvain's in the offense thread, then improperly generalizing that annoyance to even sophisticated applications.

On the contrary, it is the "sophisticated" applications that annoy me the most.

I don't think it's reasonable to get annoyed by people's opinions expressed in purely intellectual debates such as those we have here, as long as they are argued politely, honestly, and intelligently. However, out there in the real world, among the people who wield power, influence, and status, there are a great many hubristic and pernicious utilitarian ideas, which are dangerous exactly because they have the public image of high status and sophistication. They go under all sorts of different monikers, and can be found in all major ideological camps (their distribution is of course not random, but let's not go there). What they all have in common is this seemingly smart, sophisticated, and scientific, but in fact spectacularly delusional, attitude that things can be planned and regulated on a society-wide (or even world-wide) scale by some supposedly scientific methods for maximizing various measures of aggregate welfare.

The most insane and dangerous of such ideas, namely the old-school economic central planning, is fortunately no longer widely popular (though a sizable part of the world had to be wrecked before its craziness finally became undeniable). The ones that are flourishing today are less destructive, at least in the short to medium run, but they are at the same time more difficult to counter, since the evidence of their failure is less obvious and easier to rationalize away. Unfortunately, here I would have to get into sensitive ideological issues to provide more concrete analysis and examples.

But in fact, there's going to be plenty of hurting in any realistic human society. Attempts to argue in favor of an ideology because it has a vision for minimizing (or even eliminating) hurting get into all the usual problems with utilitarianism and social engineering schemes, both theoretical and practical.

Isn't that pretty much the entire question of political philosophy? There's a reason politics is bad for rationality: it's basically about hurting people.

This list should be maintained and updated, or put in the Wiki. There are whole literatures I would like to see reviewed, like the top 100 cognitive science works of the 20th century. We could do a distributed review project, where each person chooses a book on the list, reads it, and posts a review.

I reviewed Jason Rosenhouse's book on The Monty Hall Problem on my blog. While the book is primarily about math, a fair bit of the book is about how the response of people to the Monty Hall problem reflects what heuristics we use to reason about probability. If there's interest I can post an expanded version of that review here.
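The counterintuitive 2/3 answer to the Monty Hall problem is easy to check empirically; here's a minimal simulation (mine, not from Rosenhouse's book):

```python
import random

random.seed(0)

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game; return the empirical win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # Host opens a door that is neither the pick nor the car.
        # (When the host has two legal doors, picking either one
        # leaves the win rates for these strategies unchanged.)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")   # close to 1/3
print(f"switch: {monty_hall(switch=True):.3f}")    # close to 2/3
```

The simulation makes the underlying heuristic failure concrete: your first pick is right 1/3 of the time, and the host's constrained reveal concentrates the remaining 2/3 of the probability on the switch door.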

I'm currently reading Ariely, so I can do a review of that as well if people in addition to Phil are interested.

Ward Farnsworth (2007).  The Legal Analyst.

My main goal in teaching my introductory Economics class is to give students a good set of mental tools for understanding the world. This semester, I had a student who already had a surprisingly good understanding of game theory and questions of knowledge and proof. As we talked after class, he mentioned that he had learned these things from a book assigned for an introductory law class. After I asked about the book, he lent it to me.

From the minute I started reading 'The Legal Analyst', I saw that it was consistently excellent. About two-thirds of it was a readable, intuitive, high-quality summary of things I already knew, and the other third was new information that I am very glad to have. After finishing the book, my professional opinion is that it is extraordinarily good. Anyone who studies it will be a much better thinker and citizen.

'The Legal Analyst' is not just a law textbook. The subtitle is A toolkit for thinking about the law. These should be reversed. The title of the book should be 'A Toolkit for Thinking' and the subtitle should be 'using examples from the legal system'. The book is an excellent overview of a lot of very important things, such as incentives, thinking at the margin, game theory, the social value of rules and standards, heuristics and biases in human thinking, and the tools of rational thinking. It has the best intuitive explanation of Bayes' Theorem I have ever seen, making this incredibly important mental tool available for everyone's use.
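The comment doesn't reproduce Farnsworth's explanation of Bayes' Theorem, but the theorem itself can be shown with the classic diagnostic-test numbers (my example, not the book's):

```python
# A test with 99% sensitivity and a 5% false-positive rate, for a
# condition affecting 1% of people. What fraction of positive tests
# indicate the condition?

prior = 0.01          # P(condition)
sensitivity = 0.99    # P(positive | condition)
false_pos = 0.05      # P(positive | no condition)

# Bayes' theorem:
#   P(condition | positive) = P(positive | condition) * P(condition)
#                             / P(positive)
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(condition | positive) = {posterior:.3f}")  # 0.167
```

Despite the test's accuracy, only about 1 in 6 positive results is a true positive, because the condition is rare; the intuitive "natural frequency" framing (of 10,000 people, 99 true positives vs. 495 false positives) is the kind of explanation the review praises.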

I am very glad that law students are reading 'The Legal Analyst'. They will be much better thinkers as a result. The existence of this book makes me more optimistic about the future of our government and legal system. If the principles outlined here become widely understood, the world will be a better place. This book should be required reading in any course that can get away with assigning it. Anyone who is responsible for writing any kind of regulation or policy, or does economic analysis, needs the information in this book.

'The Legal Analyst' is a very easy book to read, making it even better from a cost-benefit analysis standpoint. I read it a few chapters at a time, in my spare time, without any mental effort required. A great deal of high-quality research has been carefully and expertly summarized in clear, vibrant language.

Anyone who has an interest in understanding how the world works, or becoming a more rational thinker, should read 'The Legal Analyst'.

[anonymous]:

I read Ferris a while back; I can have a review up in a few days.

I've read Predictably Irrational - not sure if it's worth a full review, in all honesty. It's pretty much just a recounting of various studies that he's done, and what general irrationalities they show. Having said that, I'd be willing to get it out of the library again and do a quick write-up of the studies/biases.

Agree with this. If you've been on Less Wrong for a while there's not much new here.