Less Wrong is a community blog devoted to refining the art of human rationality.

Don't Revere The Bearer Of Good Info

77 Post author: CarlShulman 21 March 2009 11:22PM

Follow-up to: Every Cause Wants To Be A Cult, Cultish Countercultishness

One of the classic demonstrations of the Fundamental Attribution Error is the 'quiz study' of Ross, Amabile, and Steinmetz (1977). In the study, subjects were randomly assigned to either ask or answer questions in quiz-show style, and were observed by other subjects who were asked to rate them for competence/knowledge. Even knowing that the assignments were random did not prevent the raters from rating the questioners higher than the answerers. Of course, when we rate individuals highly the affect heuristic comes into play, and if we're not careful that can lead to a super-happy death spiral of reverence. Students can revere teachers or science popularizers (even devotion to Richard Dawkins can get a bit extreme at his busy web forum) simply because they only interact with them in domains where the students know less. This is certainly a problem with blogging, where the blogger chooses to post in domains of expertise.

Specifically, Eliezer's writing at Overcoming Bias has provided nice introductions to many standard concepts and arguments from philosophy, economics, and psychology: the philosophical compatibilist account of free will, utility functions, standard biases, and much more. These are great concepts, and many commenters report that they have been greatly influenced by their introductions to them at Overcoming Bias, but the psychological default will be to overrate the messenger. This danger is particularly great given his engaging writing style, and when it goes unnoted that a point is already extant in the literature and is being relayed or reinvented. To address a few cases of the latter: Gary Drescher covered much of the content of Eliezer's Overcoming Bias posts (mostly very well), from timeless physics to Newcomb's problem to quantum mechanics, in a book back in May 2006, while Eliezer's irrealist meta-ethics would be very familiar to modern philosophers like Don Loeb or Josh Greene, and isn't so far from the 18th-century philosopher David Hume.

If you're feeling a tendency toward cultish hero-worship, reading such independent prior analyses is a noncultish way to defuse it, and the history of science suggests that this procedure will be applicable to almost anyone you're tempted to revere. Wallace invented the idea of evolution through natural selection independently of Darwin, and Leibniz and Newton independently developed calculus. With respect to our other host, Hans Moravec came up with the probabilistic Simulation Argument long before Nick Bostrom became known for reinventing it (possibly with forgotten influence from reading the book, or from its influence on interlocutors). When we post here we can make an effort to find and explicitly acknowledge such influences or independent discoveries, to recognize the contributions of Rational We, as well as Me.

Even if you resist revering the messenger, a well-written piece that purports to summarize a field can leave you ignorant of your ignorance. If you only read the National Review or The Nation you will pick up a lot of political knowledge, including knowledge about the other party/ideology, at least enough to score well on political science surveys. However, that very knowledge means that missing pieces favoring the other side can be more easily ignored: someone who didn't believe the other side is made up of Evil Mutants with no reasons at all might be tempted to investigate, but ideological media can provide reasons that are plausible yet not so plausible as to be tempting to their audience. For a truth-seeker, the lesson is: beware of a speaker's explanations of that speaker's opponents.

This sort of intentional slanting and misplaced trust is less common in more academic sources, but it does occur. For instance, top philosophers of science have been caught failing to beware of Stephen Jay Gould, copying his citations and misrepresentations of work by Arthur Jensen without having read either the work in question or the more scrupulous treatments in the writings of Jensen's leading scientific opponents, the excellent James Flynn and Richard Nisbett. More often, space constraints mean that a work will spend more words and detail on the view being advanced (Near) than on those rejected (Far), and limited knowledge of the rejected views will lead to omissions. Unless you read the major alternative views to those of the one who introduced you to a field, in their proponents' own words or, even better, in neutral textbooks, you will underrate those views.

What do LW contributors recommend as the best articulations of alternative views to OB/LW majorities or received wisdom, or neutral sources to put them in context? I'll offer David Chalmers' The Conscious Mind for reductionism, this article on theistic modal realism for the theistic (not Biblical) Problem of Evil, and David Cutler's Your Money or Your Life for the average (not marginal) value of medical spending. Across the board, the Stanford Encyclopedia of Philosophy is a great neutral resource for philosophical posts.

Offline Reference:

Ross, L. D., Amabile, T. M. & Steinmetz, J. L. (1977). Social roles, social control, and biases in social-perceptual processes. Journal of Personality and Social Psychology, 35, 485-494.

Comments (61)

Comment author: Andy_McKenzie 22 March 2009 04:10:12AM 8 points [-]

Excellent post. Having just read The Adapted Mind (and, earlier, The Moral Animal), I can see where Eliezer got a lot of his stuff on evolutionary psychology from.

However, all authors must walk a thin rope between appeasing the Carl Shulmans of the world who have read everything and introducing some background for naive readers beyond telling them to simply catch up on their own. I think in general he does a good job of erring on the side of more complexity, which is what I appreciate, so of course I forgive him. :)

A niche that a good author might consider filling is actually including the numbers from the experiments they reference, i.e., the experimental scores, their standard errors, etc. It might turn off the innumerate, but I think that raw numbers and effect sizes are grossly underreported by science writers.

Comment author: CarlShulman 22 March 2009 04:38:36AM 6 points [-]

"However, all authors must walk a thin rope between appeasing the Carl Shulmans of the world who have read everything and introducing some background for naive readers beyond telling them to simply catch up on their own."

I don't need to be appeased, and I strongly endorse the project of providing that introduction. My post was about ways for readers and authors to manage some of the drawbacks of success.

I agree that sample and effect sizes are grossly under-reported, often concealing that an experiment with a sexy conclusion had almost no statistical power, or that an effect was statistically significant but too small to matter. It seems possible for this to become a general science journalism norm, but only if a word-efficient and consistent format for in-text description can be devised, like conflict-of-interest acknowledgments.

Comment author: pnrjulius 28 May 2012 11:21:49PM 0 points [-]

How about this? "The study had 27 participants, 15 women and 12 men. The difference between men and women was on average 2 points on a questionnaire ranging from 0 to 100 points." This clearly explains the (small) sample size and (weak) effect size without requiring any complicated statistics.

Comment author: Benja 16 August 2012 07:10:47PM *  3 points [-]

Actually, it doesn't tell you the effect size, since it doesn't include information about how much individuals in each group differ from each other. If the difference between the group means is 2 points and the standard deviation in each group is 5 points, that's the same effect size (in the technical Cohen sense) as if the difference is 10 points and the standard deviation is 25 points.

I think a useful way to report data like this might be a variation on, "If you chose one of the women at random and one of the men at random, the probability that the woman would have a higher score would be 53%."

Aaaand in order not to completely miss the point of the original article, ETA: I'm not sure how much of that suggestion is my own thinking, but I was certainly influenced by reading about the binomial effect size display which solves a related problem in a similar way, and after I had the idea myself I came across something very similar in Rebecca Jordan-Young's Brainstorm (p.52, and endnote 4 on p.299): mental rotation ability is "considered to be the largest and most reliable gender difference in cognitive ability"; using data from a meta-analysis, she notes that if you tried to guess someone's gender based on their score in a mental rotation test, using your best strategy you'd get it right 60% of the time. (I checked that math a while ago and got the same result, assuming normal distributions with equal variances in each group, with Cohen's d=.56; the meta-analysis is Voyer, Voyer & Bryden, 1995.)

It's annoying that IIRC, "guess the gender" and "in a random pair, who has the higher score" don't give the same number, though. Average readers will probably just see a percentage in each case and derive some measure of affect from the number, whichever interpretation you give.
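The two interpretations discussed above really do come apart. A small sketch of my own (not from the thread), assuming, as Benja does, normal distributions with equal variances in each group:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_superiority(d):
    """P(a random member of the higher-scoring group outscores a random
    member of the other group). The difference of two independent
    Normal(., sigma) draws has SD sigma*sqrt(2), so this is Phi(d/sqrt(2))."""
    return phi(d / sqrt(2.0))

def guess_accuracy(d):
    """Best accuracy guessing group membership from a single score, with
    equal group sizes: threshold at the midpoint of the two means, giving
    accuracy Phi(d/2)."""
    return phi(d / 2.0)

d = 0.56  # Cohen's d for mental rotation (Voyer, Voyer & Bryden, 1995), as cited above
print(prob_superiority(d))  # ~0.65: "who scores higher in a random pair"
print(guess_accuracy(d))    # ~0.61: "guess the gender", near the 60% figure above
```

So for the same d = .56, the "random pair" framing yields about 65% while the "guess the gender" framing yields about 61%, which is the annoyance noted above: each is a percentage, but they answer different questions.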

Comment author: gjm 22 March 2009 10:40:34PM 7 points [-]

The article on theistic modal realism is ingenious. (One-sentence summary: God's options when creating should be thought of as ensembles of worlds, and most likely he'd create every world that's worth creating, so the mere fact that ours is far from optimal isn't strong evidence that it didn't arise by divine creation.)

I don't find the TMR hypothesis terribly plausible in itself -- my own intuitions about what a supremely good and powerful being would do don't match Kraay's -- but of course a proponent of TMR could always just reject my intuitions as I'd reject theirs.

However, I think the TMR hypothesis should be strongly rejected on empirical grounds.

  1. It is notable -- and this is one element of a typical instance of the Argument From Evil -- that our world appears to be governed by a bunch of very strict laws, which it obeys with great precision in ways that make substantial divine intervention almost impossible. It seems that there are many many many more possible worlds in which this property fails than in which it holds, simply because the more scope there is for intervention the more ways there are for things to happen. Therefore, unless the sort of lawlikeness we observe is so extraordinarily valuable that tiny changes in it make a world far less likely to be worth creating, we should expect that "most" worlds in the TMR ensemble would be much less lawlike than ours: e.g., we might expect prayers to be commonly answered in clearly detectable ways. So how come we're in such an atypical world?

  2. Generalizing: I think we should expect that for most measures of goodness X, worlds with higher values of X should be dramatically more numerous in the TMR ensemble unless increasing X reduces the number/measure of possible worlds much more drastically than for most other choices of X. (Because when you increase X, you get the chance to reduce Y or Z or ... a bit. More choices.) Therefore, we should expect that for measures of goodness X where "better" doesn't imply "much more constrained" most worlds (hence, in particular, ours, with high probability) should have values of X that are close to optimal, or at least far from marginally acceptable. This doesn't seem to be true.

It seems to me that counter-arguments to these are likely to be basically the same as counter-arguments to the original argument from evil.

The other thing about TMR is that it undermines any version of theism that expects God to behave as if he cares about us. If TMR is right, then any time God has the option of doing something to make your life better, he forks the universe vastly many ways and tries out every possible option (including lots of ways of doing nothing, and even ways of deliberately making things worse for you) apart from ones that make the whole universe not worthwhile. As mentioned above, it seems to me that this should make us expect visible divine intervention to be pretty common, but in any case it's not terribly inspiring. A bit like having a "friend" who, any time she interacts with you, rolls dice and chooses a random way of behaving subject only to the constraint that it doesn't cause the extinction of all human life. Similarly, you've got no reason to trust any alleged divine revelation unless its wrongness would be so awful as to make the world not worth creating. (These arguments are again closely parallel to ones that come up with the ordinary argument from evil, in response to responses that basically take the form of radical skepticism.)

Comment author: Alexei 06 August 2010 10:02:04PM 2 points [-]

Granted that god exists, cares about us, and can change the world, even in tiny aspects, it's very likely god will use those small aspects as a base to create the perfect world (kind of like AI FOOM). It follows that any world where god has some minimum level of control will converge to the perfect world. Given that we are not in the perfect world, we can conclude that god does not have even that minimum level of control.

Comment author: bentarm 22 March 2009 10:33:34PM 5 points [-]

Is the idea here to counsel us against some sort of halo effect? Eliezer Yudkowsky has told me a lot of interesting things about heuristics and biases, and about how intelligence works, but I shouldn't let this affect my judgement too much if he recommends a movie?

Or is it more than that - just that I should be careful when reading anything by Eliezer, and take into account the fact that I'm probably slightly too inclined to trust it, because I've liked what came before? Because then of course, we have the issue that I should be more likely to trust an author who is usually right - and this just says that I should be careful not to trust them too much more.

Comment author: ciphergoth 22 March 2009 11:21:25PM 8 points [-]

For much of what EY is setting out, trust isn't an appropriate relationship to have with it. You trust that he's not misrepresenting the research or his knowledge of it, and you have a certain confidence that it will be interesting, so if an article doesn't seem rewarding at first you're more likely to put work in to squeeze the goodness out. But most of it is about making an argument for something, so the caution is not to trust it at all but to properly evaluate its merits. To trust it would be to fail to understand it.

Comment author: CarlShulman 23 March 2009 12:37:25AM 4 points [-]

"Because then of course, we have the issue that I should be more likely to trust an author who is usually right - and this just says that I should be careful not to trust them too much more."

Right.

Comment author: AnnaSalamon 22 March 2009 01:19:50AM *  9 points [-]

How much non-Eliezer stuff is there on the practical "how to" of rationality, e.g. techniques for improving one's accuracy in the manner of Something to protect, Leave a line of retreat, The bottom line, and taking care not to rehearse the evidence?

EDIT: Sorry to add to the comment after Carl's response. I had the above list in there already, but omitted an "http://", which caused the second half of my sentence to somehow not show up.

Comment author: CarlShulman 22 March 2009 01:23:31AM 6 points [-]

It's balkanized, but a lot of psychologists have written on overcoming the biases they study, e.g. the Implicit Attitude Test researchers suggesting that people keep pictures of outstanding individuals from groups for which they have a negative IAT around, or Cialdini talking about how to avoid being influenced by the social psychology effects he discusses in Influence.

Comment author: AnnaSalamon 22 March 2009 01:29:41AM *  5 points [-]

Okay, yes, I've read some of that. But how much of rationality were you practicing before you ran into Eliezer's work? And where did you learn it? Also, are there other attempts at general textbooks?

Double also, are there sources of rationality "how to" content you'd recommend whose content I probably haven't learned from Eliezer's posts, besides the academic heuristics and biases literature?

Comment author: CarlShulman 22 March 2009 01:37:35AM 8 points [-]

I read decision theory, game theory, economics, evolutionary biology, epistemology, and psychology (including the heuristics and biases program), then tried to apply them to everyday life.

I'm not aware of any general rationality textbooks or how-to guides, although there are sometimes sections discussing elements in guides for other things. There are pop science books on rationality research, like Dan Ariely's Predictably Irrational, but they're rarely 'how-to' focused to the same extent as OB/LW.

Comment author: ciphergoth 22 March 2009 11:52:25AM 1 point [-]

Yes - where else is there an attempt to put together a coherent and in some sense complete account of what rationality is and how to reach it?

Comment author: Eliezer_Yudkowsky 15 September 2012 02:18:33PM 0 points [-]

There's Robyn Dawes's Rational Choice in an Uncertain World which is highly similar in spirit, intent, and style to the kind of defeat-the-bias writing in the Sequences. (And it's quoted accordingly when I borrow something.)

Comment author: Eliezer_Yudkowsky 21 March 2009 11:40:41PM 13 points [-]

I was actually quite amazed to find how far Gary Drescher had gotten, when someone referred me to him as a similar writer - I actually went so far as to finish my free will stuff before reading his book (am actually still reading) because after reading the introduction, I decided that it was important for the two of us to write independently and then combine independent components. Still ended up with quite a lot of overlap!

But I think it probably is unfair to judge Drescher as being at all representative of what ordinary philosophers can do. Drescher is an AI guy. And the comments on the back of his book seem to indicate that he was writing in a mode that philosophical readers found startling and new.

Drescher is not alternative mainstream philosophy. Drescher is alternative Yudkowsky.

I've referred to Drescher and SEP a few times. The main reason I don't refer more to conventional philosophy is that it doesn't seem very good as a field at distinguishing good ideas from bad ones. If you can't filter your good ideas and present them in a non-needlessly-complicated fashion, is there much point in pointing a reader to it?

But I've taken into account that Greene was able to rescue Roko where I could not, and I've promoted him on my list of things to read.

Comment author: CarlShulman 21 March 2009 11:55:43PM *  14 points [-]

"But I think it probably is unfair to judge Drescher as being at all representative of what ordinary philosophers can do."

I agree, and I didn't do so (I used Dennett-type compatibilism in my list of representative views that you conveyed). Even when you do something exceptionally good independently, it can help to defuse affective death spirals to make clear that it's not quite unique.

"If you can't filter your good ideas and present them in a non-needlessly-complicated fashion, is there much point in pointing a reader to it?"

This is an authorial point of view. Readers need heuristics to confirm that this is what is going on, and not something less desirable, for particular authors and topics. If they can randomly check some of your claims against the leading rival views and see that the latter are weak, that's useful.

Comment author: pjeby 22 March 2009 12:25:51AM 9 points [-]

I don't think I agree with your conclusion. It seems to assume that ideas are somehow representation-independent -- and in practical programming as well as practical psychology, that idea is a non-starter.

Or to put it another way, someone who can state a point more eloquently than its originator knows something that its originator does not. Sure, the communicator shouldn't get all the credit... but more than a little is due.

Comment author: wedrifid 09 August 2010 02:31:49AM 3 points [-]

Or to put it another way, someone who can state a point more eloquently than its originator knows something that its originator does not.

It could be 'How to write eloquently in a context independent manner'.

Comment author: army1987 21 April 2012 09:07:02AM 2 points [-]

Cf. Feynman's remark that since he wasn't able to explain the spin-statistics theorem at a freshman level, he hadn't actually fully understood it himself.

Comment author: Eliezer_Yudkowsky 21 March 2009 11:57:30PM 7 points [-]

EDIT: I agree with your conclusion, but...

(Checks Don Loeb reference.)

While, unsurprisingly, we end up adding to the same normality, I would not say that these folks have the same metaethics I do. Certainly Greene's paper title "The Terrible, Horrible, No Good, Very Bad Truth About Morality" was enough to tell me that he probably didn't have exactly the same metaethics and interpretation I did. I would not feel at all comfortable describing myself as a "moral irrealist" on the basis of what I've seen so far.

Drescher one-boxes on Newcomb's Problem, but doesn't seem to have invented quite the same decision theory I have.

I don't think Nick ever claimed to have invented the Simulation Argument - he would probably be quite willing to credit Moravec.

On many other things, I have tried to use standard terminology where I actually agree with standard theories, and provide a reference or two. Where I am knowingly being just a messenger, I do usually try to convey that. But you may be reading too much into certain similarities that also have important points of difference or further development.

EDIT2: I occasionally notice the problem you point to, and write a blog post telling people to read more textbooks. Probably this is not enough. I'll try to reach a higher standard in any canonicalized versions.

Comment author: thomblake 02 April 2009 02:12:49PM 14 points [-]

I think the biggest issue here is your tendency to not cite sources other than yourself, which is an immediate turn-off to academics. To an academic, it suggests the following questions (amongst others): If your ideas are so good, why hasn't anyone else thought of them? Doesn't anyone else have an opinion on this - do you have a response to their arguments? Are you actually doing work in your field without having read enough to cite those who agree or disagree with you?

(I know this isn't a new issue, but it seems it bears repeating.)

Comment author: wedrifid 09 August 2010 02:36:39AM 5 points [-]

Other questions that are implicitly asked:

  • Why are you not signalling in group status?
  • Why are you not signalling alliance with me or my allies by inventing excuses to refer to us?
  • Are you an outsider trying to claim our territory in cognitive space?
  • Are you talking about topics that are reserved for those with higher status in our group than we assign you?
Comment author: steven0461 22 March 2009 01:06:48AM *  5 points [-]

I took a look at Greene's dissertation when Roko started pushing it, but I don't think Greene's views are much like Eliezer's at all. Specifically he doesn't seem to emphasize what Eliezer calls the "subjectively objective" quality of morality, or the fact that people may be mistaken as to what their morality says. Correct me if I'm wrong.

I agree with the rest of the original post.

Comment author: CarlShulman 22 March 2009 01:41:43AM 2 points [-]

I agree about the difference of emphasis but I don't think they have a major substantive disagreement on those issues. You can check with Owain Evans, who knows him.

Comment author: CarlShulman 22 March 2009 12:04:28AM *  3 points [-]

Greene doesn't really think it's horrible, just that people mistakenly think it's horrible and recoil from irrealism about XML 'rightness tags' on actions because they think it would mean that they should start robbing and murdering. Nick does acknowledge Moravec on his website now, after being informed about it (he wasn't aware before that).

Perhaps I shouldn't have covered both being a messenger and acknowledgment of related independent work in the same post.

Comment deleted 22 March 2009 12:26:04AM [-]
Comment author: Eliezer_Yudkowsky 22 March 2009 12:37:36AM 2 points [-]

In this case, I'd actually say email me first with a quickie description.

Comment author: CarlShulman 22 March 2009 12:46:11AM 7 points [-]

Roko exaggerates. It's only 377 pages and written in an accessible style.

It summarizes the ethical literature on moral realism, and takes the irrealist view that XML tags on actions don't exist, and that even if they did exist we wouldn't care about them. It then goes into the psychology literature (Greene does experimental philosophy, e.g. finding that people misinterpret utility as having diminishing marginal utility in contravention to experimental instructions), e.g. Haidt's work on social intuitionism, to explain why it is that we think there are these moral properties 'out there' when there aren't any. Lastly, he argues that we can get on with pursuing our concerns (reasoning about conflicts between our concerns, implications, instrumental questions, etc), but suggests that awareness of the absence of XML tags can help us to better understand and deal with those with differing moral views.

Comment author: Eliezer_Yudkowsky 22 March 2009 01:46:49AM 3 points [-]

people misinterpret utility as having diminishing marginal utility in contravention to experimental instructions

This explains a LOT.

Comment author: CarlShulman 22 March 2009 04:13:48AM 5 points [-]
Comment author: PhilGoetz 22 March 2009 01:17:47AM 0 points [-]

XML tags on actions don't exist, and that even if they did exist we wouldn't care about them.

?

Comment author: Vladimir_Nesov 22 March 2009 01:26:52AM *  6 points [-]

There is no objective truth about which actions are the right ones, no valuation inherent in the actions themselves. And even if there were, even if you could build a right-a-meter and check which actions are good, you wouldn't care about what it says, since it's still you that draws the judgment.

Comment author: MichaelAnissimov 14 May 2009 10:00:48PM 2 points [-]

I recently read Greene's essay and I thought it was a nice buttressing of ideas that I was originally exposed to in 2001 while reading "Beyond anthropomorphism". The challenge with Eliezer's earlier writing is that it is too injected with future shock to be comfortable for most non-transhumanists to read. The challenge with Eliezer's more recent writing is that it is too long for a blog format and much more suited for a book, which forces people to focus on the one thing.

The title of Greene's thesis is tongue-in-cheek. Based on my understanding of Eliezer's conception of morality, I would definitely call him an irrealist.

Comment author: MagnetoHydroDynamics 27 May 2012 07:30:24PM 0 points [-]

Based on my understanding of Eliezer's conception of morality, I would definitely call him irrealist.

Well, inasmuch as mathematicians are irrealists.

Comment deleted 22 March 2009 12:48:10AM *  [-]
Comment author: Eliezer_Yudkowsky 22 March 2009 03:56:38AM 14 points [-]

No, Enlightenment 2.0 requires rationalist task forces as tightly-knit, dedicated, and fast-responding as religious task forces, better coordinated and better targeted, maybe even more strongly motivated, to do every good thing that religion ever did and more.

IMHO.

Comment author: haig 22 March 2009 10:51:55AM *  3 points [-]

I like EY's writings, but don't hold them up as gospel. For instance, I think this guy's summary of Bayes' Theorem (http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem) is much more readable and succinct than EY's much longer essay (http://yudkowsky.net/rational/bayes).

Comment author: Nominull 21 March 2009 11:33:11PM *  3 points [-]

It's really helpful to have good info borne to me, though, in a readable and engaging fashion. For some reason I never wound up reading the Stanford Encyclopedia of Philosophy, but I did read Eliezer's philosophical zombie movie script.

Comment author: CarlShulman 21 March 2009 11:42:31PM 0 points [-]

You can appreciate the communication of the info, just don't blur your valuation of the info itself with your valuations of the communication and communicator.

Comment author: billswift 22 March 2009 12:14:13AM *  1 point [-]

Just a brief mention, since we're supposed to avoid AI for a while, but it is too relevant to this post to totally ignore: I just finished J. Storrs Hall's "Beyond AI"; the overlap and differences with Eliezer's FAI are very interesting, and it is a very readable book.

EDIT: You all might notice I did write "overlap and differences"; I noticed the differences, but I do think they are interesting; not least because they seem similar to some of Robin's criticisms of Eliezer's FAI.

Comment author: MichaelGR 22 March 2009 07:16:18PM 2 points [-]

I've read it too, but made the mistake of reading it right after Godel, Escher, Bach. Hard to compare.

What surprised me most was how much of what was written in a book published in 2007 was more or less the same as in a book published in 1979. I expected more new promising developments since then, and that was a bit of a downer.

Comment author: Eliezer_Yudkowsky 22 March 2009 12:24:11AM 1 point [-]

I think I see a lot more difference between my own work and others' work than some of my readers may.

Comment author: PhilGoetz 22 March 2009 12:37:18AM *  5 points [-]

I think that's inevitable, if for no other reason than that someone reading two treatments of one subject that they don't completely understand is likely to interpret them in a correlated way. They may make similar assumptions in both cases; or they may understand the one they read first, and try to interpret the one they read second in a similar way.

Comment author: CarlShulman 22 March 2009 12:38:04AM 2 points [-]

Hall gives a passable history of AI, acts as a messenger for a lot of standard AI ideas, including the Dennett compatibilist account of free will and some criticisms of nonreductionist accounts of consciousness, and acts as a messenger for a stew of social science ideas, e.g. social capital and transparent motivations, although the applicability of the latter is often questionable. Those sections aren't bad.

It's only when he gets to considering the dynamics of powerful intelligences and offers up original ideas that he makes glaring errors. Since that's your specialty, those mistakes stand out as horribly egregious, while casual readers might miss them or think them outweighed by the other sections of the book.

I see differences between you and Drescher, or you and Greene, both in substance (e.g. some clear errors in Drescher's book when he discusses the ethical value of rock-minds, neglecting the possibility that happy experiences of others could figure in our utility functions directly, rather than only through game theoretic interactions with powerful agents) and in presentation/formalization/frameworks.

We could try to quantify percentage overlap in views on specific questions.

Comment author: roland 21 March 2009 11:47:36PM 1 point [-]

I think you have some great points here.

Comment author: JulianMorrison 22 March 2009 11:59:23AM 0 points [-]

That pointer to Gary Drescher is much appreciated. Eliezer's explanations about determinism and QM make me feel "aha, now it's obvious, how could it be any other way", but I hate single-sourcing knowledge.

Comment author: gwern 07 January 2013 04:00:15PM 1 point [-]

Comment author: Liron 22 March 2009 06:47:30AM 1 point [-]

The "textbooks" link is broken.

Comment author: CarlShulman 22 March 2009 06:55:19AM 1 point [-]

Fixed.

Comment author: Entraya 20 February 2014 05:52:07PM -2 points [-]

The reason i love Elizier is how many people he must have attracted this art of rationality, and that without him and this site i wouldn't even know where to begin or where to look, and how he is one of surprisingly few people able to convey the information in such tasty little bits. He may not be the smartest in his field, and may 'just' be passing on things he learned from others, but his work is super valuable, for he does what the others don't. Also, Methods of Rationality happens to be on my top 3 list of Greatest Pieces of Writing IMO, so that adds a bit. I just have to separate and clear the lines between the two kinds of reverence, which can be surprisingly difficult if you don't pay attention to it which this post reminded me of, so thanks

Comment author: Lumifer 20 February 2014 07:13:44PM 2 points [-]

The reason i love Elizier

Do you love him enough to spell his name right..?

Comment author: polymathwannabe 20 February 2014 08:13:37PM *  0 points [-]

I Googled "Elizier Yudkowsky" and the first suggestion was an OKCupid profile. Talk about loving him!

Comment author: Vulture 20 February 2014 09:26:09PM -1 points [-]

I might have found your comment witty if you hadn't also downvoted. Don't be a jerk.

(My apologies if it wasn't your downvote. Obviously I'm pretty confident, though)

Comment author: Lumifer 20 February 2014 09:56:48PM 1 point [-]

Sigh. I very rarely downvote any comments in the subthreads I participate in. I did not vote, up or down, on any comment in this subthread.

Want to recalibrate your confidence? :-P

Comment author: Vulture 20 February 2014 09:59:01PM *  0 points [-]

Gladly. In retrospect, my comment was obnoxious even if it had been right. In the future I'll try to realize this without being wrong first. edit: Out of curiosity, why do you usually not vote in threads you're participating in?

Comment author: Lumifer 21 February 2014 07:35:24AM 4 points [-]

why do you usually not vote in threads you're participating in?

If I am already talking to people, I can explain my likes and dislikes in words -- without using the crude tool of votes. For me, comments and votes are two alternate ways of expressing my attitude; it is rare that I want to use both.

Besides, it feels more "proper", in the vaguely ethical way, to not up- or down-vote people with whom I am conversing. Not that I think it should be a universal rule, that's just a quirk of mine.

Comment author: Vaniver 21 February 2014 09:39:39PM 0 points [-]

Besides, it feels more "proper", in the vaguely ethical way, to not up- or down-vote people with whom I am conversing. Not that I think it should be a universal rule, that's just a quirk of mine.

I have no qualms about upvoting people that I'm responding to or who have responded to me, but I have a much higher threshold for downvoting responses to my comments and posts, both to try to compensate for the human tendency to get defensive and to increase the probability the conversation is pleasant.

Comment author: TheOtherDave 22 February 2014 12:42:42AM 1 point [-]

Agreed with all of this, but the last bit makes me curious... does downvoting someone who is involved in an exchange with a third party decrease the probability that the conversation is pleasant for the two of them?

Comment author: Vulture 22 February 2014 02:33:00AM 0 points [-]

Well, being downvoted, especially when it puts one in the negatives, stirs up bad feelings that might make someone less likely to behave pleasantly in a conversation.

Comment author: Document 01 September 2013 11:54:30PM *  0 points [-]

The Don Loeb and theistic modal realism links are broken. Also, the Stanford Encyclopedia of Philosophy link seems to "point" to a passage from another LW post rather than a URL.

Comment author: MagnetoHydroDynamics 27 May 2012 07:28:59PM 0 points [-]

I have never really regarded EY as anything other than the guy who wrote a bunch of good ideas in one place. The ideas are good on their own merits, and after being made aware that Quine(?) originated that "Philosophy = Psychology" idea, I have had a healthy sense of 'he's right, but probably not original.' And really, who cares? He is right, but don't shoot the messenger; ad hominem is still ad hominem even if it is positive. Empty agreements are as bad as empty dismissals.

Isn't this intuitively obvious? Or am I just very, very rational?

Comment author: Konkvistador 05 October 2011 07:10:24PM 0 points [-]