TheAncientGeek comments on Self-Congratulatory Rationalism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Are ethics supposed to be Aumann-agreeable? I'm not at all sure the original proof extends that far. If it doesn't, that would cover your disagreement with Alicorn as well as a very large number of other disagreements here.
I don't think it would cover Eliezer vs. Robin, but I'm uncertain how "real" that disagreement is. If you forced both of them to come up with probability estimates for an em scenario vs. a foom scenario, then showed them both each other's estimates and put a gun to their heads and asked them whether they wanted to Aumann-update or not, I'm not sure they wouldn't agree to do so.
Even if they did, it might be consistent with their current actions: if there's a 20% chance of ems and a 20% chance of foom (plus a 60% chance of an unpredictable future, a cishuman future, or extinction), we would still need intellectuals and organizations planning specifically for each option, the same way I'm sure the Cold War-era US had different branches planning for a nuclear attack by the USSR and a non-nuclear attack by the USSR.
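The arithmetic above can be sketched as a toy check (the probabilities are the ones quoted in the comment; the planning threshold is hypothetical): even when no single scenario is more likely than not, several can simultaneously clear a "worth dedicating an organization to" cutoff.

```python
# Toy illustration of the scenario split described above. Numbers come from
# the comment itself, not from any real forecast; the threshold is made up.
scenarios = {
    "em scenario": 0.20,
    "foom scenario": 0.20,
    "other (unpredictable / cishuman / extinction)": 0.60,
}

# Sanity check: the scenarios are meant to be exhaustive.
assert abs(sum(scenarios.values()) - 1.0) < 1e-9

# Hypothetical cutoff: plan specifically for any scenario at or above 10%.
PLANNING_THRESHOLD = 0.10
worth_planning_for = [name for name, p in scenarios.items()
                      if p >= PLANNING_THRESHOLD]
print(worth_planning_for)  # all three branches clear the cutoff
```

The point of the sketch is just that "plan for X" doesn't require believing X is the most probable outcome, only that it's probable enough to be worth a dedicated branch.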
I will agree that there are some genuinely Aumann-incompatible disagreements on here, but I bet it's fewer than we think.
So I want to agree with you, but there's this big and undeniable problem we have and I'm curious how you think we should solve it if not through something resembling IQ.
You agree people need to be more charitable, at least toward out-group members. And this would presumably involve taking people whom we are tempted to dismiss, and instead not dismissing them but studying them further. But we can't do this for everyone - most people who look like crackpots are crackpots. There are very likely people out there who look like crackpots but are actually very smart (the cryonicists seem to be one group we can both agree on), and we need a way to find them so we can pay more attention to them.
We can't use our subjective feeling of is-this-guy-a-crackpot-or-not, because that's what got us into this problem in the first place. Presumably we should use the Outside View. But it's not obvious what we should be Outside Viewing on. The two most obvious candidates are "IQ" and "rationality", which when applied tend to produce IQ fetishism and in-group favoritism (since until Stanovich actually produces his rationality quotient test and gives it to everybody, being in a self-identified rationalist community and probably having read the whole long set of sequences on rationality training is one of the few proxies for rationality we've got available).
I admit both of these proxies are terrible. But they seem to be the main thing keeping us from, on the one side, auto-rejecting all arguments that don't sound subjectively plausible to us at first glance, and on the other, having to deal with every stupid creationist and homeopath who wants to bloviate at us.
There does seem to be something useful we do in this sphere. If someone with a site written in ALL CAPS and size 20 font claims that Alzheimer's is caused by a bacterium, I dismiss it without a second thought, because we all know it's a neurodegenerative disease. But when a friend with no medical training, whom I know to be smart and reasonable, recently made the same claim, I looked it up, and sure enough there's a small but respectable community of microbiologists and neuroscientists investigating whether Alzheimer's is triggered by an autoimmune response to some bacterium. It's still a long shot, but it's definitely not crackpottish. So somehow I seem to have some ability to use the source of an implausible claim to decide whether to investigate it further, and I'm not sure how to describe the basis for that decision beyond "IQ, rationality, and education".
Well, empirically I did try to investigate natural law theology based on there being a sizeable community of smart people who thought it was valuable. I couldn't find anything of use in it, but I think it was a good decision to at least double-check.
If you think people are too uncharitable in general, but also that we're selectively charitable to the in-group, is that equivalent to saying the real problem is that we're not charitable enough to the out-group? If so, what subsection of the out-group would you recommend we be more charitable towards? And if we're not supposed to select that subsection based on their intelligence, rationality, education, etc, how do we select them?
And if we're not supposed to be selective, how do we avoid spending all our time responding to total, obvious crackpots like creationists and Time Cube Guy?
Yeah, this seems like the point we're disagreeing on. Granted that all proxies will be at least mostly terrible, do you agree that we do need some characteristics that point us to people worth treating charitably? And since you don't like mine, which ones are you recommending?
Not being charitable to people isn't a problem, provided you don't mistake your lack of charity for evidence that they are stupid or irrational.