I was recently re-reading a piece by Yvain/Scott Alexander called Epistemic Learned Helplessness. It's a very insightful post, as is typical for Scott, and I recommend giving it a read if you haven't already. In it he writes:
When I was young I used to read pseudohistory books; Immanuel Velikovsky's Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.
And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn't believe I had ever been so dumb as to believe Velikovsky.
And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.
And so on for several more iterations, until the labyrinth of doubt seemed inescapable.
He goes on to conclude that the skill of taking ideas seriously - often considered one of the most important traits a rationalist can have - is a dangerous one. After all, it's very easy for arguments to sound convincing even when they're not, and if you're too easily swayed by argument you can end up with some very absurd beliefs (like that Venus is a comet, say).
This post really resonated with me. I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth. And my reaction in those situations was similar to his: eventually, after going through the endless chain of rebuttals and counter-rebuttals, changing my mind at each turn, I was forced to throw up my hands and admit that I probably wasn't going to be able to determine the truth of the matter - at least, not without spending a lot more time investigating the different claims than I was willing to. And so in many cases I ended up adopting a sort of semi-principled stance of agnosticism: unless it was a really really important question (in which case I was sort of obligated to do the hard work of investigating the matter to actually figure out the truth), I would just say I don't know when asked for my opinion.
[Non-exhaustive list of areas in which I am currently epistemically helpless: geopolitics (in particular the Israel/Palestine situation), anthropics, nutrition science, population ethics]
All of which is to say: I think Scott is basically right here; in many cases we shouldn't have too strong of an opinion on complicated matters. But when I re-read the piece recently I was struck by the fact that his whole argument could be summed up much more succinctly (albeit less charitably) as:
"Don't be gullible."
Huh. Sounds a lot more obvious that way.
Now, don't get me wrong: this is still good advice. I think people should endeavour to not be gullible if at all possible. But it makes you wonder: why did Scott feel the need to write a post denouncing gullibility? After all, most people kind of already think being gullible is bad - who exactly is he arguing against here?
Well, recall that he wrote the post in response to the notion that people should believe arguments and take ideas seriously. These sound like good, LW-approved ideas, but note that unless you're already exceptionally smart or exceptionally well-informed, believing arguments and taking ideas seriously is tantamount to...well, to being gullible. In fact, you could probably think of gullibility as a kind of extreme and pathological form of lightness; a willingness to be swept away by the winds of evidence, no matter how strong (or weak) they may be.
There seems to be some tension here. On the one hand we have an intuitive belief that gullibility is bad; that the proper response to any new claim should be skepticism. But on the other hand we also have some epistemic norms here at LW that are - well, maybe they don't endorse being gullible, but they don't exactly not endorse it either. I'd say the LW memeplex is at least mildly friendly towards the notion that one should believe conclusions that come from convincing-sounding arguments, even if they seem absurd. A core tenet of LW is that we change our mind too little, not too much, and we're certainly all in favour of lightness as a virtue.
Anyway, I thought about this tension for a while and came to the conclusion that I had probably just lost sight of my purpose. The goal of (epistemic) rationality isn't to not be gullible or not be skeptical - the goal is to form correct beliefs, full stop. Terms like gullibility and skepticism are useful to the extent that people tend to be systematically overly accepting or dismissive of new arguments - individual beliefs themselves are simply either right or wrong. So, for example, if we do studies and find out that people tend to accept new ideas too easily on average, then we can write posts explaining why we should all be less gullible, and give tips on how to accomplish this. And if on the other hand it turns out that people actually accept far too few new ideas on average, then we can start talking about how we're all much too skeptical and how we can combat that. But in the end, in terms of becoming less wrong, there's no sense in which gullibility would be intrinsically better or worse than skepticism - they're both just words we use to describe deviations from the ideal, which is accepting only true ideas and rejecting only false ones.
This answer basically wrapped the matter up to my satisfaction, and resolved the sense of tension I was feeling. But afterwards I was left with an additional interesting thought: might gullibility be, if not a desirable end point, then an easier starting point on the path to rationality?
That is: no one should aspire to be gullible, obviously. That would be aspiring towards imperfection. But if you were setting out on a journey to become more rational, and you were forced to choose between starting off too gullible or too skeptical, could gullibility be an easier initial condition?
I think it might be. It strikes me that if you start off too gullible you begin with an important skill: you already know how to change your mind. In fact, changing your mind is in some ways your default setting if you're gullible. And considering that like half the freakin sequences were devoted to learning how to actually change your mind, starting off with some practice in that department could be a very good thing.
I consider myself to be...well, maybe not more gullible than average in absolute terms - I don't get sucked into pyramid schemes or send money to Nigerian princes or anything like that. But I'm probably more gullible than average for my intelligence level. There's an old discussion post I wrote a few years back that serves as a perfect demonstration of this (I won't link to it out of embarrassment, but I'm sure you could find it if you looked). And again, this isn't a good thing - to the extent that I'm overly gullible, I aspire to become less gullible (Tsuyoku Naritai!). I'm not trying to excuse any of my past behaviour. But when I look back on my still-ongoing journey towards rationality, I can see that my ability to abandon old ideas at the (relative) drop of a hat has been tremendously useful so far, and I attribute that ability in part to years of practice at...well, at believing things that people told me, and sometimes gullibly believing things that people told me. Call it epistemic deference, or something - the tacit belief that other people know better than you (especially if they're speaking confidently) and that you should listen to them. It's certainly not a character trait you'll want to keep as a rationalist, and I'm still trying to do what I can to get rid of it - but as a starting point? You could do worse, I think.
Now, I don't pretend that the above is anything more than a plausibility argument, and maybe not a strong one at that. For one, I'm not sure how well this idea carves reality at its joints - after all, gullibility isn't quite the same thing as lightness, even if they're closely related. For another, if the above were true, you would probably expect LWers to be more gullible than average. But that doesn't seem quite right - while LW is admirably willing to engage with new ideas, no matter how absurd they might seem, the default attitude towards a new idea on this site is still one of intense skepticism. Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way - but it doesn't really sound like the behaviour of a website full of gullible people.
(Of course, on the other hand, it could be that LWers really are more gullible than average, but they're just smart enough to compensate for it.)
Anyway, I'm not sure what to make of this idea, but it seemed interesting and worth a discussion post at least. I'm curious to hear what people think: does any of the above ring true to you? How helpful do you think gullibility is, if it is at all? Can you be "light" without being gullible? And for the sake of collecting information: do you consider yourself to be more or less gullible than average for someone of your intelligence level?
This IMHO works in every culture, including Anglo ones; you just have to ignore the party b...es and go for the intelligent and non-crazy. Usually it means training yourself not to be too focused on cover-girl looks and to be okay with stuff like no makeup. As a theoretical example, consider how you would pick up Megan McArdle - she writes, sounds and looks a lot like my past girlfriends, and Suderman looks and sounds broadly like the same kind of guy I am. This is just a hunch, though.
However, I fully agree that my dating experience in the UK was worse than in Germany, Austria, Hungary, Slovakia or Serbia. (I lived in some of those places and went to all kinds of meditation camps in the others.) And perhaps it would be worse in the US too. This is largely because I can tolerate things like no make-up, no heels, body hair etc., but I cannot really deal with obesity, and that means playing in a shrinking and increasingly competitive market. Yet, on the whole, my UK experience was not so bad either. At speed-dating events in Birmingham, there was always a non-fat, intelligent, friendly, considerate 15-20%.
This is the simple, basic Kantian thinking that got deeply incorporated into the cultural DNA of the West centuries ago, which is why I don't understand what there is not to understand about it. It is about treating people primarily as ends and only secondarily, and cautiously, as means. It is about understanding that humans have a faculty of reason and thus autonomy. What follows from this? First, autonomy means people can decide to be different from each other, so be really cautious with generalizations and stereotypes - perhaps cultural ones are still okay, because socialization is a powerful thing, but gender is not a culture. Second, and more importantly, the ends-not-means idea means not seeing sex as a prize to be won by active, driven men, with women just passively handing it out as a reward for the effort, but as a mutually initiated, mutually desired interaction between two autonomous beings with their own desires. It would be useful to read a bit around the Pervocracy blog about this.
Objectification is not necessarily sexual, and it is really an old idea, not some latter-day SJW fashion. It is treating people as means. Marx argued that in a 19th-century factory the proletarian is objectified into being treated like a human machine. This may or may not be true, but it is an example of the idea. Or if you look at how people realized that maybe slavery is not such a good idea, a large part of that was this old Kantian idea that a human should not use another human as a mere tool, without regard to the will of the other human. Rather, if we want people to work for us, we should negotiate a price with them on an equal level, acquire consent, and make sure both parties get their will satisfied in the transaction. This is the same idea. But objectification is gradual, not a binary switch - one could argue that employment in a hierarchical business is still more objectifying than being an entrepreneur.
An object is simply something that does not have its own goals; it is the object of other people's desires, or a tool with which they achieve other desires. If you understand what being a person - what personhood - means, then objectification is just a denial of it.
I must stress that this is not some kind of far-left ideology; it is something a traditional gentleman from 1900 would understand. Personhood is a through-and-through traditional Christian idea, one of the central concepts of Christian philosophy: https://en.wikipedia.org/wiki/Personhood#Christianity and objectification is just whatever denies it. https://en.wikipedia.org/wiki/Objectification
Similarly, I would not say objectifying people is a traditional, conservative thing. Just because feminists fight it does not mean it is so - reversed stupidity is not intelligence, and reversed progressivism is not traditionalism. If you look up Roger Scruton's Right-Hegelian philosophy of sex, it is very decently non-objectifying.
I would say objectification is largely a modern phenomenon, a phenomenon of an age in which machines and processes are so predominant that we tend to see people as being like them too, and the essence of personhood - intellect and will - gets ignored.
I would also say mass gunpowder armies played an important role in objectifying people.
Sexual objectification is simply a subset of this generic trend.
Another useful resource is the existentialists, such as Sartre on "the Other".
The intelligent asshole will perhaps present a bogus physical theory to gain status - but the arguments will be about a commonly understood, verifiable thing outside himself. A social theory, however, will not be about a thing; it will essentially be about himself - something only he really knows and we can only guess at.
Running good epistemology on human and social concerns is highly desirable but incredibly hard, because we cannot separate the observer from the observed.
Interestingly, Rothbard and Austrian economics have something interesting to say here about the limitations of empiricism regarding people's behavior. You need repeatable experiments. But if you repeat an experiment with different people, that is not really valid, because people are far, far too diverse - remember, autonomy. It is simply wrong in principle to treat beings with intellect and will as fungible. If I repeat a behavioral experiment with two different groups of people and find that something like 62% and 65% do X, then of course that means something, but it is not, strictly speaking, a repetition of the experiment. If you repeat it with the same people, you find they have learned from the previous experiment, rendering the repetition less valid, because it is not really repeated the same way. So basically we cannot, without brainwashing, repeat experiments in human behavior. Nevertheless, at the end of the day we still run experiments on human behavior, because what else can one do? We work with what we have. But for these reasons, our confidence in the results should always be far lower. The strict repetition criterion is never satisfied.
Just a hunch but I suspect Megan McArdle would not be doing speed dating.