Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way
I would have it another way, and I don't find it great. Why should baking be an individual effort? Teamwork is better. It should be seen as: "here, if you like it, help me bake it." That is why it is Discussion, not Main. I think a good way to use this site's setup would be to throw half-baked things into Discussion, cooperate on baking them if they sound interesting, and then promote them to Main when done. Really, why don't we do this?
All the great articles from the past, LW 2007-2010, look a lot like individual efforts. Why should it be so?
Is this a bit Silicon Valley Culture? Because those guys do the same - they have a software idea and work on it individually or with 1-2 co-founders. Why? Why not start an open source project and invite contributors from Step 1? Why not throw half-made ideas out in the wild and encourage others to work on them to finish them? Assuming you are not after the money but after a solution you yourself would use, of course - "scratch your own itch" is a good idea in open source.
This kind of individual-effort culture sounds a lot like a culture where insights are abundant but work on them is scarce, so people don't place much value on insights from others as long as they are not properly worked out. Well, I should say I am used to pretty much the opposite: most folks I know just do routine work, with hardly any reflection at all...
I disagree with the premise that LW tears half-baked ideas to shreds. My experience (which, admittedly, is limited to open threads) is that you'll be fine if you're clear that what you're presenting is a work in progress, and you don't overreach with your ideas.
By overreach, I mean something like this:
This is an attempt to solve happiness. Several factors, such as health, genetics, and social environment, affect happiness. So happiness = health × genetics × social environment.
You can see what's wrong with the post above. It's usually not this blatant, but I see this sort of thing too often, and they are invariably ripped to shreds. On the other hand, something like this:
This is an attempt to solve happiness. First, I'd like to identify the factors that affect happiness. I can think of health, genetics, and social environment. Can we break this down further? Am I missing any important factors?
Probably won't be ripped to shreds. It has its fair share of problems, so I wouldn't expect an enthusiastic response from the community, but it won't be piled on either.
Frankly speaking, the first type of post reeks of cargo cult science (big equations, formal style (often badly executed), and references that may or may not help the reader). I'm not too unhappy to see those posts being ripped to shreds.
The aspect of taking ideas seriously that you are talking about seems orthogonal to forming beliefs. It's about initiative in investigating ideas and considering their general applicability, as opposed to stopping at a few superficial observations or failing to notice their relevance in unusual contexts. You don't need to believe an idea to investigate it in detail, the belief may come eventually or not at all. Considering an idea in many contexts may also blur the line with believing it. (Another aspect is taking action based on a belief.)
The process of investigating ideas in detail might get triggered by believing them for no good reason, but there is no need.
A way to deal with this is to learn to notice the situation where you are likely being gullible. To quote the classic poker proverb,
If after ten minutes at the poker table you do not know who the patsy is—you are the patsy.
Maybe you feel out of your depth, maybe a great guru gives a very convincing sermon or writes something controversial but convincing on their blog/forum and wants you to follow/donate/carry the word. Or maybe a car salesperson suggests this great deal only available today. What are the symptoms of being out of your depth and likely to be taken advantage of?
I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth.
I always feel so.
I see a lot of rational sounding arguments from red-pillers, manosphericals, conservatives, reactionaries, libertarians, the ilk. And then I see the counter-arguments from liberals, feminists, leftists and the ilk that pretty much boil down to the other side just being uncompassionate assholes and desperately rationalizing it with arguments. Well, rationalizing is a ...
I'm not sure if I'm understanding you correctly, but the reason why climate forecasts and meteorological forecasts have different temporal ranges of validity is not that the climate models are coarser; it's that they're asking different questions.
Climate is (roughly speaking) the attractor on which the weather chaotically meanders on short (e.g. weekly) timescales. On much longer timescales (1-100+ years) this attractor itself shifts. Weather forecasts want to determine the future state of the system itself as it evolves, which is impossible in principle after ~14 days because the system is chaotic. Climate forecasts want to track the slow shifts of the attractor. To do this, they run ensembles with slightly different initial conditions and observe the statistics of the ensemble at some future date, which are taken (via an ergodic assumption) to reflect the attractor at that date. None of the ensemble members is useful as a "weather prediction" for 2050 or whatever, but their overall statistics are (it is argued) reliable predictions about the attractor on which the weather will be constrained to move in 2050 (i.e. "the climate in 2050").
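To make the weather/climate distinction concrete, here is a toy sketch of my own (not a real climate model), using the chaotic logistic map as a stand-in for the atmosphere: two trajectories with nearly identical initial conditions diverge completely (individual "weather" forecasts fail), while the statistics of a large ensemble settle onto a stable value (a reproducible "climate" statistic).

```python
# Toy illustration: the logistic map at r = 4 is chaotic, so individual
# trajectories ("weather") become unpredictable, while ensemble
# statistics ("climate") remain stable.
import random
import statistics

def step(x):
    return 4.0 * x * (1.0 - x)  # one iteration of the chaotic logistic map

# 1) "Weather": two trajectories starting 1e-10 apart diverge completely.
x, y = 0.3, 0.3 + 1e-10
max_sep = 0.0
for n in range(80):
    x, y = step(x), step(y)
    if n >= 50:  # after the exponential divergence has saturated
        max_sep = max(max_sep, abs(x - y))
print(f"max separation after transient: {max_sep:.3f}")  # O(1), not O(1e-10)

# 2) "Climate": an ensemble of initial conditions settles onto the map's
# invariant distribution; its mean (~0.5 here) is stable and predictable.
random.seed(0)
ensemble = [random.random() for _ in range(10_000)]
for _ in range(100):
    ensemble = [step(x) for x in ensemble]
ensemble_mean = statistics.mean(ensemble)
print(f"ensemble mean: {ensemble_mean:.3f}")  # close to 0.5
```

No single ensemble member predicts the trajectory, but the ensemble mean is a robust statistic, which is the analogue of the ergodic assumption described above.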
It's analogous to th...
Michael Smith touched on this in his keynote talk at LWCW last weekend. Don't believe something just because you've heard a good argument for it, he said (I think, reconstructing from memory, and possibly extrapolating as well). If you do that, you'll just change your mind as soon as you encounter a really good argument for the opposite (the process Yvain described). You don't really know something until you've reached the state where the knowledge would grow back if it was deleted from your mind.
...Post something half-baked on LW and you will be torn to
Another good post of Scott's comes to mind, where he writes:
...Suppose there are two sides to an issue. Be more or less selfish. Post more or less offensive atheist memes. Be more or less willing to blame and criticize yourself.
There are some people who need to hear both sides of the issue. Some people really need to hear the advice “It’s okay to be selfish sometimes!” Other people really need to hear the advice “You are being way too selfish and it’s not okay.”
It’s really hard to target advice at exactly the people who need it. You can’t go around giving e
This is a great and very precise elaboration of lightness and how to deal with evidence, and I think the idea of starting out gullible is very practical advice - esp. for aspiring rationalists. I do think that it needs some kind of disclaimer, some limitation or sanity check, to avoid falling into an affective death spiral around some convincing-looking but actually self-sealing mind trap.
I can very much relate to your exposition. Apparently I'm also kind of advanced-gullible. I'm told that I accept new ideas and concepts (new to me) too easily. I'm by now ...
You might like this: http://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1004&context=eng_faculty_pubs
Someone else posted it to this site originally, I have no recollection who, but we are all indebted to them.
Most people are binary about beliefs: either they believe X is true or they believe X is false. Among LW people, by contrast, you find people saying: "I think X is likely, but I don't think it's certain."
If your goal is to get to the right shade of gray, then you need to change your beliefs a lot.
It's likely easier to convince me that P(X)~0.10 instead of P(X)~0.001, while at the same time it's harder to convince me to go from P(X)~0.90 to P(X)~0.999.
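As an aside (my own illustration, not part of the parent comment): these shifts can be compared precisely in odds form. The Bayes factor needed to move a belief from one probability to another is the ratio of the odds, and it turns out both shifts above require exactly the same strength of evidence, so any felt asymmetry between them is psychological rather than probabilistic.

```python
# Compare belief shifts via the Bayes factor (ratio of posterior to prior odds).
def odds(p):
    return p / (1.0 - p)

def bayes_factor(p_from, p_to):
    """Likelihood ratio of evidence needed to move a belief from p_from to p_to."""
    return odds(p_to) / odds(p_from)

bf_up = bayes_factor(0.001, 0.10)    # moving from P(X)~0.001 to P(X)~0.10
bf_high = bayes_factor(0.90, 0.999)  # moving from P(X)~0.90 to P(X)~0.999
print(bf_up, bf_high)  # both are a Bayes factor of 111
```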
Yeah, being open to ideas is sometimes the only way to go. It's okay to take up contrary ideas if you trust in some process of resolution that will happen. At first you wouldn't trust this to happen, and you may want to force the resolution. But even just working on math problems, this can be the wrong route. Sometimes you need to give it time and have patience. I'll take the math problem analogy a little further. Just like there can be different takes on issues, you could see different methods to approach a problem. They might both seem promising, but neithe...
I think there's always been something misleading about the connection between knowledge and belief. In the sense that you're updating a model of the world, yes, "belief" is an ok way of describing what you're updating. But in the sense of "belief" as trust, that's misleading. Whether one trusts one's model or not is irrelevant to its truth or falsity, so any sort of investment one way or another is a side-issue.
IOW, knowledge is not a modification of a psychological state, it's the actual, objective status of an "aperiodic cry...
The way I've been framing this in my head is that there is a tendency towards having either universally too strong priors or universally too weak priors. It seems almost like strength with which to believe things is a personality trait.
I liked your description of certain unconventional schools of thought as "tough-minded" and "creative." Tough-minded, creative thought processes will often involve concepts and metaphors that make people uncomfortable, including the people who think them up.
Sometimes, understanding the behavior of large groups of people involves concepts or metaphors that would be unhealthy to apply at the individual level. For instance, you can learn a lot about human behavior by thinking about game theory and the Prisoner's Dilemma. This does not mean that you need to think about other people as "prisoners," or think about your interactions with them as a "game" or as a "dilemma."
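To make the game-theory point concrete, here is a minimal sketch of the standard Prisoner's Dilemma payoffs (my own illustration; the values 5/3/1/0 are the conventional textbook ones): defection dominates for each player individually, even though mutual cooperation is better for both, which is exactly the kind of group-level insight that needn't be applied to individuals as a metaphor.

```python
# Standard Prisoner's Dilemma payoffs, satisfying T > R > P > S.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # reward for mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # punishment for mutual defection (P)
}

def best_response(their_move):
    """The move that maximizes my payoff given the other player's move."""
    return max(("C", "D"), key=lambda my: PAYOFFS[(my, their_move)])

# Defection is the best response no matter what the other player does...
print(best_response("C"), best_response("D"))  # D D
# ...yet mutual defection (1, 1) leaves both worse off than cooperation (3, 3).
```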
I think you probably do have a lot of differences in values from people who are “red-pillers, manosphericals, conservatives, reactionaries, libertarians,” but I think this case is really just about inferential distance on the object level. Although “sexual access” has potentially problematic connotations, it actually accurately describes situations where some people’s dating challenges are so great that they are effectively excluded. I apologize for the length of this post, but I want to drop down to the object level for a while to give you sufficient evidence to chew on:
Demographics: sex ratio and operational sex ratio have a gigantic influence on society. Exhibit A: China has a surplus of men. Exhibit B: The shortage of black men due to imprisonment turns dating upside-down in the black community and causes black women to compete fiercely for black men. Exhibit C: In virtually all US cities (not just the West Coast), there are more single men than women below age 35 (scroll down for the age breakdown or use the sliders). Young men face a level of competition that young women do not.
If something like 120 men are competing for 100 women, and the system is monogamous, then 20 of those men are going to be excluded from marriage. Yes, in some sense, all 120 have an "opportunity," but we know that under monogamy, 20 of them will be left out in the cold. And under a poly system, the results will be even worse, because humans are more polygynous than polyandrous. When low-status men are guaranteed to lose out in dating and marriage due to an unfavorable sex ratio, that starts looking like a lack of "access."
Let's talk about polygyny a bit more. A recent article defended gay marriage from the charge of opening up the door to polygamy:
Here's the problem with it: when a high-status man takes two wives (and one man taking many wives, or polygyny, is almost invariably the real-world pattern), a lower-status man gets no wife. If the high-status man takes three wives, two lower-status men get no wives. And so on.
This competitive, zero-sum dynamic sets off a competition among high-status men to hoard marriage opportunities, which leaves lower-status men out in the cold. Those men, denied access to life's most stabilizing and civilizing institution, are unfairly disadvantaged and often turn to behaviors like crime and violence. The situation is not good for women, either, because it places them in competition with other wives and can reduce them all to satellites of the man.
I'm not just making this up. There's an extensive literature on polygamy.
And there's that word again: "access." The notion of men being shut out of dating under polygynous mating appears in an entirely mainstream and liberal source. There are also concepts like “high-status” and “low-status” males, which feminists would often object to in other contexts.
Cultural forces: the quality of information about dating for introverted men is so poor that it is actively damaging and has the effect of excluding them from dating. There is also a decline in socialization and institutions around dating. For evidence, it is sufficient to look at the existence of the PUA community. Look at hookup culture on college campuses. In a healthy society, with healthy socialization and a monogamous mating system, we wouldn't even be having this conversation because many of the same men in the manosphere or PUA community would be too busy hanging out with their girlfriends or wives to be complaining on the internet.
Legal and economic forces: In some Asian countries, women’s minimum expectations for husbands involve buying a house with multiple bedrooms, and only some men can economically afford that; the rest lack access to marriage because they lack the economic prerequisites. In many Western countries, if men get divorced, they can face such punishing child support and alimony burdens that they must move to a small apartment (or even end up in debtors’ prison if they can’t pay). These men face steep challenges in attracting future girlfriends and wives due to their economic dispossession.
As I’ve shown at the object level, there are large cultural, demographic, economic, and legal forces that influence how challenging dating is and how people behave. These problems are much larger than asshole men blaming women for not putting out. Lack of “sexual access” is an entirely reasonable way to describe what happens to men under a skewed operational sex ratio or polygyny, though I would be totally fine with trying other terms instead. I realize the term isn’t perfect, and that some people who use it might have objectionable beliefs, but if we give in to crimestop and guilt-by-association, then we will know a lot less about the world.
On one side, I see people who are high-status, intellectual, and look really nice and empathic and compassionate. Of course my instincts like that. On the other side, I see people who look brave, tough, critical-minded and creative, plus they seem to be far more historically literate, so basically NRx and libertarians and similar folks give me that kind of "inventor" vibe, which incidentally is also something my instincts like.
So, basically, there are two groups of people with grievances. The ingroup is very good at impression management and public relations. The outgroup is bad at impression management, but your gut is telling you that they might be on to something. Yet you are suspicious of some of the outgroup’s arguments, because the ingroup says that the outgroup is just a bunch of “smart assholes,” and because the outgroup’s claims have problematic connotations in the ingroup’s moral framework.
I don’t think your reaction is unreasonable given your vantage point and level of inferential distance from the outgroup. But note that there is a strong incentive for the ingroup to set an incredibly high bar for the moral acceptability of the outgroup’s grievances, so it’s necessary to apply a healthy degree of skepticism to the ingroup’s moral arguments unless you have confirmed them independently.
In some cases, we will have to go to the object-level to discover which group is the “smart assholes” who are confabulating. Of course both groups will try to tar the others’ motives and reputations, but the seeming victor of that conflict will be the group with the best public relations skills, not necessarily the group with the more accurate views.
If your gut is telling you that there is potential truth in the outgroup’s arguments, then don’t let the ingroup’s moral framework shut down your investigation, especially when that investigation has implications for whether the ingroup’s moral framework is any good in the first place. Otherwise, you risk getting stuck in a closed loop of belief. I think the same argument applies to one’s own moral framework as well.
For instance, you can learn a lot about human behavior by thinking about game theory and the Prisoner's Dilemma. This does not mean that you need to think about other people as "prisoners," or think about your interactions with them as a "game" or as a "dilemma."
The issue is that the Prisoner's Dilemma doesn't seem to predict human behavior in modern society well, partially because it is the kind of tough situation that is uncommon now - this is a bit similar to SSC's thrive-vs-survive spectrum. All this tough-minded rig...
I was recently re-reading a piece by Yvain/Scott Alexander called Epistemic Learned Helplessness. It's a very insightful post, as is typical for Scott, and I recommend giving it a read if you haven't already. In it he writes:
He goes on to conclude that the skill of taking ideas seriously - often considered one of the most important traits a rationalist can have - is a dangerous one. After all, it's very easy for arguments to sound convincing even when they're not, and if you're too easily swayed by argument you can end up with some very absurd beliefs (like that Venus is a comet, say).
This post really resonated with me. I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth. And my reaction in those situations was similar to his: eventually, after going through the endless chain of rebuttals and counter-rebuttals, changing my mind at each turn, I was forced to throw up my hands and admit that I probably wasn't going to be able to determine the truth of the matter - at least, not without spending a lot more time investigating the different claims than I was willing to. And so in many cases I ended up adopting a sort of semi-principled stance of agnosticism: unless it was a really really important question (in which case I was sort of obligated to do the hard work of investigating the matter to actually figure out the truth), I would just say I don't know when asked for my opinion.
[Non-exhaustive list of areas in which I am currently epistemically helpless: geopolitics (in particular the Israel/Palestine situation), anthropics, nutrition science, population ethics]
All of which is to say: I think Scott is basically right here, in many cases we shouldn't have too strong of an opinion on complicated matters. But when I re-read the piece recently I was struck by the fact that his whole argument could be summed up much more succinctly (albeit much more pithily) as:
"Don't be gullible."
Huh. Sounds a lot more obvious that way.
Now, don't get me wrong: this is still good advice. I think people should endeavour to not be gullible if at all possible. But it makes you wonder: why did Scott feel the need to write a post denouncing gullibility? After all, most people kind of already think being gullible is bad - who exactly is he arguing against here?
Well, recall that he wrote the post in response to the notion that people should believe arguments and take ideas seriously. These sound like good, LW-approved ideas, but note that unless you're already exceptionally smart or exceptionally well-informed, believing arguments and taking ideas seriously is tantamount to...well, to being gullible. In fact, you could probably think of gullibility as a kind of extreme and pathological form of lightness; a willingness to be swept away by the winds of evidence, no matter how strong (or weak) they may be.
There seems to be some tension here. On the one hand we have an intuitive belief that gullibility is bad; that the proper response to any new claim should be skepticism. But on the other hand we also have some epistemic norms here at LW that are - well, maybe they don't endorse being gullible, but they don't exactly not endorse it either. I'd say the LW memeplex is at least mildly friendly towards the notion that one should believe conclusions that come from convincing-sounding arguments, even if they seem absurd. A core tenet of LW is that we change our mind too little, not too much, and we're certainly all in favour of lightness as a virtue.
Anyway, I thought about this tension for a while and came to the conclusion that I had probably just lost sight of my purpose. The goal of (epistemic) rationality isn't to not be gullible or not be skeptical - the goal is to form correct beliefs, full stop. Terms like gullibility and skepticism are useful to the extent that people tend to be systematically overly accepting or dismissive of new arguments - individual beliefs themselves are simply either right or wrong. So, for example, if we do studies and find out that people tend to accept new ideas too easily on average, then we can write posts explaining why we should all be less gullible, and give tips on how to accomplish this. And if on the other hand it turns out that people actually accept far too few new ideas on average, then we can start talking about how we're all much too skeptical and how we can combat that. But in the end, in terms of becoming less wrong, there's no sense in which gullibility would be intrinsically better or worse than skepticism - they're both just words we use to describe deviations from the ideal, which is accepting only true ideas and rejecting only false ones.
This answer basically wrapped the matter up to my satisfaction, and resolved the sense of tension I was feeling. But afterwards I was left with an additional interesting thought: might gullibility be, if not a desirable end point, then an easier starting point on the path to rationality?
That is: no one should aspire to be gullible, obviously. That would be aspiring towards imperfection. But if you were setting out on a journey to become more rational, and you were forced to choose between starting off too gullible or too skeptical, could gullibility be an easier initial condition?
I think it might be. It strikes me that if you start off too gullible you begin with an important skill: you already know how to change your mind. In fact, changing your mind is in some ways your default setting if you're gullible. And considering that like half the freakin sequences were devoted to learning how to actually change your mind, starting off with some practice in that department could be a very good thing.
I consider myself to be...well, maybe not more gullible than average in absolute terms - I don't get sucked into pyramid scams or send money to Nigerian princes or anything like that. But I'm probably more gullible than average for my intelligence level. There's an old discussion post I wrote a few years back that serves as a perfect demonstration of this (I won't link to it out of embarrassment, but I'm sure you could find it if you looked). And again, this isn't a good thing - to the extent that I'm overly gullible, I aspire to become less gullible (Tsuyoku Naritai!). I'm not trying to excuse any of my past behaviour. But when I look back on my still-ongoing journey towards rationality, I can see that my ability to abandon old ideas at the (relative) drop of a hat has been tremendously useful so far, and I do attribute that ability in part to years of practice at...well, at believing things that people told me, and sometimes gullibly believing things that people told me. Call it epistemic deferentiality, or something - the tacit belief that other people know better than you (especially if they're speaking confidently) and that you should listen to them. It's certainly not a character trait you're going to want to keep as a rationalist, and I'm still trying to do what I can to get rid of it - but as a starting point? You could do worse I think.
Now, I don't pretend that the above is anything more than a plausibility argument, and maybe not a strong one at that. For one, I'm not sure how well this idea carves reality at its joints - after all, gullibility isn't quite the same thing as lightness, even if they're closely related. For another, if the above were true, you would probably expect LWers to be more gullible than average. But that doesn't seem quite right - while LW is admirably willing to engage with new ideas, no matter how absurd they might seem, the default attitude towards a new idea on this site is still one of intense skepticism. Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way - but it doesn't really sound like the behaviour of a website full of gullible people.
(Of course, on the other hand, it could be that LWers really are more gullible than average, but they're just smart enough to compensate for it.)
Anyway, I'm not sure what to make of this idea, but it seemed interesting and worth a discussion post at least. I'm curious to hear what people think: does any of the above ring true to you? How helpful do you think gullibility is, if it is at all? Can you be "light" without being gullible? And for the sake of collecting information: do you consider yourself to be more or less gullible than average for someone of your intelligence level?