Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way
I would have it another way, and I don't find it great. Why should baking be an individual effort? Teamwork is better. It should be seen as "here, if you like it, help me bake it". That is why it is Discussion, not Main. I think a good way to use this site's setup would be to throw half-baked things into Discussion, cooperate on baking them if they sound interesting, and then, when done, promote them to Main. Really, why don't we do this?
All the great articles from the past, LW 2007-2010, look a lot like individual efforts. Why should it be so?
Is this a bit of Silicon Valley culture? Because those guys do the same - they have a software idea and work on it individually or with one or two co-founders. Why? Why not start an open-source project and invite contributors from step one? Why not throw half-made ideas out into the wild and encourage others to work on them and finish them? Assuming you are not after the money but after a solution you yourself would use, of course - "scratch your own itch" is a good idea in open source.
This kind of individual-effort culture sounds a lot like a culture where insights are abundant but work on them is scarce, so people don't place much value on insights from others as long as they are not properly worked out. I should say I am used to pretty much the opposite: most folks I know just do routine work, with hardly any reflection at all...
I disagree with the premise that LW tears half-baked ideas to shreds. My experience (which, admittedly, is limited to open threads) is that you'll be fine if you're clear that what you're presenting is a work in progress, and you don't overreach with your ideas.
By overreach, I mean something like this:
This is an attempt to solve happiness. Several factors, such as health, genetics, and social environment, affect happiness. So happiness = health × genetics × social environment.
You can see what's wrong with the post above. It's usually not this blatant, but I see this sort of thing too often, and such posts are invariably ripped to shreds. On the other hand, something like this:
This is an attempt to solve happiness. First, I'd like to identify the factors that affect happiness. I can think of health, genetics, and social environment. Can we break this down further? Am I missing any important factors?
Probably won't be ripped to shreds. It has its fair share of problems, so I wouldn't expect an enthusiastic response from the community, but it won't be piled upon either.
Frankly speaking, the first type of post reeks of cargo cult science (big equations, a formal style that is often badly executed, and references that may or may not help the reader). I'm not too unhappy to see those posts being ripped to shreds.
The aspect of taking ideas seriously that you are talking about seems orthogonal to forming beliefs. It's about initiative in investigating ideas and considering their general applicability, as opposed to stopping at a few superficial observations or failing to notice their relevance in unusual contexts. You don't need to believe an idea to investigate it in detail; the belief may come eventually or not at all. Considering an idea in many contexts may also blur the line with believing it. (Another aspect is taking action based on a belief.)
The process of investigating ideas in detail might get triggered by believing them for no good reason, but there is no need for that.
A way to deal with this is to learn to notice the situations where you are likely being gullible. To quote the classic poker proverb:
If after ten minutes at the poker table you do not know who the patsy is—you are the patsy.
Maybe you feel out of your depth, maybe a great guru gives a very convincing sermon or writes something controversial but convincing on their blog/forum and wants you to follow/donate/carry the word. Or maybe a car salesperson suggests this great deal only available today. What are the symptoms of being out of your depth and likely to be taken advantage of?
I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth.
I always feel that way.
I see a lot of rational-sounding arguments from red-pillers, manosphericals, conservatives, reactionaries, libertarians, and their ilk. And then I see the counter-arguments from liberals, feminists, leftists, and their ilk, which pretty much boil down to the other side just being uncompassionate assholes who are desperately rationalizing it with arguments. Well, rationalizing is a ...
I'm not sure if I'm understanding you correctly, but the reason why climate forecasts and meteorological forecasts have different temporal ranges of validity is not that the climate models are coarser; it's that they're asking different questions.
Climate is (roughly speaking) the attractor on which the weather chaotically meanders on short (e.g. weekly) timescales. On much longer timescales (1-100+ years) this attractor itself shifts. Weather forecasts want to determine the future state of the system itself as it evolves chaotically, which is impossible in principle after ~14 days because the system is chaotic. Climate forecasts want to track the slow shifts of the attractor. To do this, they run ensembles with slightly different initial conditions and observe the statistics of the ensemble at some future date, which is taken (via an ergodic assumption) to reflect the attractor at that date. None of the ensemble members is useful as a "weather prediction" for 2050 or whatever, but their overall statistics are (it is argued) reliable predictions about the attractor on which the weather will be constrained to move in 2050 (i.e. "the climate in 2050").
It's analogous to th...
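As a toy illustration of the ensemble point (a minimal sketch using the Lorenz system as a hypothetical stand-in for a chaotic atmosphere, not anything resembling a real climate model): individual trajectories started from nearly identical initial conditions decorrelate quickly, which is the analogue of losing weather predictability, while the statistics of the whole ensemble still characterize the attractor, which is the analogue of a climate statement.

```python
# Toy illustration: "weather" vs "climate" in the Lorenz system.
# Individual trajectories diverge; ensemble statistics describe the attractor.
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations by one crude forward-Euler step."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

rng = np.random.default_rng(0)
n_members, n_steps = 200, 10000

# Ensemble: one initial condition plus tiny perturbations
# (an imperfectly known "weather today").
ensemble = np.array([1.0, 1.0, 20.0]) + 1e-6 * rng.standard_normal((n_members, 3))

for _ in range(n_steps):
    ensemble = np.array([lorenz_step(member) for member in ensemble])

# After a long integration the members have completely decorrelated: none of
# them is a usable point forecast ("the weather in 2050"), but the ensemble's
# statistics characterize the attractor ("the climate in 2050").
print("spread of x across members:", ensemble[:, 0].std())
print("ensemble mean of x:", ensemble[:, 0].mean())
```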
Michael Smith touched on this in his keynote talk at LWCW last weekend. Don't believe something just because you've heard a good argument for it, he said (I think, reconstructing from memory, and possibly extrapolating as well). If you do that, you'll just change your mind as soon as you encounter a really good argument for the opposite (the process Yvain described). You don't really know something until you've reached the state where the knowledge would grow back if it was deleted from your mind.
...Post something half-baked on LW and you will be torn to
Another good post of Scott's comes to mind, where he writes:
...Suppose there are two sides to an issue. Be more or less selfish. Post more or less offensive atheist memes. Be more or less willing to blame and criticize yourself.
There are some people who need to hear both sides of the issue. Some people really need to hear the advice “It’s okay to be selfish sometimes!” Other people really need to hear the advice “You are being way too selfish and it’s not okay.”
It’s really hard to target advice at exactly the people who need it. You can’t go around giving e
This is a great and very precise elaboration of lightness and of how to deal with evidence, and I think the idea of starting out gullible is very practical advice - especially for aspiring rationalists. I do think it needs some kind of disclaimer, some limitation or sanity check, to avoid falling into an affective death spiral around some convincing-looking but actually self-sealing mind trap.
I can very much relate to your exposition. Apparently I'm also kind of advanced-gullible: I'm told that I accept new ideas and concepts (new to me) too easily. I'm by now ...
You might like this: http://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1004&context=eng_faculty_pubs
Someone else posted it to this site originally - I have no recollection of who, but we are all indebted to them.
Most people are binary about beliefs: either they believe X is true or they believe X is false. When talking with LW people, you find people saying: "I think X is likely, but I don't think it's certain."
If your goal is to get to the right shade of gray, then you need to change your beliefs a lot.
It's likely easier to convince me that P(X)~0.10 instead of P(X)~0.001, while at the same time it's harder to convince me to go from P(X)~0.90 to P(X)~0.999.
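In log-odds terms, though, both shifts call for about the same amount of evidence. A minimal sketch of the arithmetic (assuming the standard log-odds form of Bayes' rule, where the evidence needed for an update equals the change in log-odds):

```python
# Minimal sketch: evidence needed for a Bayesian update, measured as the
# change in log-odds (in bits). Assumes the log-odds form of Bayes' rule.
import math

def log_odds_bits(p):
    """Log-odds of probability p, in bits."""
    return math.log2(p / (1.0 - p))

def bits_of_evidence(p_from, p_to):
    """Bits of evidence required to move a belief from p_from to p_to."""
    return log_odds_bits(p_to) - log_odds_bits(p_from)

print(bits_of_evidence(0.001, 0.10))  # ~6.8 bits
print(bits_of_evidence(0.90, 0.999))  # also ~6.8 bits
```

So by this arithmetic the asymmetry is in the believer, not in the probabilities.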
Yeah, being open to ideas is sometimes the only way to go. It's okay to take up contrary ideas if you trust in some process of resolution that will happen. At first you wouldn't trust this to happen, and you may want to force the resolution. But even just working on math problems this can be the wrong route. Sometimes you need to give it time and have patience. I'll take the math problem analogy a little further. Just like there can be different takes on issues, you could see different methods to approach a problem. They might both seem promising, but neithe...
I think there's always been something misleading about the connection between knowledge and belief. In the sense that you're updating a model of the world, yes, "belief" is an ok way of describing what you're updating. But in the sense of "belief" as trust, that's misleading. Whether one trusts one's model or not is irrelevant to its truth or falsity, so any sort of investment one way or another is a side-issue.
IOW, knowledge is not a modification of a psychological state; it's the actual, objective status of an "aperiodic cry...
The way I've been framing this in my head is that there is a tendency towards having either universally too strong priors or universally too weak priors. It seems almost as if the strength with which one believes things is a personality trait.
I was recently re-reading a piece by Yvain/Scott Alexander called Epistemic Learned Helplessness. It's a very insightful post, as is typical for Scott, and I recommend giving it a read if you haven't already. In it he writes:
He goes on to conclude that the skill of taking ideas seriously - often considered one of the most important traits a rationalist can have - is a dangerous one. After all, it's very easy for arguments to sound convincing even when they're not, and if you're too easily swayed by argument you can end up with some very absurd beliefs (like that Venus is a comet, say).
This post really resonated with me. I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth. And my reaction in those situations was similar to his: eventually, after going through the endless chain of rebuttals and counter-rebuttals, changing my mind at each turn, I was forced to throw up my hands and admit that I probably wasn't going to be able to determine the truth of the matter - at least, not without spending a lot more time investigating the different claims than I was willing to. And so in many cases I ended up adopting a sort of semi-principled stance of agnosticism: unless it was a really, really important question (in which case I was sort of obligated to do the hard work of investigating the matter to actually figure out the truth), I would just say "I don't know" when asked for my opinion.
[Non-exhaustive list of areas in which I am currently epistemically helpless: geopolitics (in particular the Israel/Palestine situation), anthropics, nutrition science, population ethics]
All of which is to say: I think Scott is basically right here, in many cases we shouldn't have too strong of an opinion on complicated matters. But when I re-read the piece recently I was struck by the fact that his whole argument could be summed up much more succinctly (albeit much more pithily) as:
"Don't be gullible."
Huh. Sounds a lot more obvious that way.
Now, don't get me wrong: this is still good advice. I think people should endeavour to not be gullible if at all possible. But it makes you wonder: why did Scott feel the need to write a post denouncing gullibility? After all, most people kind of already think being gullible is bad - who exactly is he arguing against here?
Well, recall that he wrote the post in response to the notion that people should believe arguments and take ideas seriously. These sound like good, LW-approved ideas, but note that unless you're already exceptionally smart or exceptionally well-informed, believing arguments and taking ideas seriously is tantamount to...well, to being gullible. In fact, you could probably think of gullibility as a kind of extreme and pathological form of lightness; a willingness to be swept away by the winds of evidence, no matter how strong (or weak) they may be.
There seems to be some tension here. On the one hand we have an intuitive belief that gullibility is bad; that the proper response to any new claim should be skepticism. But on the other hand we also have some epistemic norms here at LW that are - well, maybe they don't endorse being gullible, but they don't exactly not endorse it either. I'd say the LW memeplex is at least mildly friendly towards the notion that one should believe conclusions that come from convincing-sounding arguments, even if they seem absurd. A core tenet of LW is that we change our mind too little, not too much, and we're certainly all in favour of lightness as a virtue.
Anyway, I thought about this tension for a while and came to the conclusion that I had probably just lost sight of my purpose. The goal of (epistemic) rationality isn't to not be gullible or not be skeptical - the goal is to form correct beliefs, full stop. Terms like gullibility and skepticism are useful to the extent that people tend to be systematically overly accepting or dismissive of new arguments - individual beliefs themselves are simply either right or wrong. So, for example, if we do studies and find out that people tend to accept new ideas too easily on average, then we can write posts explaining why we should all be less gullible, and give tips on how to accomplish this. And if on the other hand it turns out that people actually accept far too few new ideas on average, then we can start talking about how we're all much too skeptical and how we can combat that. But in the end, in terms of becoming less wrong, there's no sense in which gullibility would be intrinsically better or worse than skepticism - they're both just words we use to describe deviations from the ideal, which is accepting only true ideas and rejecting only false ones.
This answer basically wrapped the matter up to my satisfaction, and resolved the sense of tension I was feeling. But afterwards I was left with an additional interesting thought: might gullibility be, if not a desirable end point, then an easier starting point on the path to rationality?
That is: no one should aspire to be gullible, obviously. That would be aspiring towards imperfection. But if you were setting out on a journey to become more rational, and you were forced to choose between starting off too gullible or too skeptical, could gullibility be an easier initial condition?
I think it might be. It strikes me that if you start off too gullible you begin with an important skill: you already know how to change your mind. In fact, changing your mind is in some ways your default setting if you're gullible. And considering that like half the freakin sequences were devoted to learning how to actually change your mind, starting off with some practice in that department could be a very good thing.
I consider myself to be...well, maybe not more gullible than average in absolute terms - I don't get sucked into pyramid schemes or send money to Nigerian princes or anything like that. But I'm probably more gullible than average for my intelligence level. There's an old discussion post I wrote a few years back that serves as a perfect demonstration of this (I won't link to it out of embarrassment, but I'm sure you could find it if you looked). And again, this isn't a good thing - to the extent that I'm overly gullible, I aspire to become less gullible (Tsuyoku Naritai!). I'm not trying to excuse any of my past behaviour. But when I look back on my still-ongoing journey towards rationality, I can see that my ability to abandon old ideas at the (relative) drop of a hat has been tremendously useful so far, and I do attribute that ability in part to years of practice at...well, at believing things that people told me, and sometimes gullibly believing things that people told me. Call it epistemic deferentiality, or something - the tacit belief that other people know better than you (especially if they're speaking confidently) and that you should listen to them. It's certainly not a character trait you're going to want to keep as a rationalist, and I'm still trying to do what I can to get rid of it - but as a starting point? You could do worse, I think.
Now, I don't pretend that the above is anything more than a plausibility argument, and maybe not a strong one at that. For one, I'm not sure how well this idea carves reality at its joints - after all, gullibility isn't quite the same thing as lightness, even if they're closely related. For another, if the above were true, you would probably expect LWers to be more gullible than average. But that doesn't seem quite right - while LW is admirably willing to engage with new ideas, no matter how absurd they might seem, the default attitude towards a new idea on this site is still one of intense skepticism. Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way - but it doesn't really sound like the behaviour of a website full of gullible people.
(Of course, on the other hand it could be that LWers really are more gullible than average, but they're just smart enough to compensate for it.)
Anyway, I'm not sure what to make of this idea, but it seemed interesting and worth a discussion post at least. I'm curious to hear what people think: does any of the above ring true to you? How helpful do you think gullibility is, if it is at all? Can you be "light" without being gullible? And for the sake of collecting information: do you consider yourself to be more or less gullible than average for someone of your intelligence level?