
Goals for which Less Wrong does (and doesn't) help

56 points | Post author: AnnaSalamon | 18 November 2010 10:37PM

Related to: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality

We’ve had a lot of good criticism of Less Wrong lately (including Patri’s post above, which contains a number of useful points). But to prevent those posts from confusing newcomers, this may be a good time to review what Less Wrong is useful for.

In particular: I had a conversation last Sunday with a fellow, I’ll call him Jim, who was trying to choose a career that would let him “help shape the singularity (or simply the future of humanity) in a positive way”.  He was trying to sort out what was efficient, and he aimed to be careful to have goals and not roles.  

So far, excellent news, right?  A thoughtful, capable person is trying to sort out how, exactly, to have the best impact on humanity’s future.  Whatever your views on the existential risks landscape, it’s clear humanity could use more people like that.

The part that concerned me was that Jim had put a site-blocker on LW (as well as all of his blogs) after reading Patri’s post, which, he said, had “hit him like a load of bricks”.  Jim wanted to get his act together and really help the world, not diddle around reading shiny-fun blog comments.  But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be.  And they were the sort of errors LW could have helped with.  And there was no obvious force in his off-line, focused, productive life of a sort that could similarly help.

So, in case it’s useful to others, a review of what LW is useful for.

When you do (and don’t) need epistemic rationality

For some tasks, the world provides rich, inexpensive empirical feedback.  In these tasks you hardly need reasoning.  Just try the task many ways, steal from the best role-models you can find, and take care to notice what is and isn’t giving you results.

Thus, if you want to learn to sculpt, reading Less Wrong is a bad way to go about it.  Better to find some clay and a hands-on sculpting course.  The situation is similar for small talk, cooking, selling, programming, and many other useful skills.

Unfortunately, most of us also have goals for which we can obtain no such ready success/failure data. For example, if you want to know whether cryonics is a good buy, you can’t just try buying it and not-buying it and see which works better.  If you miss your first bet, you’re out for good.

There is similarly no easy way to use the “try it and see” method to sort out what ethics and meta-ethics to endorse, or what long-term human outcomes are likely, how you can have a positive impact on the distant poor, or which retirement investments *really will* be safe bets for the next forty years.  For these goals we are forced to use reasoning, as failure-prone as human reasoning is.  If the issue is tricky enough, we’re forced to additionally develop our skill at reasoning -- to develop “epistemic rationality”.

The traditional alternative is to deem subjects on which one cannot gather empirical data "unscientific" subjects on which respectable people should not speak, or else to focus one's discussion on the most similar-seeming subject for which it *is* easy to gather empirical data (and so to, for example, rate charities as "good" when they have a low percentage of overhead, instead of a high impact). Insofar as we are stuck caring about such goals and betting our actions on various routes for their achievement, this is not much help.[2]

How to develop epistemic rationality

If you want to develop epistemic rationality, it helps to spend time with the best epistemic rationalists you can find.  For many, although not all, this will mean Less Wrong.  Read the sequences.  Read the top current conversations.  Put your own thinking out there (in the discussion section, for starters) so that others can help you find mistakes in your thinking, and so that you can get used to holding your own thinking to high standards.  Find or build an in-person community of aspiring rationalists if you can.

Is it useful to try to read every single comment?  Probably not, on the margin; better to read textbooks or to do rationality exercises yourself.  But reading the Sequences helped many of us quite a bit; and epistemic rationality is the sort of thing for which sitting around reading (even reading things that are shiny-fun) can actually help.

 


[1]  To be specific: Jim was considering personally "raising awareness" about the virtues of the free market, in the hopes that this would (indirectly) boost economic growth in the third world, which would enable more people to be educated, which would enable more people to help aim for a positive human future and an eventual positive singularity.

There are several difficulties with this plan.  For one thing, it's complicated; in order to work, his awareness raising would need to indeed boost free market enthusiasm AND US citizens' free market enthusiasm would need to indeed increase the use of free markets in the third world AND this result would need to indeed boost welfare and education in those countries AND a world in which more people could think about humanity's future would need to indeed result in a better future. Conjunctions are unlikely, and this route didn't sound like the most direct path to Jim's stated goal.

For another thing, there are good general arguments suggesting that it is often better to donate than to work directly in a given field, and that, given the many orders of magnitude differences in efficacy between different sorts of philanthropy, it's worth doing considerable research into how best to give.  (Although to be fair, Jim's emailing me was such research, and he may well have appreciated that point.) 

The biggest reason it seemed Jim would benefit from LW was just manner; Jim seemed smart and well-meaning, but more verbally jumbled, and less good at factoring complex questions into distinct, analyzable pieces, than I would expect if he spent longer around LW.

[2] The traditional rationalist reply would be that if human reasoning is completely and permanently hopeless when divorced from the simple empirical tests of Popperian science, then avoiding such "unscientific" subjects is all we can do.

Comments (100)

Comment author: Vladimir_Golovin 19 November 2010 10:45:53AM *  17 points [-]

But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be.

Idea: a Rationalist Counsel, basically a group of high-profile rationalists who help people who have dedicated themselves to "shaping the future of humanity in a positive way" accurately assess their abilities, and who offer good strategies for leveraging those abilities to maximize their impact.

People would privately submit their resumes to the Counsel members, who would evaluate them and offer a personal strategy that, in their opinion, would maximize that impact.

Comment author: Kaj_Sotala 20 November 2010 10:22:35AM 8 points [-]

The proper name for this organization, of course, is the Bayesian Conspiracy.

Comment author: CarlShulman 19 November 2010 07:05:37PM 7 points [-]

Already exists in embryonic form: http://www.xrisknetwork.com/

Comment author: JamesAndrix 19 November 2010 01:33:34PM 0 points [-]

Upvoted because this shouldn't be at -1

Comment author: teageegeepea 19 November 2010 04:12:37AM 14 points [-]

It seems to me that most of the "raise awareness" campaigns are for things people are plenty aware of already.

Comment author: JGWeissman 19 November 2010 04:24:50AM 13 points [-]

Indeed. Maybe we should instead call them "signaling awareness" campaigns. I wonder how many people would still be interested in participating.

Comment author: John_Maxwell_IV 05 August 2012 12:16:00AM 2 points [-]

But you're only talking about those campaigns that you're already aware of! There may be lots of unknown "raise awareness" campaigns that will find the awareness of their issues boosted if they work hard enough.

Comment author: Craig_Heldreth 21 November 2010 03:19:06PM 9 points [-]

Consider the following goals:

  1. to be smarter (i.e. more rational);
  2. to appear smarter;
  3. to feel smarter.

And construct a simple Venn diagram. Carefully working through all of Donald Knuth's The Art of Computer Programming or Landau & Lifshitz's Course of Theoretical Physics (10 volumes) would be a task right smack dab in the center of such a Venn diagram, passing the acid test of items one through three on this list.

What generates the criticism of LessWrong as "shiny"?

I think many humans are susceptible to a trap of expending energy on #2 or on #3 while pretending they are working on #1. LessWrong could be seen as "shiny" if it is a particularly attractive trap of this nature. Hence, caution may be in order. The sequences are long and may be looked at as quality entertainment that is no substitute for Knuth or Landau & Lifshitz.

It is much higher quality entertainment than Grant Morrison comic books. If you were going to put that LessWrong time into Grant Morrison, then reading LessWrong is pure win. If it is distracting you from Knuth or Landau & Lifshitz, then perhaps that is where the "shiny" criticism comes from.

Comment author: John_Maxwell_IV 24 November 2010 08:28:33AM 4 points [-]

It doesn't seem obvious to me that reading about solved problems in computer programming and theoretical physics would develop one's rationality faster than reading a blog that purportedly develops rationality techniques.

In addition to the time costs of an activity there are also willpower costs. If my primary goal is to chill out in the evening after a day of school and software development, reading a difficult textbook may not help me achieve that goal. Skimming Less Wrong may help me achieve that goal and also gain me side benefits.

Comment author: Johnicholas 24 November 2010 03:44:11PM 1 point [-]

"Carefully working through" is very different from "reading".

Most of the time spent "carefully working through" is spent solving problems - both the ones inline in the text, and additional ones that you spontaneously think of when doing that kind of work.

Comment author: Nisan 24 November 2010 05:46:39PM 13 points [-]

Indeed. Just reading The Art of Computer Programming would be a pointless task. Incidentally, this wisdom is so obvious in academia that when people say "I read X" they really mean something like "I took notes and worked out most of the exercises". When they want to convey that they just read the text, they say "I looked at X". This language definitely misled me when I was an undergrad.

Comment author: Craig_Heldreth 24 November 2010 02:03:07PM 0 points [-]

I do not disagree with you at all. My point is not that the criticism of LessWrong as "shiny" is accurate, merely giving my take on the viewpoint this criticism comes from.

Comment author: billswift 19 November 2010 03:45:35AM 6 points [-]

Conjunctions are not inherently unlikely; they are less likely than their conjuncts considered separately, but could easily be much more likely than a different argument.

Comment author: shokwave 19 November 2010 03:10:45PM *  7 points [-]

Well, yes. But they are less likely than their conjuncts in a specific and mathematical way, and we have good evidence that people don't multiply their uncertainties the way they should - it appears that they simply take the average (!!!).

Charitably, I count eight conjunctions in the presented argument. If he had on average 80% confidence in each premise (raising awareness of the free market virtues will overcome status quo bias, increase in free market in first world will translate to increase in free market in the third world - these don't feel like four-in-five-timers), then his plan, as stated, has at most a 17% chance of success. But Jim feels like he has an 80% chance.

Your response is true in a trivial way, because 17% is far higher than the chance Zeus returns, and far higher again than Zeus and Jesus returning to give each other a cosmic high-five. But we can spot those very unlikely premises - and it's only the very unlikely premises that are less likely than a long list of conjunctions. We don't think like that - we don't see our true chances.

So, if you restrict the space of premises and arguments to what humans mostly deal with in their practical lives, "conjunctions are inherently unlikely" is an excellent rule of thumb until you can sit down and do the math.
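shokwave's arithmetic above is easy to check mechanically. A minimal sketch in Python (the eight 80% confidences are the comment's illustrative numbers, not measured data):

```python
# shokwave's example: a plan that requires eight premises to all hold,
# with 80% confidence in each premise.
confidences = [0.8] * 8

# Correct treatment: a conjunction's probability is the product of its
# conjuncts (assuming independence).
p_conjunction = 1.0
for c in confidences:
    p_conjunction *= c

# The bias being described: people behave as if the conjunction is about
# as likely as the average premise.
p_felt = sum(confidences) / len(confidences)

print(f"multiplied: {p_conjunction:.2f}")  # about 0.17
print(f"averaged:   {p_felt:.2f}")         # 0.80
```

The gap between 17% and 80% is the size of the error shokwave attributes to Jim's plan.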

Comment author: billswift 19 November 2010 06:06:38PM *  2 points [-]

What you write is true. But I have seen people go the other way - hear about some problem (such as the Conjunction Fallacy), then start over-compensating for it (for example, by always rating conjunctions as lower probability). Since the post as written wasn't entirely clear about the limits, I was just pointing out that automatically down-rating conjunctions is not always advisable.

I never had any problems remembering to multiply the probabilities once it was pointed out, partly because I had already had experience at calculating complicated reliability problems, which are structurally almost identical.

Comment author: shokwave 20 November 2010 06:18:56AM *  0 points [-]

I never had any problems remembering to multiply the probabilities once it was pointed out, partly because I had already had experience at calculating complicated reliability problems, which are structurally almost identical.

That is a good grounding for avoiding the conjunction fallacy! Even half a second spent deciding whether your argument is 'reliable' according to methods you have for estimating reliability might stop you from motivated cognition in the direction of "my argument is right". Makes me wonder what other real-life problems have a similar enough structure to common biases to help with instrumental rationality.

Comment author: JGWeissman 19 November 2010 02:05:19AM 6 points [-]

Programming does indeed have aspects in which you get lots of immediate feedback, so that you can be successful with minimal rationality. But in many cases, getting that feedback is itself a skill that requires abstract reasoning (What corner cases should I be testing?). Other aspects of programming in which feedback can be slow or expensive include: How maintainable is this code? Will other programmers understand how to use my library? How will this program perform on large problems? How could an unauthorized user abuse this program?

Comment author: XiXiDu 19 November 2010 11:40:13AM 4 points [-]

The traditional alternative is to deem subjects on which one cannot gather empirical data "unscientific" subjects on which respectable people should not speak...

There are some distinctions to be made here. Cryonics obviously provides a better chance to see the future after dying than rotting six feet under. Regarding retirement investment, just ask your parents or grandparents. Yet this argument against the necessity of empirical data breaks down at some point. Shaping the Singularity is not on par with having a positive impact on the distant poor. If you claim that predictions and falsifiability are unrelated concepts, that's fine. But believing some predictions - e.g. a technological Singularity spawned by AGI-seeds capable of superhuman recursive self-improvement - is not the same as believing other predictions - e.g. that a retirement plan will provide for old age.

"I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified." Eliezer Yudkowsky, So You Want To Be A Seed AI Programmer

How should I interpret the above quote? If someone has to be able to follow the advanced arguments on Less Wrong to understand that an advanced education is disadvantageous yet necessary to understand this in the first place, how does Less Wrong help in deciding what to do? This is just an example of what I experience regarding Less Wrong. I'm unable to follow much of Less Wrong yet I'm told that it can help me decide what to do.

The basic problem here is that the necessary education to follow Less Wrong will not only teach me to be wary of the arguments on Less Wrong but will also preclude me from acting on its suggestions. How so? The main consensus here seems to be Cryonics and the dangers of AGI research. If it isn't, then at least the top rationalist on Less Wrong isn't as rational as suggested, which undermines the whole intention of the original post. So for now I'll assume that those two conclusions are the most important you can arrive at by learning from Less Wrong. Consequently this means that someone like me should care to earn enough money to support friendly AI research and to buy a Cryonics contract. But this is directly opposed to what I would have to do to arrive at those conclusions and be reasonably sure about their correctness. Amongst other things I would have to study, which would not allow me to earn enough money for many years.

Comment author: AnnaSalamon 19 November 2010 12:51:44PM 8 points [-]

There are some distinctions to be made here. Cryonics obviously provides a better chance to see the future after dying than rotting six feet under.

Yes, but it is less obvious that the chance is large enough to be worth the money.

Regarding retirement investment, just ask your parents or grandparents.

My example was that the type of investments that can be relied upon in coming decades is not accessible that way. For example, many in the US trusted their savings in real estate, which had been trustworthy for generations. And then it wasn’t.

Yet this argument against the necessity of empirical data breaks down at some point. Shaping the Singularity is not on par with having a positive impact on the distant poor. If you claim that predictions and falsifiability are unrelated concepts, that's fine. But believing some predictions - e.g. a technological Singularity spawned by AGI-seeds capable of superhuman recursive self-improvement - is not the same as believing other predictions - e.g. that a retirement plan will provide for old age.

Yes, there is a difference of degree between the difficulty of figuring out whether mortgage-backed securities were as trustworthy as people thought (note that this was less obvious beforehand) and the difficulty of thinking non-nonsensically about the impacts of AI on our future. Nonetheless, they both seem sufficiently difficult that their practice is helped by the explicit study of rationality (so that e.g. heuristics and biases, and probability theory, are explicitly studied by many in finance).

Comment author: XiXiDu 19 November 2010 04:40:23PM 2 points [-]

What I said was rather meant to show that there are obvious reasons for which you might want to care about your retirement plan. What's very different is to predict that you shouldn't care about your retirement plan because either we'll be killed by superhuman AGI or join utopia as immortals. Less Wrong seems to be focused on the predictive nature of probability theory and on taking ideas seriously. I don't think that this is a good approach for any but the most intelligent and educated individuals and organisations. The traditional approach of relying on empirical data and the judgement of experts is to be favored for most people, in my opinion.

Comment author: wedrifid 19 November 2010 06:17:04PM *  4 points [-]

Less Wrong seems to be focused on the predictive nature of probability theory and on taking ideas seriously. I don't think that this is a good approach for any but the most intelligent and educated individuals and organisations. The traditional approach of relying on empirical data and the judgement of experts is to be favored for most people, in my opinion.

It is interesting to note that the latter is actually a form of the former, and a particularly strong one at that! In fact, the reasoning that you are using here is using the predictive nature of probability theory.

(That said, I agree that for most people biting the bullet when it comes to their abstract cognitions would be disastrous. Explosions, martyrs and deaths by stoning would abound. That, and people would actually act on terrible dating advice rather than their instincts.)

Comment author: wedrifid 19 November 2010 05:36:06PM 4 points [-]

The basic problem here is that the necessary education to follow Less Wrong will not only teach me to be wary of the arguments on Less Wrong but will also preclude me from acting on its suggestions. How so? The main consensus here seems to be Cryonics and the dangers of AGI research. If it isn't, then at least the top rationalist on Less Wrong isn't as rational as suggested, which undermines the whole intention of the original post. So for now I'll assume that those two conclusions are the most important you can arrive at by learning from Less Wrong. Consequently this means that someone like me should care to earn enough money to support friendly AI research and to buy a Cryonics contract. But this is directly opposed to what I would have to do to arrive at those conclusions and be reasonably sure about their correctness. Amongst other things I would have to study, which would not allow me to earn enough money for many years.

This seems to be nonsensical. In most of the relevant cultures of the readership one need not direct overwhelming amounts of one's time to acquiring resources for cryonics membership.

There is always going to be a trade off between spending time deciding what is the best thing to do and actually doing it. If you think you are best served doing personal development and educating yourself so that you can best direct your other efforts then do so. If not, don't. It doesn't seem to be a lesswrong specific problem.

Comment author: AnnaSalamon 19 November 2010 01:04:16PM *  4 points [-]

How should I interpret the above quote? If someone has to be able to follow the advanced arguments on Less Wrong to understand that an advanced education is disadvantageous yet necessary to understand this in the first place, how does Less Wrong help in deciding what to do? This is just an example of what I experience regarding Less Wrong. I'm unable to follow much of Less Wrong yet I'm told that it can help me decide what to do.

You should interpret the above quote as an example of a single statement made by a single person. The claim is not that you should read Less Wrong so that you can believe every single statement Eliezer ever made. The claim is that you should interact with the best aspiring rationalists you can find (which for many will mean Less Wrong, although there are plenty of others out there too) so that you can learn to think carefully yourself. That is, so that you can learn to consider issues one by one, to apply the Power of Positivist Thinking, to apply the same standards to claims you like and to claims you dislike, etc.

The main consensus here seems to be Cryonics and the dangers of AGI research. If it isn't, then at least the top rationalist on Less Wrong isn't as rational as suggested which undermines the whole intention of the original post. So I'll right now assume that those two conclusions are the most important you can arrive at by learning from Less Wrong.

Even if you’re correct that there’s wide agreement on those two points, it doesn’t follow that those two points are the most useful thing Less Wrong can give you. Less Wrong can not only give you claims labeled “true”, but can also improve your skill at deciding which claims are likely to be true.

But this is directly opposed to what I would have to do to to arrive at those conclusions and be reasonable sure about their correctness. Amongst other things I would have to study which would not allow me to earn enough money for many years.

In my experience, a relatively small amount of study (reading through the sequences and some similar material, and practicing the relevant subskills in in-person or online conversations) can significantly boost many folks’ epistemic rationality. This amount of study is compatible with earning a reasonable sum.

I’m a bit confused about what you’re trying to do in this comment. Are you curious, and honestly trying to untangle the issues and consider the evidence for and against each claim? Or what project are you engaged in?

Comment author: XiXiDu 19 November 2010 02:31:51PM *  5 points [-]

The claim is that you should interact with the best aspiring rationalists you can find.

And that claim is what I have been inquiring about. How is an outsider going to tell if the people here are the best rationalists around? Your post just claimed this but provided no evidence for outsiders to follow up on. The only exceptional and novel thesis to be found on LW concerns decision theory, which is not only buried but whose value one cannot judge without a prior education. The only exceptional and novel belief (prediction) on here is that regarding the risks posed by AGI. As with the former, one is unable to judge any claims as long as one does not read the sequences (as it is claimed). But why would one do so in the first place? Outsiders are unable to judge the credence of this movement except by what its members say about it. This is my problem when I try to introduce people to Less Wrong. They don't see why it is special! They skim over some posts and there's nothing new there. You have to differentiate Less Wrong from other sources of epistemic rationality. What novel concepts are to be found here? What can you learn from Less Wrong that you might not already know or that you won't come across elsewhere?

I’m a bit confused about what you’re trying to do in this comment. Are you curious, and honestly trying to untangle the issues and consider the evidence for and against each claim? Or what project are you engaged in?

Your post gives the impression that Less Wrong can provide great insights for personal self-improvement. That is indisputable, but those whom it might help won't read it anyway or won't be able to understand it. I just doubt that people like you learn much from it. What have you learnt from Less Wrong? How did it improve your life? I have no formal education, but what I've so far read of LW does not seem very impressive, in the sense that there was nothing there that disagreed with me in a way that would let me update my beliefs and improve my decisions. I haven't come across any post that gave me some feeling of great insight; most of it was either obvious or I had figured it out myself before (much less formally, of course). The most important idea associated with Less Wrong seems to be that concerning friendly AI. What's special about LW is the strong commitment here regarding that topic. That's why I'm constantly picking on it. And the best argument for why Less Wrong hasn't helped to improve people's perception of the topic of AI in some cases is that they are intellectually impotent. So if you are not arguing that they should give up, but rather learn more, then I ask how, if not through LW, which obviously failed.

To give a summary: anyone who considers reading the sequences won't be able to spot much that he/she doesn't already know or that seems unique. The people whom Less Wrong would help the most do not have the necessary education to understand it. And the most important conclusion, that one should care about AGI safety, is insufficiently differentiated from the huge amount of writing concerned with marginal issues about rationality.

Comment author: [deleted] 19 November 2010 04:49:58PM *  15 points [-]

Let me put down a few of my own thoughts on the subject.

I think it's odd that LessWrong spends so much time pondering whether or not it should exist! Most blogs don't do that; most communities (online or otherwise) don't do that. And if a person did that, you'd consider her rather abnormal. I view such discussion as noise; or at least a sidebar to more interesting topics.

I disagree that LW can't be useful to anyone who'd understand it. I offer my own experience as an example: it was useful to me in several ways.

  1. LW clinched my own break with religion (particularly the essay "Belief in Belief").

  2. Eliezer's explanation of quantum physics is very interesting, intuitive, and as far as I know isn't replicated in any textbook.

  3. LW introduced me to futurist topics that I simply hadn't heard of, or realized that sensible people thought about (cryonics, the Singularity).

  4. I met a few real-life friends through LW, for whom I have a lot of respect.

  5. Finally, as far as instrumental rationality goes, LW took the place of two other, lower-quality internet forums in my free-time budget, so I spend more time out of my day trying to be thoughtful, rather than sleazy and goofy.

A couple of common topics on LW aren't all that interesting to me. Productivity/time management advice just strikes me as a bit of a guilt trip, which I can come up with by myself, thank you very much. I don't like models for how the mind works that aren't based in anything empirical -- I mistrust that sort of thing. (Not that professional researchers don't do the same thing!) I'm not a fan of the periodic gender wars and the oops-someone-mentioned-politics-and-we-all-went-crazy catastrophes. And I have a pet peeve with the local convention of using rather colorless language, speaking in very general terms, and second-guessing oneself and each other all the time.

But apart from all that, it's a pretty damn good forum, and it does teach people new things.

Comment author: XiXiDu 19 November 2010 07:02:25PM 3 points [-]

Your recent post is a good example. A friendly math post! I gave up after reading "...within-cluster sum of squared differences..." :-)

Your points are really surprising. I do not interact with educated people in meatspace at all. I didn't think that someone who had reached your level of education needed LW to break with religion. And I've always been the kind of person to take futuristic topics seriously by default, so that was no surprise at all to me. I guess that is why people here are irritated when I argue that science fiction authors have been talking about many of the topics discussed here for a long time. I don't see how the fictional exploration of concepts could lower the credence of the subjects.

Nevertheless, I never tried to argue that Less Wrong is useless. It's one of my favorite places in the metaverse.

Comment author: wedrifid 19 November 2010 07:13:40PM *  8 points [-]

I didn't think that someone who reached your level of education needed LW to break with religion.

People certainly ought not need to. By that I mean that people with the general cognitive capacity of humans have more than enough ability to evaluate religion as nonsense given even basic modern education. But even so it is damn hard to break free. Part of what makes the 'Belief in Belief' post particularly useful is that it is written in an attempt to understand what is really going on when people 'believe' things that don't make sense given what they know.

The social factors are also important. Religion is essentially about signalling tribal affiliation. It is dangerous to discard a tribal identity without already having found yourself a new tribe - even a compartmentalised online tribe used primarily for cognitive affiliation.

Nevertheless, I never tried to argue that Less Wrong is useless. It's one of my favorite places in the metaverse.

This is something I have to remind myself of when reading your comments. You are sincere. To someone who didn't know your online self at all, some of your arguments and questions would seem far more rhetorical than you intended them. You actually do update on new information, which is a big deal!

Comment author: [deleted] 19 November 2010 07:11:34PM 3 points [-]

I didn't realize I was being unclear in that last post! Clearly it's one of those things that takes practice. (In my defense I really don't know where the median LW reader is at math; the level of that post was a wild guess.)

Glad you're not opposed to LessWrong as a place. I'm not certain myself whether it really fulfills its stated goal of helping people come to conclusions more rationally. (When decisions are actually hard, when empirical evidence is sparse and trial-and-error is impossible, I'm not sure it's possible to decide rationally at all!)

I think one thing it does is promote a norm of measured thinking, where we keep our emotions at a conversational level instead of letting them shout. I've definitely noticed that attitude spilling out into my everyday life, and I find myself checking "do I think that's really plausible or am I just saying it?"

Comment author: XiXiDu 20 November 2010 10:01:53AM *  1 point [-]

I didn't realize I was being unclear in that last post!

No, that isn't it. It's just that the math was above my current level of education. It was all Chinese to me! That doesn't mean that I am against advanced math posts. I believe more technical posts would improve Less Wrong a lot. I loved the recent posts by cousin_it. Even though the key issues have been above my head, they introduced me to so many new ideas. They gave me this feeling of discovering and learning something new and important. And the discussions they spawned have been of a higher standard, because nobody of lower education dared to say much. They also spawned awesome comments like this one. Your post is no different; I just deferred reading it until I had learnt the necessary math. Such posts actually give me an incentive to learn more.

How to improve Less Wrong:

  • Write more technical posts (including math).
  • Either: Define the demographics. Explicitly mention the level of education necessary for all of Less Wrong.
  • Or: Introduce labels rating the level of difficulty for each post.
  • Provide more background knowledge in each post you write through references and links.

Example:

If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X).

Someone like me has to look up each of the symbols. It would have been much easier this way: If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X) -- with each symbol linked to an explanation of its meaning.

  • Advance the FAQ and link to it on the frontpage (When should I write a top-level article?; You must read the sequences before commenting etc.).
  • Be kinder to people who don't know better. Try to link them to relevant material, and don't just state that they're wrong -- explain why and how they are wrong.

I think one thing it does is promote a norm of measured thinking, where we keep our emotions at a conversational level instead of letting them shout.

Yeah, I'm trying hard not to write without thinking. Sometimes I still fail, especially when I'm tired.

Comment author: wnoise 20 November 2010 12:18:52PM 2 points [-]

Someone like me has to look up each of the symbols. It would have been much easier this way: If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X).

"If" should not go to Conditional_(Programming), but "Logical Implication", though I don't see the need for a link. It really is just the standard meaning of "if", and if people don't know the meaning of "if", advanced rationality is probably a bit beyond what they can immediately use.

"1" as a link to percentage is odd as well. It's just the number one. Yes, people are often more used to it as 100% in the context of probability, but the link doesn't clarify that in any useful way.

The links for conditional probability and conjunction are great though. It's quite possible to not be familiar with those particular bits of notation.
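(As an aside, the identity being discussed can be checked with a few lines of code. The probabilities below are made up, chosen only so that P(Y|X) is close to 1.)

```python
# Sanity check of: if P(Y|X) ≈ 1, then P(X ∧ Y) ≈ P(X).
# The numbers are invented for illustration.
p_x = 0.30          # P(X)
p_y_given_x = 0.99  # P(Y|X), nearly 1

# Product rule: P(X ∧ Y) = P(Y|X) * P(X)
p_x_and_y = p_y_given_x * p_x

# The gap between P(X ∧ Y) and P(X) is exactly (1 - P(Y|X)) * P(X),
# which shrinks to zero as P(Y|X) approaches 1.
assert abs(p_x_and_y - p_x) <= (1 - p_y_given_x) * p_x + 1e-12
assert abs(p_x_and_y - p_x) < 0.01  # hence P(X ∧ Y) ≈ P(X)
```

The product rule does all the work: conjoining X with a Y that is nearly certain given X barely changes the probability.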

Comment author: XiXiDu 20 November 2010 01:23:18PM *  3 points [-]

I know, but you see what's the issue here. It actually has been a problem all my life. I'm really happy that there are now places like the Khan Academy and BetterExplained that actually explain such matters in a concise and straightforward way, not like school teachers who you never understand. Most of the time I only have to watch/read their explanation once to grasp it. Further they go into details you are never told about in school.

I guess I'm the kind of person who is unable to accept that 1+1=2 until someone explains the terms and operators. I only started with mathematics last year, with previous knowledge of basic arithmetic. Yet the first thing I tried to figure out was what '+' actually means. That showed me that infix operators are functions, and led me to the recursive and set-theoretic definitions of addition. Only at that point was I satisfied. Which reminds me of a problem I had in German lessons back in elementary school. I always insisted on pronouncing certain words the way I thought was most logically consistent, e.g. pronouncing 'st' not as 'sch'. Nobody ever told me that natural language evolved, and that pronunciation is just an axiomatic definition, a cultural consensus to say it a certain way, not something you can infer from the general to the specific. So I kept pronouncing it the way I thought was reasonable and ended up with bad grades. Such problems accumulated and I just stopped doing anything for school (also because I thought the other kids were all aliens). I've only been catching up for a few years now. English was the first thing I taught myself.

The links for conditional probability and conjunction are great though. It's quite possible to not be familiar with those particular bits of notation.

Hah! You must be one of those people who are only surrounded by educated folks. I don't know anyone in real-life who has any clue what a logical conjunction could be (been working as baker and doing roadworks). Something nasty maybe :-)

Comment author: wedrifid 19 November 2010 07:34:40PM 1 point [-]

I find myself checking "do I think that's really plausible or am I just saying it?"

I find myself checking "I think that's really plausible. That can't be good. I wonder what I should be saying instead to be socially successful." ;)

Comment author: wedrifid 19 November 2010 07:42:58PM 2 points [-]

A friendly math post! I gave up after reading "...within-cluster sum of squared differences..." :-)

It is easy for a math-literate person to overestimate how obvious certain jargon is to people -- 'sum of squared differences', for example. Squared differences are just what is involved when you are calculating things like standard deviation. It's what you use when looking at, say, a group of people and deciding whether they all have about the same height, or whether some are really tall and others really short -- how different they are.

For those who have never had to manually calculate the standard deviation and similar statistics the term would just be meaningless. (Which makes your example a good demonstration of your point!)
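(The manual calculation in question is short; a sketch with invented heights, in cm:)

```python
# Standard deviation computed by hand for hypothetical heights.
heights = [150, 160, 170, 180, 190]

mean = sum(heights) / len(heights)
squared_diffs = [(h - mean) ** 2 for h in heights]  # the "squared differences"
variance = sum(squared_diffs) / len(heights)        # their average
std_dev = variance ** 0.5                           # how spread out the heights are
```

A small `std_dev` means everyone is about the same height; a large one means the group mixes the really tall with the really short.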

Comment author: komponisto 20 November 2010 02:05:41AM *  9 points [-]

Squared differences is just what is involved when you are calculating things like standard deviation

Never mind that; just parse the damn phrase! All you need to know is what a "difference" is, and what "to square" means.

Why, I wonder, do people assume that words lose their individual meanings when combined, so that something like "squared differences" registers as "[unknown vocabulary item]" rather than "differences that have been squared"?

Comment author: wedrifid 20 November 2010 05:34:07AM 14 points [-]

Why, I wonder, do people assume that words lose their individual meanings when combined, so that something like "squared differences" registers as "[unknown vocabulary item]" rather than "differences that have been squared"?

Because quite often sophisticated people will punish you socially if you don't take special care to pay homage to whatever extra meaning the combined phrase has taken on. Caution in such cases is a practical social move.

Comment author: multifoliaterose 20 November 2010 05:53:24AM 5 points [-]

Good observation; I had been subliminally aware of it but nobody had ever pointed it out to me explicitly.

Comment author: kragensitaker 24 November 2010 01:12:33PM 1 point [-]

It's also very helpful to know things like why someone might go around squaring differences and then summing them, and what kinds of situations that makes sense in. That way you can tell when you make errors of interpretation. For example, "differences pertaining to the squared" is a plausible but less likely interpretation of "squared differences", but knowing that people commonly square differences and then sum them in order to calculate an L₂ norm, often because they are going to take the derivative of the result so as to solve for a local minimum, makes that a much less plausible interpretation.

And for a Bayesian to be rational in the colloquial sense, they must always remember to assign some substantial probability weight to "other". For example, you can't simply assume that words like "sum" and "differences" are being used with one of the meanings you're familiar with; you must remember that there's always the possibility that you're encountering a new sense of the word.
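(The "solve for a local minimum" step mentioned above has a concrete payoff: setting the derivative of the sum of squared differences to zero yields the mean. A quick check with made-up data:)

```python
# The mean minimizes sum((x - c)^2):
#   d/dc sum((x - c)^2) = -2 * sum(x - c) = 0  =>  c = mean.
data = [2.0, 3.0, 7.0]

def sum_sq_diff(c, xs):
    return sum((x - c) ** 2 for x in xs)

mean = sum(data) / len(data)  # 4.0

# No nearby candidate does better than the mean.
for candidate in [3.0, 3.9, 4.1, 5.0]:
    assert sum_sq_diff(mean, data) <= sum_sq_diff(candidate, data)
```

This is why squaring-then-summing shows up so often: the minimum has a closed form, found by elementary calculus.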

Comment author: Peter_de_Blanc 19 November 2010 07:58:24PM 2 points [-]

For those who have never had to manually calculate the standard deviation and similar statistics the term would just be meaningless. (Which makes your example a good demonstration of your point!)

Really? I think I would have understood that sentence before the first time I tried to calculate a standard deviation manually. In general, there are many ways to arrive at an understanding of a concept. I'm very skeptical of statements of the form "you can't understand X without doing Y first."

Comment author: wedrifid 19 November 2010 08:04:42PM 1 point [-]

I was being polite.

Comment author: XiXiDu 24 November 2010 04:30:13PM *  2 points [-]

What do you mean? Are you saying that everyone with an average IQ is supposed to be able to understand what it means to minimize the within-cluster sum of squared differences, regardless of education? I don't know what a standard deviation is either; I am able to read Wikipedia, understand what to do, and use it. I know what 'squared' means and I know what 'differences' means. I just expected the sentence to mean more than the sum of its parts. Also, I do not call the mere ability to use tools comprehension. What I value is knowing when to use a particular tool, how to use it effectively, and how it works.

You could teach stone-age people to drive a car; it would still seem like magic to them. Yet if you cloned them and exposed the clones to the right circumstances, they might actually understand the internal combustion engine once grown up. Same IQ. Likewise, the server WolframAlpha runs on does possess a certain potential, yet what enables that potential are the five million lines of Mathematica.

I'd be really surprised if one was able to understand the sentence on first reading with a self-taught one-year educational background in mathematics. That doesn't mean there are no exceptions; I'm just not a prodigy.

Comment author: [deleted] 26 November 2010 01:53:39PM 1 point [-]

I think you're right. "Sum of squared differences" makes sense as a normal thing to do with data points only if you've learned that it's a measure of how spread apart they are, that it's equivalent to the variance, and that making the variance small is a good way to ensure that a cluster is "well clumped." There is a certain amount of intuition that's built up from experience.
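(For the curious, the full phrase from the disputed sentence -- "within-cluster sum of squared differences" -- decomposes the same way. A hedged sketch with invented one-dimensional clusters:)

```python
# The quantity clustering tries to make small: for each cluster, sum the
# squared differences of its points from the cluster's center, then total.
def within_cluster_ssd(clusters):
    total = 0.0
    for cluster in clusters:
        center = sum(cluster) / len(cluster)
        total += sum((x - center) ** 2 for x in cluster)
    return total

tight = within_cluster_ssd([[1.0, 1.2, 0.8], [9.0, 9.5, 8.5]])   # well clumped
loose = within_cluster_ssd([[1.0, 9.5, 0.8], [9.0, 1.2, 8.5]])   # badly mixed
assert tight < loose  # well-clumped clusters score lower
```

Minimizing this total is what makes an assignment of points to clusters "well clumped" in the sense discussed above.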

Comment author: wedrifid 26 November 2010 12:34:26PM 0 points [-]

What do you mean? Are you saying that everyone with an average IQ is supposed to be able to understand what it means to minimize the within-cluster sum of squared differences, regardless of education?

No, approximately the opposite of that. Are you sure you didn't intend this to be a reply to Peter? It seems to be quite an odd reply to me in the context.

Comment author: John_Maxwell_IV 24 November 2010 08:32:19AM 2 points [-]

I think it's odd that LessWrong spends so much time pondering whether or not it should exist! Most blogs don't do that; most communities (online or otherwise) don't do that. And if a person did that, you'd consider her rather abnormal. I view such discussion as noise; or at least a sidebar to more interesting topics.

Oh come on. You really think the fact that no one else is doing it means it is a bad idea?

And besides, Hacker News also has periodic controversies over the fact that some of its users read it instead of hacking. My guess is that any forum populated by ambitious people will have periodic controversies over whether it should be killed off/re-channeled/etc. And that's a good thing.

Comment author: [deleted] 24 November 2010 07:52:06PM 0 points [-]

That is a good point. Although generally I'm a fan of conformity; it's often a sign that you're doing things right.

Comment author: John_Maxwell_IV 26 November 2010 09:42:56PM 0 points [-]

Sure, but that should be a very weak heuristic.

If you sample from the set of online communities you know of, you'll tend to see bigger and longer-lasting ones more frequently than smaller and shorter-lasting ones. So by conforming with the online communities you see, you're copying the traits that make a community larger and longer-lasting. That's not obviously a good thing.

Comment author: AnnaSalamon 19 November 2010 05:00:08PM *  13 points [-]

And that claim is what I have been inquiring about. How is an outsider going to tell if the people here are the best rationalists around? Your post just claimed this[.]

My post didn't claim Less Wrong contains the best rationalists anywhere. It claimed that for many readers, Less Wrong is the best community of aspiring rationalists that they have easy access to. I wish you would be careful to be clear about exactly what is at issue and to avoid straw man attacks.

As to how to evaluate Less Wrongers’, or others’, rationality skills: It is hard to assess others’ rationality by evaluating their opinions on a small number of controversial issues. This difficulty stems partly from the difficulty of oneself determining the right answers (so as to know whether to raise or lower one’s estimate of others with those views). And it stems in part from the fact that a small number of yes/no or multiple-choice-style opinions will provide only limited evidence, especially given communities’ tendency to copy the opinions of others within the community.

One can more easily notice what processes LWers and others follow, and one can ask whether these processes are likely to promote true beliefs. For example, LWers tend to say they’re aiming for true beliefs, rather than priding themselves in their faith, optimism, etc. Also, folks here do an above-average job of actually appearing curious, of updating their claims in response to evidence, of actively seeking counter-evidence, of separating claims into separately testable/evaluable components, etc.

At the risk of repeating myself: it is these processes that I, and at least some others, have primarily learned from the sequences/OB/LW. This material has helped me learn to actually aim for accurate beliefs, and it has given me tools for doing so more effectively. (Yes, much of the material in the sequences is obvious in some sense; but reading the sequences moved it from “somewhat clear when I bothered to think about it” to actually a part of my habits for thinking.) I’m a bit frustrated here, but my feeling is that you are not yet using these habits consistently in your writing -- you don’t appear curious, and you are not carefully factoring issues into separable claims that can be individually evaluated. If you do, we might make more progress talking together!

Comment author: multifoliaterose 19 November 2010 05:29:56PM *  10 points [-]

My impression is that XiXiDu is curious and that what you're frustrated by has more to do with his difficulty expressing himself than with closed-mindedness on his part. Note that he compiled a highly upvoted list of references and resources for Less Wrong - I read this as evidence that he's interested in Less Wrong's mission and think that his comments should be read more charitably.

I'll try to recast what I think he's trying to say in clearer terms sometime over the next few days.

Comment author: AnnaSalamon 19 November 2010 07:43:50PM 6 points [-]

I agree with you, actually. He does seem curious; I shouldn't have said otherwise. He just also seems drawn to the more primate-politics-prone topics within Less Wrong, and he seems further to often express himself in spaghetti-at-the-wall mixtures of true and untrue, and relevant and irrelevant statements that confuse the conversation.

Less Wrong is a community that many of us care about; and it is kind, when one is new to a community and is still learning to express oneself, to tread a little more softly than XiXiDu has been.

Comment author: multifoliaterose 20 November 2010 06:53:06AM 2 points [-]

He just also seems drawn to the more primate-politics-prone topics within Less Wrong

Arguably the primate-politics-prone topics are the most important ones; the tendency that you describe can be read as seriousness of purpose.

he seems further to often express himself in spaghetti-at-the-wall mixtures of true and untrue, and relevant and irrelevant statements that confuse the conversation.

Less Wrong is a community that many of us care about; and it is kind, when one is new to a community and is still learning to express oneself, to tread a little more softly than XiXiDu has been.

Agreed.

Comment author: wedrifid 19 November 2010 07:51:33PM *  2 points [-]

Less Wrong is a community that many of us care about; and it is kind, when one is new to a community and is still learning to express oneself, to tread a little more softly than XiXiDu has been.

Not to mention more pragmatic socially in the general case. Unless you believe you have the capacity to be particularly dominant in a context and wish to introduce yourself near the top of a hierarchy. Some people try that here from time to time, particularly those who think they are impressive elsewhere. It is a higher risk move and best used when you know you will be able to go and open a new set, I mean community, if your dominant entry fails.

Comment author: shokwave 20 November 2010 06:34:31AM *  2 points [-]

Some people try that here from time to time,

Confession: Having a few muddled ideas of signalling in mind when I joined LessWrong, I knew of this pattern (works really well at parties!) and decided that people here were too savvy, so I specifically focused on entering as low as possible in the hierarchy. I'm curious whether that was well-received because of various status reasons (made others feel higher-status) or because it was simply more polite and agreeable.

Comment deleted 19 November 2010 08:37:14PM *  [-]
Comment author: multifoliaterose 20 November 2010 07:27:02AM *  8 points [-]
  1. Though there are many brilliant people within academia, there is also shortsightedness and group-think within academia which could have led the academic establishment to ignore important issues concerning safety of advanced future technologies.

  2. I've seen very little (if anything) in the way of careful rebuttals of SIAI's views from the academic establishment. As such, I don't think that there's strong evidence against SIAI's claims. At the same time, I have the impression that SIAI has not done enough to solicit feedback from the academic establishment.

  3. John Baez will be posting an interview with Eliezer sometime soon. It should be informative to see the back and forth between the two of them.

  4. Concerning the apparent group think on Less Wrong: something relevant that I've learned over the past few months is that some of the vocal SIAI supporters on LW express views that are quite unrepresentative of those of the SIAI staff. I initially misjudged SIAI on account of past unawareness of this point.

  5. I believe that if you're going to express doubts and/or criticism about LW and/or SIAI you should take the time and energy to express these carefully and diplomatically. Expressing unclear or inflammatory doubts and/or criticism is conducive to being rejected out of hand. I agree with Anna's comment here.

Comment author: XiXiDu 20 November 2010 10:11:58AM 2 points [-]

John Baez will be posting an interview with Eliezer sometime soon. It should be informative to see the back and forth between the two of them.

Wow, that's cool! They read my mind :-)

Comment author: David_Gerard 19 November 2010 09:35:52PM *  3 points [-]

Even Eliezer Yudkowsky doesn't believe he's the smartest person alive. He's the founder of the site and set its tone early, but that's not the same thing.

Finding people smarter than oneself is essential to making oneself more effective and stretching one's abilities and goals.

For an example I'm closely familiar with: I think one of Jimmy Wales' great personal achievements with Wikipedia, as an impressively smart fellow himself, is that he discovered an extremely efficient mechanism for gathering around him people who made him feel really dumb by comparison. He'd be first to admit that a lot of those he's gathered around him outshine him.

Getting smarter people than yourself to sign up for your goals is, I suspect, one marker of success in selecting a good goal.

Comment deleted 20 November 2010 10:22:36AM [-]
Comment author: multifoliaterose 20 November 2010 03:52:18PM 3 points [-]

But it's getting better.

I agree; the average quality of your comments and posts has been increasing with time and I commend you for this.

When I read who multifoliaterose is, I wanted to sink into the ground for even daring to bother you people with my gibberish.

This statement carries the connotation that I'm very important. At present I don't think that there's solid evidence in this direction. In any case; no need to feel self-conscious about taking my time, I'm happy to make your acquaintance and engage with you.

Comment author: Eneasz 01 December 2010 06:39:15PM 5 points [-]

To give a summary: anyone interested enough to consider reading the sequences won't be able to spot much that they don't already know or that seems unique. The people whom Less Wrong would help the most do not have the necessary education to understand it.

I disagree strongly. Using myself as my only data-point (flawed, I know, but deeply relevant for me) the exact opposite is true. I had enough education to understand (nearly) everything (some of the more advanced math took extra study to follow). But I had never been exposed to such a large amount of concentrated sanity in writing. The greatest asset of LW wasn't that it provided education I didn't have, but rather that it provided sanity I'd never been exposed to. That made a huge difference.

Comment author: Perplexed 20 November 2010 08:49:48PM 4 points [-]

How is an outsider going to tell if the people here are the best rationalists around? Your post just claimed this but provided no evidence for outsiders to follow through on. The only exceptional and novel thesis to be found on LW concerns decision theory, which is not only buried but also impossible to judge for value without a prior education. The only exceptional and novel belief (prediction) on here is that regarding the risks posed by AGI.

Obviously you cannot form a good judgment as to whether a person is a good rationalist by determining whether his opinion on a difficult subject matches your opinion. And, even more obviously, you can't do so based on Anna's authority.

Instead, you need to interact with the person on an issue of intermediate difficulty and notice whether what he says clears cobwebs from your mind and shines light in dark corners. Or whether you come away from the conversation more confused and in the dark than before.

You may notice that I am implicitly defining rationalism in terms of how well a person communicates rather than how well they think. And even more than that, I am focusing on how well he communicates with you, rather than how well he communicates in general. If you wish, you can object, saying "That is not rationalism". Well, perhaps not. But it is the characteristic you should seek out in your interlocutors.

Comment author: PhilGoetz 21 November 2010 05:50:48PM 0 points [-]

I have no formal education, but what I've so far read of LW does not seem very impressive, in the sense that there was nothing that disagreed with me so that I could update my beliefs and improve my decisions. I haven't come across any post that gave me some feeling of great insight; most of it was either obvious or something I figured out myself before (much less formally, of course).

That shouldn't happen, because many posts are controversial and/or have little support, and some posts contradict each other.

Comment author: XiXiDu 21 November 2010 07:40:47PM *  2 points [-]

I haven't been specific in what I said. There actually have been some posts that introduced me to new concepts and allowed me to feel more satisfied in believing certain things. Only because of LW was I able to compile this curriculum. Although, in my opinion, there are many more insightful and novel comments than there are posts. I don't want to appear arrogant here or downplay the value of Less Wrong; I actually believe it is one of the most important resources. I just haven't read enough of LW yet to notice any major contradictions. That also means that there might be great insights I haven't come across yet. I also don't think that most ideas here need much support (the top-ranked post seems to be an outlier).

But take a look at some popular posts: where do you disagree, or what have they taught you that you didn't already come up with on your own? Take for example the Ugh fields. Someone like me, who managed to abandon religion without any help on his own, reads that post, agrees wholeheartedly, and upvotes it. But has it helped me? No; I'm rather the kind of person who naturally takes this attitude too seriously -- I consciously overthink things until I completely leave near mode and operate in far mode only. I thought your post on self-fulfilling correlations was awesome, but there was no novel insight for me in it either. I know lots of people who should read your post and would benefit from it a lot. But such people won't read it. People like me, who visit a psychologist because they know they need help, won't be surprised when Jodie Foster's character in the movie Contact admits it could have all been some illusion. People like me are naturally aware that they could be dreaming; doubt and the possibility of self-delusion are fundamental premises. But the people who really would have to go to a psychologist, or read Less Wrong, believe they are perfectly normal or don't need to be told anything.

What I'm trying to say is that if Less Wrong wants to change the world rather than be a place where hyper-rationalists can collectively pat their backs, you need to think about how to reach the people who need to know about it. And you need feedback: you have to figure out why people like Ben Goertzel fail to share some of the conclusions being made here, and update accordingly.

Comment author: timtyler 21 November 2010 07:57:53PM *  0 points [-]

Were there ever any references identifying the Scary Idea as an official SIAI belief?

I think that - if they comment at all - they would come back with something like:

OK - so you don't think that unconstrained machine intelligence is "highly likely" to automatically DESTROY ALL LIFE AS WE KNOW IT. So: what do you think the chances of that happening are?!?

Comment author: XiXiDu 22 November 2010 10:57:43AM *  0 points [-]

Does Eliezer believe that working on friendly AI and supporting friendly AI research is the most important and most rational way to positively influence the future of humanity? If he thinks so, then is it reasonable to suspect that his rationale for starting to write on matters of rationality was to plead his case for friendly AI research and convince other people that it is indeed the most effective way to help humankind? If not, what was his reason to start blogging on Overcoming Bias and Less Wrong? Why has he spent so much time helping people to become less wrong rather than working directly on friendly AI? How can you be less wrong and still doubt that you should support friendly AI research?

I still suspect that everything he does is a means to an end. I'm also of the opinion that if one reads all of Less Wrong and afterwards (in the case that one wants to survive and benefit humanity) is still unable to conclude that the best way to do so is by supporting the SIAI, then either one did not understand due to a lack of intelligence, or Less Wrong failed to convey its most important message. Therefore you should listen to the people who have read Less Wrong and disagree. You should also try to reach the people who haven't read Less Wrong but should, because they are in a position that makes it necessary for them to understand the issues in question.

Comment author: timtyler 22 November 2010 07:28:17PM *  1 point [-]

Well, I tend to think that working on and supporting machine intelligence research is probably the most important way to positively influence the future of civilisation. The issue of what we want the machines to do is a part of the project.

So, such beliefs don't seem particularly "far out" - to me.

FWIW, Yudkowsky describes his motivation in writing about rationality here:

http://lesswrong.com/lw/66/rationality_common_interest_of_many_causes/

Comment author: Vladimir_Nesov 21 November 2010 06:03:27PM 0 points [-]

That shouldn't happen

...and is therefore evidence against your model (for what it's worth).

Comment author: orthonormal 22 November 2010 01:17:40AM 0 points [-]

It could also be an instance of this.

Comment author: Vladimir_Nesov 22 November 2010 01:41:12AM 0 points [-]

Aren't these one and the same? I had "for what it's worth" in there to account for uncertainty and probable weakness of the effect.

Comment author: orthonormal 22 November 2010 01:42:46AM 0 points [-]

Denotation and connotation.

Comment author: wedrifid 19 November 2010 05:29:33PM 2 points [-]

"I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified." Eliezer Yudkowsky, So You Want To Be A Seed AI Programmer

How should I interpret the above quote? If someone has to be able to follow the advanced arguments on Less Wrong to understand that an advanced education is disadvantageous yet necessary to understand this in the first place, how does Less Wrong help in deciding what to do?

Your line of questioning here just seems strange in the context of the quote. The quote seems straightforward and not even all that relevant to whether Less Wrong is useful for people who struggle to understand it. To the kind of people who have even a remote possibility of doing useful work on a seed AI, it is just a trivial statement of Eliezer's personal opinion. While many don't agree with him, Eliezer has written elsewhere on his opinion of academic orthodoxy, as well as on his own development with respect to his approach to AI. Such opinions can just be taken with a grain of salt, as they would be from anyone else.

This is just an example of what I experience regarding Less Wrong. I'm unable to follow much of Less Wrong yet I'm told that it can help me decide what to do.

There is value in making things as accessible as possible where this can be done without sacrificing the depth of the content. At the same time, there are always going to be people who are not capable of following content on complex topics, whether that be rationality or anything else. Ultimately, all communities, whether online or off, have a target demographic and are not for everyone.

Comment author: Louie 20 November 2010 12:50:24PM 2 points [-]

Thanks for your post Anna. It really got me thinking. I was going to write you a long comment here but it got too long so I made it into a new top level post.

And don't let the concern trolls get you down... remember... they totally support us!

Comment author: Vaniver 24 November 2010 08:52:24PM 1 point [-]

It seems to me that even if someone is a concern troll, you gain nothing from a rationality standpoint by identifying them as such, and often lose quite a bit. Have no villains.

Comment author: Armok_GoB 24 November 2010 08:42:33PM 0 points [-]

I hadn't noticed this was what was going on. It really needs to be brought to attention in the relevant threads much more clearly.

Comment author: multifoliaterose 18 November 2010 11:22:18PM *  2 points [-]

Thanks for making this post. I especially like the paragraph:

> There is similarly no easy way to use the "try it and see" method to sort out what ethics and meta-ethics to endorse, or what long-term human outcomes are likely, how you can have a positive impact on the distant poor, or which retirement investments really will be for the next forty years. For these goals we are forced to use reasoning, as failure-prone as human reasoning is. If the issue is tricky enough, we're forced to additionally develop our skill at reasoning -- to develop "epistemic rationality".

Comment author: Alicorn 18 November 2010 11:17:10PM *  2 points [-]

[safe bets](link AAA ratings?)

This and other clues lead me to believe that this post was published inadvertently.

Comment author: AnnaSalamon 19 November 2010 12:16:35AM 4 points [-]

Thanks, Alicorn. It was published advertently, but I'd failed to adequately check for typos; it's fixed now.

Comment author: Aurini 23 November 2010 11:22:20PM *  1 point [-]

I think a lot of the benefit I've derived from LessWrong is subconscious in nature - rational algorithms and basic Logic 101 are such a core part of the community that you wind up adopting them on a deeper level than you do just by learning about them. "If A then B; A, therefore B; but B does not imply A" is easy to learn - 20 minutes at most for a particularly slow student - but applying it is a whole other story.

For example, an idea recently struck me during a conversation (I plan to write a more detailed piece for my blog about this): in regards to Iraq, the alleged WMDs were a major source of debate. They were the casus belli, and back in the winter of 2002 their non-existence would have undermined the whole war. It only took about a year or two for proof of their non-existence to appear, but the debate continued raging for another five. Proof aside, most people still believed that WMDs - or 'something just like them' - actually existed.

With the pro-Iraq, pro-WMD side there was always the implied caveat "...but if I am wrong about WMDs, then I agree that the war is unjust." Their vigorous arguments, and their dismissal of the facts, only make sense if the war in Iraq was contingent on the existence of WMDs. Certainly, there were people like Hitchens who supported the war for other reasons - but for most people WMDs were the be-all end-all argument for the war.

Nowadays you don't hear WMDs mentioned at all - presumably most people now accept that there were none - and yet those who believed in WMDs still support the war, or if they don't it's for different reasons - they aren't against the 2002 invasion, they're against the current situation.

Intellectual honesty on their part would require them to be against the war. Instead they chose, a priori, to be for the war, and WMDs were simply the argument they learned to vomit up. After that argument was dismissed, they found other reasons to be for the war.

How does all of this relate to less wrong? Four years ago I don't think I would have noticed this. I'd taken classes in logic at the time, and I was well aware of a fuzzy version of "Politics is the Mindkiller" - but if I'd come up with such an idea, it would have taken a week of percolation. Nowadays this idea crystallized almost instantly between sips of beer.

LessWrong is mostly my 'play time' - but like most forms of play... well, most forms in the evolutionary setting, anyway - it's play that's focused on real-world results.

I am quantifiably smarter for all the time I waste here - calling it a software patch or upgrade is bang on.

EDIT: Just wanted to say that I might have some of my dates wrong. For my blog I'll actually do the research so that I don't sound like an idiot, particularly in regards to the progress of the WMD debate. I'm fairly certain the evidence will back up my thoughts on the matter, but it is possible that my anecdotes don't reflect reality. Herp derp, probably wouldn't have thought of that without LW either.

Comment author: Vladimir_Nesov 23 November 2010 11:33:48PM *  2 points [-]

(As an out-of-context remark.)

"...but if I am wrong about WMDs, then I agree that the war is unjust."

Under the assumption that the existence of WMDs justifies the war, the war was justified if the decision-makers believed that WMDs probably existed, as a result of an honest attempt to discern the truth. Whether WMDs actually existed is wholly irrelevant, except as evidence about the state of knowledge of the decision-makers at the time.

(The decision to pull out of the war based on new evidence is a separate question, since the situation is different.)

Comment author: Aurini 24 November 2010 12:17:10AM 0 points [-]

Except then you'd be right to demand that they state something like: "You were right, the war was unjustified; perhaps we need better monitoring of our leaders. My current stance is that the war should be continued for reasons X."

There are smart people who held consistent pro-war opinions, granted - personally I'm trying not to take a stance on the war itself, in this situation - but the majority are chock full of cognitive dissonance.

Comment author: Vladimir_Nesov 24 November 2010 12:19:02AM *  0 points [-]

There are certainly many problems associated with any heated debate. I only addressed one point (so your reply is not to me; at least, I didn't understand a single point you made in it, so perhaps something in there was intended to be on the point I addressed).

Comment author: Aurini 24 November 2010 05:17:13AM -1 points [-]

This is why I need to write a proper article about this. My first post was only a shorthand sketch, which now leaves me feeling like the reverse-Homer Simpson ("Sorry if it SOUNDED sarcastic.").

:)