It is probably simply structural that the LessWrong community tends to be about armchair philosophy, science, and math. If there are people who have read through Less Wrong, absorbed its worldview, and gone out to "just do something", then they probably aren't spending their time bragging about it here. If it looks like no one here is doing any useful work, that could really just be sampling bias.
Even so, I expect that most posters here are more interested in reading, learning, and chatting than in thoroughly changing who they are and what they do. Reading, learning, and chatting is fun! Thorough self-modification is scary.
Thorough and rapid self-modification, on the basis of things you've read on a website rather than things you've seen tested and proven in combination, is downright dangerous. Try things, but try them gradually.
And now, refutation!
So what's the solution?
To, um, what, exactly? I think the question whose solution you're describing is "What ought one do?" Of these, you say:
...It wont feel like the right thing to do; your moral intuitions (being designed to operate in a small community of hunter gatherers) are unlikely to suggest to you anything near t
Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.
Depending on your goal (rationality is always dependent on a goal, after all), I might disagree. Rational behaviour is whatever makes you win. If you view your endeavour as a purely theoretical undertaking, I agree, but if you consider reality as a whole you have to take into account how your behaviour comes across. There are many forms of behaviour that would be rational but would make you look like an ass if you don't at least take the time to explain the reasons for your behaviour to those who can affect your everyday life.
If nothing else, the assertion that the right and rational thing will not feel like the right thing to do really needs support. Our moral intuitions may not be perfect, but there are definite parallels between small communities of hunter-gatherers and modern society that make a fair portion of those intuitions applicable today.
That it was just laid out without even a reference to back it up… come on.
I checked TwistingFingers's post history, and I noticed that he is also the author of a post entitled LessWrong gaming community.
Choice quote: "Many of us enjoy expressing ourselves through electronic games."
Quite how this squares with his aspiration to become an optimization process is beyond me. Optimizing for lulz, maybe.
LessWrongers as a group are often accused of talking about rationality without putting it into practice
Evidence? Who accuses them of this? One post (on Less Wrong itself!) is not evidence enough for this claim.
who gets to be in our CEV
Since this barb is directed at me, I should respond. When I come across a superb intellect like Yudkowsky, I first shut up and read the bulk of what he has to say (in Yudkowsky's case, this is helpfully packaged in the sequences). Then I apply my modest intellect to exploring the areas of his thinking that I do not fin...
I declare Crocker's rules on the writing style of this post.
Okay then:
in an affective death spiral
I think a more appropriate buzzword might be evaporative cooling of group beliefs. It's not immediately clear how "armchair rationalists" would be more predisposed to affective death spirals than instrumental rationalists.
altruism such as "efficient charity"
altruism such as Efficient Charity. (Note the period.)
It wont feel
It won't feel
being designed to
having evolved to
This behavior is particularly insidious because it is self-reinforcing
As I understand your post, the behavior you mean is talking about rationality without putting it into practice. But the way it is written sounds to me like you mean accusing LW of talking about rationality without putting it into practice.
...A recent attempt to counter this trend or at least make us feel better about it was a series of discussions on "leveling up": [...] stands in stark contrast to articles emphasizing practical altruism such as "efficient charity"
I think you are doing it wrong.
My reading of TwistingFingers's words was that s/he did mean "please feel free to be harsh about me", not "I wish to be free to be harsh about others". I don't see what other interpretation is possible, given "on the writing style of this post".
Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.
Under your judgement your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.
Considering the problems you bring up, I think Less Wrong may benefit from increased categorization of thought by adding new levels other than Main and Discussion. And considering your advice, I'll try not to be overly humble/nice about it.
How wide of a net d...
Meta-comment: most replies at time of posting seem to be questioning whether a problem exists and quibbling with the style of the post, rather than proposing solutions. This doesn't seem like a good sign.
Proposed solution: If we consider rationality as using the best methods to achieve your goals (whatever they may be) then there are direct ways the Less Wrong material can help.
Firstly, define your goals and be sure that they are truly your goals.
Secondly when pursuing your goals retrieve information as needed that helps you make better decisions and hence...
I don't really understand what the problem you're diagnosing is supposed to be or what it is you're asking for.
First silly thing that comes to mind: "Use rationality to determine an end goal, and a rational authority to trust. Then condition yourself to follow both blindly and without exception. Then stop caring about whether you're being rational or not."
Yeah, it's silly. No, I'm not endorsing it or even saying it's any less silly than it sounds. But it DOES fulfil your criteria.
A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.
Surely, if those things go against the Grand Maximally Efficient Thing To Do, they should be shed. But in general, if they are not an obstacle, they make our lives a little more pleasant. Ah, but can a true human rationalist really do without humility, sentimental empathy, or the absurdity heuristic? Are those things humans can do without, even if they want to?
And more: how do you know that the Solution is the correct Solution?
I'm aware of the typos. I am not allowed to edit for 6 more minutes. Please don't respond to this thread as I will delete it once I have edited the OP.
Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source -- it cannot be correlated with reality if there is no vehicle by which the information he claims to have could have reached him other than his own imagination, however accurate that imagination might be.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence, so you should adjust your confidence up. If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have only known him to make declarations about which team won a game when he knows which team won, then you have reason to believe that his statement is strongly correlated with reality, even if you don't know the mechanism by which he came to decide to say that the Sportington Sports won.
If you happen to know that your friend has just gotten out of a locked room with no television, phone reception, or internet access where he spent the last couple of days, then you should assume an extremely low correlation between his statement and reality. But if you do not know the mechanism, you must weight his statement according to how strong you expect his mechanism for establishing correlation with the truth to be.
There is a permanent object outside my window. You do not know what it is, and if you try to assign probabilities to all the things it could be, you will assign a very low probability to the correct object. You should assign pretty high confidence that I know what the object outside my window is, so if I tell you, then you can assign much higher probability to that object than before I told you, without my having to tell you why I know. You have reason to have a pretty high confidence in the belief that I am an authority on what is outside my window, and that I have reliable mechanisms for establishing it.
If I tell you what is outside my window, you will probably guess that the most likely mechanism by which I found out was by looking at it, so that will dominate your assessment of my statement's correlation with the truth (along with an adjustment for the possibility that I would lie.) If I tell you that I am blind, type with a braille keyboard, and have a voice synthesizer for reading text to me online, and I know what is outside my window because someone told me, then you should adjust your probability that my claim of what is outside my window is correct downwards, both on increased probability that I am being dishonest, and on the decreased reliability of my mechanism (I could have been lied to.) If I tell you that I am blind and psychic fairies told me what is outside my window, you should adjust your probability that my claim is correlated with reality down much further.
The "trust mechanism," as you call it, is not a device that exists separate from issues of evidence and probability. It is one of the most common ways that we reason about probabilities, basing our confidence in others' statements on what we know about their likely mechanisms and motives.
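This way of reasoning about testimony can be made explicit with Bayes' rule. The sketch below is illustrative only: the reliability numbers are assumptions I'm inventing for the example, not figures from the discussion above.

```python
# Treating a friend's claim as evidence via Bayes' rule.
# p_claim_if_true / p_claim_if_false encode how often he makes such a
# claim when it is true vs. when it is false (assumed numbers).

def posterior(prior, p_claim_if_true, p_claim_if_false):
    """P(event | friend claims it happened)."""
    evidence = prior * p_claim_if_true + (1 - prior) * p_claim_if_false
    return prior * p_claim_if_true / evidence

# Prior 0.5 that the Sportington Sports won. A reliable reporter who
# asserts wins 90% of the time when true and 5% when false:
print(posterior(0.5, 0.90, 0.05))  # ≈ 0.947: strong evidence

# The friend just out of the locked room: his claim is uncorrelated
# with the outcome (equal likelihoods), so the prior is unchanged:
print(posterior(0.5, 0.50, 0.50))  # = 0.5: no evidence at all
```

The point of the second call is the one made above: "trust" is not a separate faculty, just a likelihood ratio, and a mechanism that cannot have tracked the truth contributes a ratio of one.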
This is an area where "Bayesian rationality" is insufficient -- it fails to reliably distinguish between "what I believe" and "what I can confirm is true".
You can't confirm that anything is true with absolute certainty, you can only be more or less confident. If your belief is not conditioned on evidence, you're doing something wrong, but there is no point where a "mere belief" transitions into confirmed knowledge. Your probability estimates go up and down based on how much evidence you have, and some evidence is much stronger than others, but there is no set of evidence that "counts for actually knowing things" separate from that which doesn't.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence
This is like claiming that because a coin came up heads twenty times and tails ten times, it is twice as likely to come up heads this time. Absent some other reason to justify the correlation between your friend's past accuracy and the current instance, such beliefs are invalid.
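For what it's worth, the disputed inference can be stated precisely. If one does assume the flips are exchangeable draws from a coin with some fixed unknown bias (exactly the correlation being challenged here), then a uniform prior over that bias gives Laplace's rule of succession, and even then the naive "twice as likely" figure is off:

```python
# Laplace's rule of succession: with a uniform prior on a coin's bias,
# after h heads and t tails the probability the next flip is heads is
# (h + 1) / (h + t + 2). This presumes the very correlation the parent
# comment disputes: that past flips bear on the current one at all.

def rule_of_succession(heads, tails):
    return (heads + 1) / (heads + tails + 2)

print(rule_of_succession(20, 10))  # 21/32 ≈ 0.656, not the naive 2/3
```

So the disagreement is really over whether the exchangeability assumption is warranted, not over the arithmetic that follows from it.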
...If he reliably either watches the game, or finds out the result by calling a friend or checking online, and you have onl
LessWrongers as a group are often accused of talking about rationality without putting it into practice (for an elaborated discussion of this see Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality). This behavior is particularly insidious because it is self-reinforcing: it will attract more armchair rationalists to LessWrong who will in turn reinforce the trend in an affective death spiral until LessWrong is a community of utilitarian apologists akin to the internet communities of anorexics who congratulate each other on their weight loss. It will be a community where instead of discussing practical ways to "overcome bias" (the original intent of the sequences) we discuss arcane decision theories, who gets to be in our CEV, and the most rational birthday presents (sound familiar?).
A recent attempt to counter this trend or at least make us feel better about it was a series of discussions on "leveling up": accomplishing a set of practical well-defined goals to increment your rationalist "level". It's hard to see how these goals fit into a long-term plan to achieve anything besides self-improvement for its own sake. Indeed, the article begins by priming us with a renaissance-man inspired quote and stands in stark contrast to articles emphasizing practical altruism such as "efficient charity"
So what's the solution? I don't know. However I can tell you a few things about the solution, whatever it may be:
Whatever you may decide to do, be sure it follows these principles. If none of your plans align with these guidelines then construct a new one, on the spot, immediately. Just do something: every moment you sit hundreds of thousands are dying and billions are suffering. Under your judgement your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.
I declare Crocker's rules on the writing style of this post.