It is probably simply structural that the LessWrong community tends to be about armchair philosophy, science, and math. If there are people who have read through Less Wrong, absorbed its worldview, and gone out to "just do something", then they probably aren't spending their time bragging about it here. If it looks like no one here is doing any useful work, that could really just be sampling bias.
Even still, I expect that most posters here are more interested in reading, learning, and chatting than in thoroughly changing who they are and what they do. Reading, learning, and chatting is fun! Thorough self-modification is scary.
Thorough and rapid self-modification, on the basis of things you've read on a website rather than things you've seen tested and proven in combination, is downright dangerous. Try things, but try them gradually.
And now, refutation!
So what's the solution?
To, um, what, exactly? I think the question whose solution you're describing is "What ought one do?" On this, you say:
...It wont feel like the right thing to do; your moral intuitions (being designed to operate in a small community of hunter gatherers) are unlikely to suggest to you anything near t
Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.
Depending on your goal (rationality is always dependent on a goal, after all), I might disagree. Rational behaviour is whatever makes you win. If you view your endeavour as a purely theoretical undertaking, I agree, but if you consider reality as a whole you have to take into account how your behaviour comes across. There are many forms of behaviour that would be rational but would make you look like an ass if you don't at least take the time to explain the reasons for your behaviour to those who can affect your everyday life.
If nothing else, the assertion that the right and rational thing will not feel like the right thing to do really needs support. Our moral intuitions may not be perfect, but there are definite parallels between small communities of hunter-gatherers and modern society that make a fair portion of those intuitions applicable presently.
That it was just laid out without even a reference to back it up… come on, here.
I checked TwistingFingers's post history, and I noticed that he is also the author of a post entitled LessWrong gaming community.
Choice quote: "Many of us enjoy expressing ourselves through electronic games."
Quite how this squares with his aspiration to become an optimization process is beyond me. Optimizing for lulz, maybe.
LessWrongers as a group are often accused of talking about rationality without putting it into practice
Evidence? Who accuses them of this? One post (on Less Wrong itself!) is not evidence enough for this claim.
who gets to be in our CEV
Since this barb is directed at me, I should respond. When I come across a superb intellect like Yudkowsky, I first shut up and read the bulk of what he has to say (in Yudkowsky's case, this is helpfully packaged in the sequences). Then I apply my modest intellect to exploring the areas of his thinking that I do not fin...
I declare Crocker's rules on the writing style of this post.
Okay then:
in an affective death spiral
I think a more appropriate buzzword might be evaporative cooling of group beliefs. It's not immediately clear how "armchair rationalists" would be more predisposed to affective death spirals than instrumental rationalists.
altruism such as "efficient charity"
altruism such as Efficient Charity. (Note the period.)
It wont feel
It won't feel
being designed to
having evolved to
This behavior is particularly insidious because it is self-reinforcing
As I understand your post, the behavior you mean is talking about rationality without putting it into practice. But the way it is written sounds to me like you mean accusing LW of talking about rationality without putting it into practice.
...A recent attempt to counter this trend or at least make us feel better about it was a series of discussions on "leveling up": [...] stands in stark contrast to articles emphasizing practical altruism such as "efficient charity"
I think you are doing it wrong.
My reading of TwistingFingers's words was that s/he did mean "please feel free to be harsh about me", not "I wish to be free to be harsh about others". I don't see what other interpretation is possible, given "on the writing style of this post".
Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.
Under your judgement your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.
Considering the problems you bring up, I think Less Wrong may benefit from increased categorization of thought by adding new levels other than Main and Discussion. And considering your advice, I'll try not to be overly humble/nice about it.
How wide of a net d...
Meta-comment: most replies at time of posting seem to be questioning whether a problem exists and quibbling with the style of the post, rather than proposing solutions. This doesn't seem like a good sign.
Proposed solution: If we consider rationality as using the best methods to achieve your goals (whatever they may be) then there are direct ways the Less Wrong material can help.
Firstly, define your goals and be sure that they are truly your goals.
Secondly, when pursuing your goals, retrieve information as needed that helps you make better decisions and hence...
I don't really understand what the problem you're diagnosing is supposed to be or what it is you're asking for.
First silly thing coming to mind: "Use rationality to determine an end goal, and a rational authority to trust. Then condition yourself to follow both blindly and without exception. Then stop caring about whether you're being rational or not."
Yea, it's silly. No, I'm not endorsing it or even saying it's any less silly than it sounds. But it DOES fulfil your criteria.
A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.
Surely if those things go against the Grand Maximally Efficient Thing To Do, they should be shed. But in general, if they are not an obstacle, they make our lives a little more pleasant. Ah, but can a true human rationalist really do without humility, sentimental empathy, or the absurdity heuristic? Are those things something humans can do without, if they want to?
And more: how do you know that the Solution is the correct Solution?
I'm aware of the typos. I am not allowed to edit for 6 more minutes. Please don't respond to this thread as I will delete it once I have edited the OP.
Have you read An Intuitive Explanation of Bayes' Theorem, or any of the other explanations of Bayesian reasoning on this site?
I have read them repeatedly, and explained the concepts to others on multiple occasions.
If you have a buddy who is a football buff who tells you that the Sportington Sports beat the Homeland Highlanders last night, then you should treat this as evidence that the Sportington Sports won,
Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source -- it cannot have a correlation to reality if there is no vehicle through which the information he claims to have could have reached him, other than his own imagination, however accurate that imagination might be.
You do not need the person to relate their assessment of the evidence to revise your belief upward based on their statement, you only need to believe that it is more likely that they would make the claim if it were true than if it were not.
Which requires a reason to believe that to be the case. Which in turn requires that you have a means of corroborating their claim in some manner -- at minimum, that they can relate observations that corroborate their claim (in the case of experts, that is).
If you want to increase the reliability of your probability estimate, you should ask for a justification.
A probability estimate without reliability is no estimate. Revising beliefs based on unreliable information is unsound. Experts' claims that cannot be corroborated are unsound information, and should carry no weight in your beliefs based solely on their source.
If an expert's claims are frequently true, then it can become habitual to trust them without examination. However, trusting individuals rather than examining statements is an example of a necessary but broken heuristic. We find the risk of being wrong in such situations acceptable because the expected utility cost of being wrong in any given situation, as an aggregate, is far less than the expected utility cost of having to actually investigate all such claims.
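The trade-off behind that heuristic can be made concrete with a toy expected-utility comparison. This is a minimal sketch with purely illustrative numbers (the 2% error rate and the unit costs are assumptions, not anything from the thread):

```python
# Toy expected-utility comparison behind the "trust without
# examination" heuristic. All costs and rates are illustrative
# assumptions.

def expected_cost_trusting(p_wrong, cost_of_error, n_claims):
    # Trust every claim; pay the error cost only when the expert is wrong.
    return n_claims * p_wrong * cost_of_error

def expected_cost_verifying(cost_of_check, n_claims):
    # Investigate every claim, regardless of the source's track record.
    return n_claims * cost_of_check

# An expert who is wrong 2% of the time, errors costing 10 units,
# verification costing 1 unit per claim, over 1000 claims:
print(expected_cost_trusting(0.02, 10, 1000))  # 200.0
print(expected_cost_verifying(1, 1000))        # 1000
```

On these assumed numbers, trusting is five times cheaper in aggregate, which is the sense in which the heuristic is "necessary but broken": it wins on expected cost while still being wrong 2% of the time.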
Further, the more such claims fall in line with our own priors -- that is, the less 'extraordinary' the claims appear to us -- the less likely we are to require proper evidence.
The trouble is, this is a failed system. While it might be perfectly rational -- instrumentally -- it is not a means of properly arriving at true beliefs.
I want to take this opportunity to once again note that what I'm describing in all of this is proper argumentation, not proper instrumentality. There is a difference between the two; and Eliezer's many works are, as a whole, targeted at instrumental rationality -- as is this site itself, in general. Instrumental rationality does not always concern itself with what is true as opposed to what is practically believable. It finds the above-described risk of variance in belief from truth an acceptable risk, when asserting beliefs.
This is an area where "Bayesian rationality" is insufficient -- it fails to reliably distinguish between "what I believe" and "what I can confirm is true". It does this for a number of reasons, one being a foundational difference between what a Bayesian asserts a Bayesian network is measuring when it discusses probabilities and what a frequentist asserts is being measured when frequentists discuss probabilities.
I do not fall totally in line with "Bayesian rationality" in this, and various other, topics, for exactly this reason.
Not until such time as you have a reason to believe that he has a justification for his belief beyond mere opinion. Otherwise, it is a mere assertion regardless of the source -- it cannot have a correlation to reality if there is no vehicle through which the information he claims to have could have reached him, other than his own imagination, however accurate that imagination might be.
If you know that your friend more often makes statements such as this when they are true than when they are false, then you know that his claim is relevant evidence, so you should adj...
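The likelihood-ratio condition being argued over here ("more likely that they would make the claim if it were true than if it were not") is just Bayes' theorem. A minimal sketch, with the prior and the friend's hypothetical reporting rates assumed for illustration:

```python
# Bayes' theorem applied to a friend's report that the Sportington
# Sports won. The prior and the reporting rates are illustrative
# assumptions, not real statistics.

def posterior(prior, p_claim_given_true, p_claim_given_false):
    """P(won | claim), via Bayes' theorem."""
    numerator = p_claim_given_true * prior
    denominator = numerator + p_claim_given_false * (1 - prior)
    return numerator / denominator

# Prior of 50%; a friend who makes such claims 90% of the time when
# they are true and only 10% of the time when they are false:
print(posterior(0.5, 0.9, 0.1))  # 0.9

# If the friend is equally likely to make the claim either way, the
# report carries no evidence and the prior is unchanged:
print(posterior(0.5, 0.3, 0.3))  # 0.5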
LessWrongers as a group are often accused of talking about rationality without putting it into practice (for an elaborated discussion of this see Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality). This behavior is particularly insidious because it is self-reinforcing: it will attract more armchair rationalists to LessWrong who will in turn reinforce the trend in an affective death spiral until LessWrong is a community of utilitarian apologists akin to the internet communities of anorexics who congratulate each other on their weight loss. It will be a community where instead of discussing practical ways to "overcome bias" (the original intent of the sequences) we discuss arcane decision theories, who gets to be in our CEV, and the most rational birthday presents (sound familiar?).
A recent attempt to counter this trend or at least make us feel better about it was a series of discussions on "leveling up": accomplishing a set of practical well-defined goals to increment your rationalist "level". It's hard to see how these goals fit into a long-term plan to achieve anything besides self-improvement for its own sake. Indeed, the article begins by priming us with a renaissance-man inspired quote and stands in stark contrast to articles emphasizing practical altruism such as "efficient charity"
So what's the solution? I don't know. However I can tell you a few things about the solution, whatever it may be:
Whatever you may decide to do, be sure it follows these principles. If none of your plans align with these guidelines then construct a new one, on the spot, immediately. Just do something: every moment you sit hundreds of thousands are dying and billions are suffering. Under your judgement your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.
I declare Crocker's rules on the writing style of this post.