Practicing what you preach
LessWrongers as a group are often accused of talking about rationality without putting it into practice (for an elaborated discussion of this see Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality). This behavior is particularly insidious because it is self-reinforcing: it will attract more armchair rationalists to LessWrong who will in turn reinforce the trend in an affective death spiral until LessWrong is a community of utilitarian apologists akin to the internet communities of anorexics who congratulate each other on their weight loss. It will be a community where instead of discussing practical ways to "overcome bias" (the original intent of the sequences) we discuss arcane decision theories, who gets to be in our CEV, and the most rational birthday presents (sound familiar?).
A recent attempt to counter this trend, or at least make us feel better about it, was a series of discussions on "leveling up": accomplishing a set of practical, well-defined goals to increment your rationalist "level". It's hard to see how these goals fit into a long-term plan to achieve anything besides self-improvement for its own sake. Indeed, the article begins by priming us with a renaissance-man inspired quote, and it stands in stark contrast to articles emphasizing practical altruism such as "efficient charity".
So what's the solution? I don't know. However I can tell you a few things about the solution, whatever it may be:
- It won't feel like the right thing to do; your moral intuitions (being designed to operate in a small community of hunter-gatherers) are unlikely to suggest to you anything near the optimal task.
- It will be something you can start working on right now, immediately.
- It will disregard arbitrary self-limitations like abstaining from politics or keeping yourself aligned with a community of family and friends.
- Speaking about it would undermine your reputation through signaling. A true rationalist has no need for humility, sentimental empathy, or the absurdity heuristic.
Whatever you may decide to do, be sure it follows these principles. If none of your plans align with these guidelines then construct a new one, on the spot, immediately. Just do something: every moment you sit hundreds of thousands are dying and billions are suffering. Under your judgement your plan can self-modify in the future to overcome its flaws. Become an optimization process; shut up and calculate.
I declare Crocker's rules on the writing style of this post.
Comments (294)
It is probably simply structural that the LessWrong community tends to be about armchair philosophy, science, and math. If there are people who have read through Less Wrong, absorbed its worldview, and gone out to "just do something", then they probably aren't spending their time bragging about it here. If it looks like no one here is doing any useful work, that could really just be sampling bias.
Even still, I expect that most posters here are more interested to read, learn, and chat than to thoroughly change who they are and what they do. Reading, learning, and chatting is fun! Thorough self-modification is scary.
Thorough and rapid self-modification, on the basis of things you've read on a website rather than things you've seen tested and proven in combination, is downright dangerous. Try things, but try them gradually.
And now, refutation!
To, um, what, exactly? I think the question whose solution you're describing is "What ought one do?" Of these, you say:
That depends largely on your moral intuitions. I honestly think of all humans as people. I am always taken aback a little when I see evidence that lots of other folks don't. You'd think I'd stop being surprised, but it often catches me when I'm not expecting it. I'd suggest that my intuitions about my morals when I'm planning things are actually pretty good.
That said, the salient intuitions in an emotionally-charged situation certainly are bad at planning and optimization. And so, if you imagine yourself executing your plan, I would honestly expect it to feel oddly amoral. It won't feel wrong, necessarily, but it might not feel relevant to morality at all.
This is ... sort of true, depending on what you mean. You might need to learn more, to be able to form a more efficient or more coherent plan. You might need to sleep right now. But, yes, you can prepare to prepare to prepare to change the world right away.
Staying aligned with a community of family and friends is not an arbitrary limitation. Humans are social beings. I myself am strongly introverted, but I also know that my overall mood is affected strongly by my emotional security in my social status. I can reflect on this fact, and I can mitigate its negative consequences, but it would be madness to just ignore it. In my case - and, I presume, in the case of anyone else who worries about being aligned with their family and friends - it's terrifying to imagine undermining many of those relationships.
You need people that you can trust for deep, personal conversations; and you need people who would support you if your life went suddenly wrong. You may not need these things as insurance, you may not need to use friends and family in this way, but you certainly need them for your own psychological well-being. Being depressed makes one significantly less effective at achieving one's goals, and we monkeys are depressed without close ties to other monkeys.
On the other hand, harmless-seeming deviations probably won't undermine those relationships; they're far less likely to ruin relationships than they seem. Rather, they make you a more interesting person to talk to. Still, it is a terrible idea to carelessly antagonize your closest people.
No! If we're defining a "true rationalist" as some mythical entity, then probably so. If we want to make "true rationalists" out of humans, no! If you completely disregard common social graces like the outward appearance of humility, you will have real trouble coordinating world-changing efforts. If you disregard empathy for people you're talking to, you will seem rather more like a monster than a trustworthy leader. And if you ever think you're unaffected by the absurdity heuristic, you're almost certainly wrong.
People are not perfect agents, optimizing their goals. People are made out of meat. We can change what we do, reflect on what we think, and learn better how to use the brains we've got. But the vast majority of what goes on in your head is not, not, not under your control.
Which brings me to the really horrifying undercurrent of your post, which is why I stayed up an extra hour to write this comment. I mean, you can sit down and make plans for what you'll learn, what you'll do, and how you'll save billions of lives, and that's pretty awesome. I heartily approve! You can even figure out what you need to learn to decide the best courses of action, set plans to learn that, and get started immediately. Great!
But if you do all this without considering seemingly unimportant details, like having fun with friends and occasionally relaxing, then you will fail. Not only will you fail, but you will fail spectacularly. You will overstress yourself, burn out, and probably ruin your motivation to change the world. Don't go be a "rationalist" martyr, it won't work very well.
So, if you're going to decompartmentalize your global aspirations and your local life, then keep in mind that only you are likely to look out for your own well-being. That well-being has a strong effect on how effective you can be. So much so that attempting more than about 4 hours per day of real, closely-focused mental effort will probably give you not just diminishing returns, but worse efficiency per day. That said, almost nobody puts in 4 hours a day of intense focus.
So, yes, billions are miserable, people die needlessly, and the world is mad. I am still going out tomorrow night and playing board games with friends, and I do not feel guilty about this.
If nothing else, the assertion that the right and rational thing will not feel like the right thing to do really needs support. Our moral intuitions may not be perfect, but there are definite parallels between small communities of hunter-gatherers and modern society that make a fair portion of those intuitions applicable presently.
That it was just laid out without even a reference to back it up… come on, here.
Depending on your goal (rationality is always dependent on a goal, after all), I might disagree. Rational behaviour is whatever makes you win. If you view your endeavour as a purely theoretical undertaking, I agree, but if you consider reality as a whole you have to take into account how your behaviour comes across. There are many forms of behaviour that would be rational but would make you look like an ass if you don't at least take the time to explain the reasons for your behaviour to those who can affect your everyday life.
Rational behavior is whatever conforms to the principles of reason. Instrumentally rational behavior is whatever is the most rational behavior that achieves the expected agenda. You could call that latter form "winning" but that's an error, in my opinion. It seems related to the notion that since "winning" makes you "feel good", ultimately all agendas are hedonistic. It screams "fake utility function" to me. Sometimes there isn't a path to optimization; only to mitigation of anti-utility.
If some particular ritual of cognition—even one that you have long cherished as "rational"—systematically gives poorer results relative to some alternative, it is not rational to cling to it. The rational algorithm is to do what works, to get the actual answer—in short, to win, whatever the method, whatever the means.
Considering the problems you bring up, I think Less Wrong may benefit from increased categorization of thought by adding new levels other than Main and Discussion. And considering your advice, I'll try not to be overly humble/nice about it.
How wide of a net does Discussion have to cover?
That's a very inclusive category. But there is a large difference between "not yet ready" and "not suitable" for normal top level posts. Does it feel like that belongs in the same category and we shouldn't break it down more?
Less Wrong is supposed to be about refining the art of human rationality. Imagine if I said the following flawed argument about a steel refinery.
"Well, at the steel refinery we have refined steel, and then there's everything else that's not yet ready or not suitable for steel. If we apply the Bessemer process to everything else that's not yet ready or not suitable for steel, we get refined steel."
To say this is not the most effective way to think about refining is a huge understatement. This doesn't distinguish between carbon, iron, roofs, toilet paper, concrete building blocks, safety gear, or anything else that might be a useful component of a steel mill.
If we want to build a better mill to refine rationality, maybe we should have better topic labels than Refined Rationality and Everything Else.
Evidence? Who accuses them of this? One post (on Less Wrong itself!) is not evidence enough for this claim.
Since this barb is directed at me, I should respond. When I come across a superb intellect like Yudkowsky, I first shut up and read the bulk of what he has to say (in Yudkowsky's case, this is helpfully packaged in the sequences). Then I apply my modest intellect to exploring the areas of his thinking that I do not find convincing.
Note that the essay is not about "who gets to be in our CEV"; it is about whether the CEV should include all of humanity, or not. The ability to distinguish between these questions should be within the capability of a rationalist - although I expect your distortion is an intentional attempt to trivialise the subject for rhetorical effect.
Otherwise, what you have written boils down to this: “we should shut up and multiply. You people aren't shutting up and multiplying.”
Unfortunately, we are not consistent expected utility maximisers, so "shut up and multiply" can never be more than an ideal for unmodified and unextrapolated human beings. It is actually impossible to implement "shut up and multiply" literally, if you aren't accurately described by a utility function.
Furthermore our introspective limitations, knowledge limitations and computational limitations give us no particular way of resolving conflicts between our values, even if we were expected utility maximisers. For example the value of enjoying an argument for its own sake and the value of arguing things in a strictly optimal attempt to minimise existential risk are somewhat opposed to one another. Yet even if I did have a personal utility function such that there existed an optimal way for my unmodified and unextrapolated self to resolve this conflict and maximise utility, I wouldn't know what it was!
It is sometimes fair to recommend that someone shut up and multiply, but I would only do so (I hope!) when the stakes are extreme enough that they outweigh this inconsistency. I might also do so in a specific discussion in which someone was conflicted about whether they should do something, because SUAM seems like the best possible answer if someone is going to ask what they "should" do.
But since neither of these conditions applies, there is really no basis for you saying that I or anyone else should not have arguments and discussions for enjoyment's sake alone, unless you have good reason to think that the consequences are really extreme (for example, I criticised Eliezer for not shutting up and multiplying in his proposals for CEV. Those are extreme consequences).
That said, setting aside the fact that my ability to contribute intellectually is modest, can you really see no benefit in discussing important concepts such as CEV? Why is discussion of overcoming biases worthwhile, but not discussion of important strategies for the future of humanity?
Finally, although it scarcely seems necessary to say this, you cannot expect to be taken seriously with this kind of portentousness ("billions are suffering" - "become an optimization process") unless you have some serious achievements of your own to point to. If you do in fact have something to boast about, please go ahead and tell us about it.
For both subjects, if discussing them doesn't make someone better able to do something worth doing, then discussing it is not worthwhile. If it does make someone better able to do something worth doing, discussing it might be worthwhile.
It seems plausible to me that my reading, writing, and thinking about cognitive biases can noticeably help improve my understanding of, and ability to recognize, such biases. It seems plausible to me that such improvement can help me better achieve my goals. Ditto for other people. So I conclude that such discussion might be worthwhile.
It doesn't seem plausible to me that my reading, writing and thinking about CEV can noticeably help improve anyone's ability to do anything.
Okay then:
I think a more appropriate buzzword might be evaporative cooling of group beliefs. It's not immediately clear how "armchair rationalists" would be more predisposed to affective death spirals than instrumental rationalists.
altruism such as Efficient Charity. (Note the period.)
It won't feel
having evolved to
If you take it as axiomatic that instrumental rationalists are putting labor and effort into the material manifestations of instrumental rationality whereas 'armchair' rationalists merely discuss these ideas, then it becomes a necessity that the former be 'more rational' than the latter. And moreover, relegating the topic to a point of discourse without instantiation can be a form of affective death spiral.
Not that I necessarily agree with anything else in this post or thread -- just commenting on that point.
As I understand your post, the behavior you mean is talking about rationality without putting it into practice. But the way it is written sounds to me like you mean accusing LW of talking about rationality without putting it into practice.
Instead of "leveling up" you could have taken "efficient charity" as an example. I think you like that article better, so it seems more honest to me to take it as an example of a more practical post. Mentioning both articles like that makes it too obvious that you singled out "leveling up" for rhetorical reasons.
I'm very skeptical about those following points. You did not give convincing arguments for them.
That seems like bad advice. I think your guidelines are advocating reversed stupidity.
I think you are doing it wrong. Declaring Crocker's rule allows others to be harsh to you, it doesn't allow you to be harsh to others.
My reading of TwistingFingers's words was that s/he did mean "please feel free to be harsh about me", not "I wish to be free to be harsh about others". I don't see what other interpretation is possible, given "on the writing style of this post".
I think your interpretation is correct, and that's how I interpreted it, but I can understand Bobertron's interpretation as well. He thought TwistingFingers was declaring Crocker's rules as a sort of apology for the accusatory "writing style of [the] post", which would as Bobertron suggests be using the declaration in the wrong direction.
I only say this because you wrote:
I checked TwistingFingers's post history, and I noticed that he is also the author of a post entitled LessWrong gaming community.
Choice quote: "Many of us enjoy expressing ourselves through electronic games."
Quite how this squares with his aspiration to become an optimization process is beyond me. Optimizing for lulz, maybe.
This is DH1.
(I also see the OP as more signal than noise. But the norm for rebuttal here should usually be DH4 or higher.)
Not everything need be a rebuttal.
Incidentally, people constrained to DH4 or higher are gameable by common social practice.
Certainly, not every reply needs to be a rebuttal. But it usually is, here.
On the other hand, if you're going to rebut, and you think the other party is trying to argue honestly, your lower bound really should be around DH4 (counterargument) in a setting with many speakers. In a private setting, simply disagreeing (DH3) can be useful to just explain internal state. "I disagree with X, but I'm not sure why. Hm..." But it's logically rude to state simple disagreement as if it were an actual argument. :)
It wasn't intended as a rebuttal; I have already provided that in another lengthy comment.
I was merely identifying TwistingFingers as a blatant troll. Just for fun:
Juxtapose that with "Just do something: every moment you sit hundreds of thousands are dying and billions are suffering" written less than one month later.
Applause light / more claims without evidence.
An utterly ludicrous implication.
This sounds like Chomskybot applied to Lesswrong jargon.
Can you really not see that this guy is taking the Mickey?
Another plausible interpretation of TF's flip-flopping is that a month ago, xe was here because xe thought it was a fun community, and then xe got "converted" into an earnestly zealous and quite naive Singularitarian. Much of TF's vitriol, then, would implicitly target xer lackadaisical past self in order to (consciously or unconsciously) distance xer current self from the pre-conversion self.
Mind you, I'm not checking TF's history myself, so this might be a bad guess. I'm just pointing out a pretty plausible alternate hypothesis.
I realize that this a trivial issue, but if you care about inferential distance, I thought you should know that I had to look this expression up, and I suspect a lot of other non-UK readers would as well.
For those who don't know, Urban Dictionary says that "taking the Mickey" means "joking, or doing something without intent".
(I rather like this system of using DH shorthands for diagnosing the problems with people's comments. Possibly we can develop similar systems for other logical issues.)
Meta-comment: most replies at time of posting seem to be questioning whether a problem exists and quibbling with the style of the post, rather than proposing solutions. This doesn't seem like a good sign.
Proposed solution: If we consider rationality as using the best methods to achieve your goals (whatever they may be) then there are direct ways the Less Wrong material can help.
Firstly, define your goals and be sure that they are truly your goals.
Secondly when pursuing your goals retrieve information as needed that helps you make better decisions and hence reach your goals.
Example
I may wish to be leader of my country. I will then use the luminosity and introspection resources to determine whether that is my primary goal, or whether I am motivated by some other factor (desire for power/respect/fame/doing good). Then when making decisions in pursuit of that goal I will use the methods outlined to do so more accurately (determining whether my choice is influenced by confirmation bias, using Bayesian statistics to calculate the probability of a certain task succeeding and plan accordingly, using knowledge to better deal with other people, etc.). Importantly, I am accessing this knowledge when it contributes to my larger goals, rather than pursuing it for its own sake.
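The Bayesian step above could be sketched concretely. This is a minimal illustration, not anything from the original post: it assumes a simple beta-binomial model of task success, and the function name and all numbers are hypothetical.

```python
# Hypothetical sketch: estimating the probability a task succeeds by
# updating a Beta prior with observed outcomes (beta-binomial model).

def update_success_estimate(successes, failures, prior_a=1.0, prior_b=1.0):
    """Posterior mean probability of success under a Beta(prior_a, prior_b) prior."""
    a = prior_a + successes  # prior pseudo-successes plus observed successes
    b = prior_b + failures   # prior pseudo-failures plus observed failures
    return a / (a + b)

# Starting from a uniform prior, with 3 past successes and 1 failure:
print(update_success_estimate(3, 1))
```

With a uniform prior and a 3-to-1 track record, the estimate lands at 4/6, about 0.67 — one could then plan around that number rather than a gut feeling.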
How does that sound as a method?
Edit: If people have objections please voice them rather than silently downvoting.
Objection: it is highly irrational to propose solutions to non-existent problems. Insofar as someone considers the OP to have failed to raise a genuine problem, there is every reason for them not to start proposing solutions.
Furthermore, as another commenter has pointed out, it is an act of generosity to interpret him as having coherently stated any particular problem at all.
My interpretation of the original post was that they were identifying the problem that LW posters are 'talking about rationality without putting it into practice.' I then attempted to give an example of how one could instrumentally use the rationality techniques discussed on the site to achieve one's goals.
Whether or not it is the case that LW is failing to apply rationality techniques enough is an empirical question that I agree the OP hasn't proven. However, whether or not that is the case, demonstrations of how instrumental rationality might work still seem a useful exercise.
My top comment was semi-flippantly pointing out that commenters are doing what the OP accused them of by discussing the post rather than what seems the more useful task of proposing solutions.
Possibly I am interpreting the OP generously in the problem they are presenting, but I don't understand why this is a bad thing. When meaning is uncertain surely it is best to assume the most creditable interpretation in order to move discussion forward? (And contributes to general norms of politeness.)
I don't really understand what the problem you're diagnosing is supposed to be or what it is you're asking for.
First silly thing coming to mind: "Use rationality to determine an end goal, and a rational authority to trust. Then condition yourself to follow both blindly and without exception. Then stop caring about whether you're being rational or not."
Yea, it's silly. No, I'm not endorsing it or even saying it's any less silly than it sounds. But it DOES fulfil your criteria.
Surely if those things go against the Grand Maximally Efficient Thing To Do, they should be shed away. But in general, if they are not an obstacle, they make our life a little more pleasant. Ah, but a true human rationalist can really do without humility, sentimental empathy or the absurdity heuristic? Are those things something humans can do without, if they want to?
And more: how do you know that the Solution is the correct Solution?