I have a problem: I'm not sure what this community is about.

To illustrate, recently I've been experimenting with a number of tricks to overcome my akrasia. This morning, a succession of thoughts struck me:

  1. The readers of Less Wrong have been interested in the subject of akrasia; maybe I should make a top-level post about my experiences once I see what works and what doesn't.
  2. But wait, that would be straying into the territory of traditional self-help, and I'm sure there are already plenty of blogs and communities for that. It isn't about rationality anymore.
  3. But then, we have already discussed akrasia several times; isn't this also on-topic?
  4. (Even if this was topical, wouldn't a simple recount of "what worked for me" be too Kaj-optimized to work for very many others?)

Part of the problem seems to stem from the fact that we have a two-fold definition of rationality:

  1. Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.
  2. Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

If this community were only about epistemic rationality, there would be no problem. Akrasia isn't related to epistemic rationality, and neither are most self-help tricks. Case closed.

However, by including instrumental rationality, we have expanded the sphere of potential topics to cover practically anything. Productivity tips, seduction techniques, the best ways to groom your physical appearance, the most effective ways to relax (and, by extension, lists of the best movies / books / video games of all time), how to most effectively combine different rebate coupons and where to get them... all of these can be useful in achieving your values.

Expanding our focus isn't necessarily a bad thing, by itself. It will allow us to attract a wider audience, and some of the people who are drawn here might afterwards also become interested in e-rationality. And many of us would probably find the new kinds of discussions useful in our personal lives. The problem, of course, is that epistemic rationality is a relatively narrow subset of instrumental rationality - if we allow all instrumental rationality topics, we'll be drowned in them, and might soon lose our original focus entirely.

There are several different approaches as far as I can see (as well as others I can't see):

  • Treat discussions of both as being fully on the same footing - if i-rationality discussions overwhelm e-rationality ones, that's just how it goes.
  • Concentrate purely on e-rationality, and ban i-rationality discussions entirely.
  • Allow i-rationality discussions, but don't promote top-level posts on the topic.
  • Allow i-rationality discussions, but require stricter criteria for promoting top-level posts on the topic.
  • Allow i-rationality discussions, but only in the comments of dedicated monthly posts, resembling the "open topic" and "rationality quotes" series we have now.
  • Allow i-rationality discussions, but try to somehow define the term so that silly things like listing the best video games of all time get excluded.
  • Screw trying to make an official policy on this, let's just see what top-level posts people make and what gets upvoted.
  • Some combination of the above.

I honestly don't know which approach would be the best. Do any of you?


Upvoted for not being about gender.

If you ask me, the term "instrumental rationality" has been subject to inflation. It's not supposed to mean better achieving your goals, it's supposed to mean better achieving your goals by improving your decision algorithm itself, as opposed to by improving the knowledge, intelligence, skills, possessions, and other inputs that your decision algorithm works from. Where to draw the line is a matter of judgment but not therefore meaningless.

Agreed. Systematic instrumental rationality is what we're interested in. Better general methods. Akrasia and the problem of internal conflicts fit this template; making better coffee does not, however useful you may find it.

This is indeed a deep rabbit hole.

Could anyone here recommend areas where one could attempt to discuss some of society's more pressing issues using the very general methods described here? Politics and making better coffee?

While I agree such posts would not fit here, such discussions would serve as practice. If the community were similar to this one, hard evidence and constructive criticism would ideally be the norm.

Assuming promoted articles are subject to your veto, I don't see much harm in original posts of exceptional quality, even if they are either overly meta-LW or overly domain-specific. Of course, one must draw the line at pictures of kittens.

Are we really bad enough at voting that we can't be trusted to downvote pictures of kittens?

I'm not certain that anyone can very reliably be trusted to downvote e.g. the failcat sequence.

Come on, if that doesn't demonstrate a relevant failure of rationality I don't know what does.

ETA: (insert standard convention for tagging this as an attempt at humor)

I strongly agree, and I'd like to add that I definitely see a place for this sort of instrumental rationality here.

[djcb]

I'd like to add a third kind of rationality that seems popular in these circles: "recreational rationality". This would include things like the Prisoner's Dilemma, Newcomb's Paradox, the Monty Hall Problem, the "rationality quotes" series, and so forth.

Even though they are seldom useful in real-world decision making, they are simply interesting to many people who visit LW, I suspect -- myself included.

[anonymous]

the Prisoner's Dilemma

Upon reflection, this is exactly what sub-reddits are for. It should be trivial to turn on this functionality in the LW code, if that's a path we want to go down.

I don't think LW has a large enough volume of posts to start subdividing it yet.

[matt]

Technical agreement from one of the devs. If you can get more upvotes on your comment or Eliezer's attention, we can turn this on quickly.

How does that work, exactly? I can't find the info on it looking at Reddit.com.

Across the top is a bar that will take you to some of the more popular subreddits: politics, pics, funny, etc. On Reddit anyone can create a subreddit; here we would probably just use some preset categories. The default front page draws from some set of the subreddits (here it would probably be all of them), but users can go to the subreddit pages to see only posts in that category. Users on Reddit can subscribe and unsubscribe to subreddits to determine which subreddits are eligible to show on their individual front pages. A logged-in user might go to Reddit and see only posts from the science, technology, and programming subreddits.

Understood, thanks!

How would adding this to Less Wrong interact with RSS feeds?

The grandparent explains how subreddits work. Meanwhile over here one of the software developers working on Less Wrong says, "If you can get more upvotes on your comment or Eliezer's attention, we can turn [subreddits] on quickly".

In the parent of this comment, Alicorn asks, "How would adding this to Less Wrong interact with RSS feeds?"

Until one of the software developers gives a definitive answer, allow me to give a speculative answer. It would probably create more RSS feeds. Pro: more choices of what feeds to subscribe to. Con: subscribing and unsubscribing to feeds has a cost, namely the time and attention of the reader.

We seem to have a population here that already cares, and deeply, about rationality. I do trust them to upvote whatever has a lot to do with rationality and downvote whatever has too little to do with it. In fact, I'd go so far as to submit that we're doing something wrong if there aren't enough off-topic-ish, net-negative-karma posts; it would show that posters aren't taking quite enough risks as regards widening rationality's domain. I'm weary of the PUA and overly self-help-y talk, sure, but seeing nothing like it around here would be the dead canary in the coal mine.

Most self-help type stuff sucks. I consider self-help stuff at its best more useful than epistemic rationality stuff at its best, and I am optimistic that Less Wrong can produce self-help stuff at its best.

Just post it and see if it gets upvoted. We've got voting for a reason.

Voting is not moderation. It signals that an article is of interest, not that it is on-topic. AFAIK, you can't vote an article below 0, which means you can't distinguish between lack of interest, controversy, and inappropriateness. With that in mind, accurately assessing whether an article is suitable to post is crucial to maintaining a healthy baseline of relevance, so that we can at least rule out inappropriateness as a factor.

Voting is not moderation. It signals that an article is of interest, not that it is on-topic.

I don't see the problem with a community originally formed to discuss one issue discovering that all their members are also interested in some other issue and beginning to intensively discuss it. If an article is of interest, why isn't that sufficient?

If we want detailed information about how our article is faring we can read the comments.

Subjects that are totally inappropriate for Less Wrong can still garner a vast number of potentially interesting comments (I've seen comments on PUA that really don't belong here, and yet sparked massive threads). There may even be a complete lack of criticism about how misplaced the subject may be, if it only attracts a certain audience. However, just because it generates interest here doesn't mean it belongs here.

By "does not belong" you presumably mean "does not fit the general pattern of content that came before it as I perceived it". Why do you think it is valuable for sites to maintain the same general pattern of content?

It's not that hard to hide threads. There is a little minus button next to every author's byline. The amount of damage an off-topic thread can do to those who are not interested is quite minimal, but the amount it could help those who are interested is practically unbounded.

It's easy to think that any one thread is easy to hide, but online forums tend to devolve. We are a species that watches breakups on TV; if you don't maintain SOME pattern of content, then common content will bleed in and eventually dominate.

I would prefer a variation of bullet point number 3:

  • Allow i-rationality discussion, but promote only when it is an application of a related, tightly coupled e-rationality concept.

I am here for e-rationality discussion. It's "cool" to know that deodorant is most effective when applied at night, before I go to bed, but that doesn't do anything to fundamentally change the way I think.

Upon some thought, I'd like the site to concentrate on e-rationality and throw out i-rationality. PUA, marketing, akrasia etc. have plenty of other sites dedicated to them. We should strive to make our own unique contribution in the general direction set by Eliezer and Robin on OB. I feel such a decision would make LW more interesting on average.

Most akrasia sites suck.

I thought of making a post about agreeing and disagreeing (and overall discussion), but then I encountered the same kind of problem: I wasn't sure if it was on-topic, and when thinking further, I had trouble pinpointing the exact topic I should be posting on.

I'm still going to make that post, though. Eventually. I trust that you'll protect your garden, in case one of the many possible things goes wrong.

My two cents on the actual topic here (disclaimer: I'm a newb): I think the difference between i- and e-rationality is kind of like the difference between math and physics. Even if you knew everything about the world as it is (a complete understanding of the physics), you'd want to have efficient math to make use of that information. And even if you had all the mathematical understanding there ever could be, but lacked any information about the world, the math would be useless. Winning is everything, and for that you need both instrumental and epistemic rationality.

[anonymous]

The concept "achieving your values" doesn't deserve the term "instrumental rationality". If it does, then, as you point out, works about instrumental rationality are merely works about how to do stuff. You're giving a fancy new name to an old concept.

ETA: Not that that's always exactly what we mean when we say "instrumental rationality", of course ...

How about the given definition of "epistemic rationality"? This is also really general: it's how to know stuff. Granted, that's precisely what being less wrong means, but we're not interested in general education. Granted again, the top-rated post of all time, "Generalizing From One Example", is definitely epistemic rationality but not obviously any other type of rationality.

So, here I propose some other definitions of "rationality":

Aumann rationality: a person is Aumann rational if they are rational (don't interpret this circularly!), they believe other people are Aumann rational, and other people believe they are Aumann rational. Perfect Aumann rationality causes people to never disagree with each other, but it's a spectrum. Eliezer Yudkowsky is relatively Aumann rational; people on Less Wrong are expected to be quite Aumann rational with each other; people in political debates have very little Aumann rationality.

Rational neutrality: though people who are rational-neutral discard evidence regarding statements, as any intelligent being must, their decision whether or not to discard a piece of evidence is not based on its direction or magnitude--if they ignore an observation, they do so without first seeing what it is.

Krasia: quite unrelated to any other type of rationality, people with high krasia are good at going from believing that an action would result in high expected utility to actually taking that action.

people on Less Wrong are expected to be quite Aumann rational with each other

I expect that anyone who expected this has already been quite disappointed. ;-)

people who are rational-neutral discard evidence regarding statements, as any intelligent being must

I feel like I should be able to find this out on my own, but I've had no success so far. Does "evidence regarding statements" refer to statements that are evidence-regarding, or evidence that regards statements? Either way I can't figure out an obvious reason to reject such things. Is it the idea that evidence shouldn't be discussed on any aspect beyond validity? I feel I'm missing something, many thanks to anyone who can throw me a link or other resource.

[anonymous]

(If this post is too long, read only the last paragraph.)

Evidence that regards statements. I guess the "regarding statements" bit was redundant. Anyway, let me try to give some examples.

First, let me postulate a guy named Delta. Delta is an extremely rational robot who, given the evidence, always comes up with the best possible conclusion.

Andy the Apathetic is presented with a court case. Before he ever looks at the case, he decides that the probability the defendant is guilty is 50%. In fact, he never looks at the case; he tosses it aside and gives that 50% as his final judgement. Andy is rational-neutral, as he discarded evidence regardless of its direction; his probability is useless, but if I told Delta how Andy works and Andy's final judgement, Delta would agree with it.

Barney the Biased is presented with the same court case. Before he ever looks at the case, he decides that the probability that the defendant is guilty is 50%. Looking through the evidence, he decides to discard everything suggesting that the defendant is innocent; he concludes that the defendant has a 99.99% chance of being guilty and gives that as his final judgement. Barney is not rational-neutral, as he discarded evidence with regard to its direction; his probability is almost useless (but not as useless as Andy's), and if I told Delta how Barney works and Barney's final judgement, Delta might give a probability of only 45%.

Finally, Charlie the Careful is presented with the same court case. Before he ever looks at the case, he decides that the probability that the defendant is guilty is 50%. Looking through the evidence, he takes absolutely everything into account, running the numbers and keeping Bayes' law between his eyes at all times; eventually, after running a complete analysis, he decides that the probability that the defendant is guilty is 23.14159265%. Charlie is rational-neutral, as he discarded evidence regardless of its direction (in fact, he discarded no evidence); if I told Delta how Charlie works and Charlie's final judgement, Delta would agree with it.
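To make the difference concrete, here is a rough sketch in Python (the numbers and the filters are made up for illustration, and I'm assuming each piece of evidence arrives as a likelihood ratio P(e | guilty) / P(e | innocent)):

```python
import math

def posterior(prior, likelihood_ratios, keep):
    """Start from prior P(guilty) and update, in log-odds form,
    on every likelihood ratio that the filter `keep` accepts."""
    log_odds = math.log(prior / (1 - prior))
    for lr in likelihood_ratios:
        if keep(lr):
            log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Hypothetical evidence: ratios > 1 point toward guilt, < 1 toward innocence.
evidence = [3.0, 0.5, 2.0, 0.25, 0.1]

andy    = posterior(0.5, evidence, keep=lambda lr: False)     # discards everything, blindly
barney  = posterior(0.5, evidence, keep=lambda lr: lr > 1.0)  # discards only pro-innocence evidence
charlie = posterior(0.5, evidence, keep=lambda lr: True)      # discards nothing

print(andy, barney, charlie)  # 0.5, ~0.86, ~0.07
```

Andy's and Charlie's filters ignore the direction of the evidence; Barney's looks at it, which is exactly what the definition below is meant to rule out.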

So, here's another definition of rational neutrality I came up with by writing this: you are rational-neutral if, given only your source code, it's impossible to come up with a function that takes one of your probability estimates and returns a better probability estimate.

It might be useful to revise this concept to account for computational resources (see AI work on 'limited rationality', e.g. Russell and Wefald's "Do the Right Thing" book).

[anonymous]

I'll try my best to get my hands on a copy of that book.

[anonymous]

Upon thinking about that second definition of rational neutrality, I find myself thinking that that can't be right. It's identical to calibration. And even a rational-neutral agent that's been "repaired" by applying the best possible probability estimate adjustment function will still return the same ordinal probabilities: Barney the Biased, even after adjustment, will return higher probabilities for statements he is biased toward than statements he is biased against.

I would have said this:

So, here's another definition of rational neutrality I came up with by writing this: you are rational-neutral if, given only your source code and your probability estimates, it's impossible for someone to come up with better probability estimates.

...but that definition doesn't rule out the possibility that an agent would look at your probability estimates, figure out what the problem is, and come up with a better solution on its own. In the extreme case, no agent would be considered rational-neutral unless it had a full knowledge of all mathematical results. That's not what I want; therefore, I stick by my original definition.

It took two read-throughs to get this, but I'm fairly sure that's down to the concept and not your handling of it. Thanks for the explanation!

I'm going to start using "krasia". I hadn't encountered it before but apparently it's had some currency in epistemology.

I'm interested in both flavors of discussion, and especially in the relationship between them. It's hard to understand WHY consistent and world-matching epistemic beliefs don't AUTOMATICALLY cause winning instrumental behavior, and I think the difference is vital in figuring out the limits (and how to extend them) of our puny brains.

I'm also leery of too much meta control, especially when it seems to only hide rather than solve the underlying conflicts that are causing discomfort among some participants.

For these reasons, I prefer to avoid official policy and see what gets up-voted. There are interesting and thought-provoking posts on a wider variety of topics than I can predict in advance.

It's hard to understand WHY consistent and world-matching epistemic beliefs don't AUTOMATICALLY cause winning instrumental behavior

Because beliefs are individually like little machines that operate independently. Just because you have a belief that's true doesn't mean you don't also have another ten thousand false beliefs you're not even aware of having.

[Jonii]

It's hard to understand WHY consistent and world-matching epistemic beliefs don't AUTOMATICALLY cause winning instrumental behavior

Knowing the location of each house is necessary to solve the traveling salesman problem, but the step of actually solving it is not trivial.
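To make the analogy concrete, here is a minimal sketch (made-up coordinates, brute-force search, so purely illustrative): the coordinates are the "epistemic" part, yet the best route still has to be searched for.

```python
from itertools import permutations
from math import dist

# Made-up house coordinates: this is the "knowing where everything is" part.
houses = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 4)]

def tour_length(order):
    # Length of visiting the houses in this order and returning to the start.
    return sum(dist(order[i], order[(i + 1) % len(order)]) for i in range(len(order)))

# The "actually solving it" part: brute force over all n! orderings.
best = min(permutations(houses), key=tour_length)
print(best, tour_length(best))
```

Five houses is trivial, but the search grows factorially; a perfect map does nothing to pay that cost.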

I think that it's pretty easy to sort out instrumental rationality from general how-tos: instrumental rationality is about your preferences, options, and decision-making process.

If a how-to article is primarily about how to do things in your skull, then it probably fits here. If the coupon article couldn't be rewritten to talk about arbitrary credit widgets used by pebblesorters, then it may not fit.

Honestly, ordinary self-help doesn't do anything like cost-benefit analysis, even implicitly, so it doesn't try to help people achieve their values. Business literature does often do implicit cost-benefit analysis. The best video games are very unlikely to make any list of

[djcb]

Indeed; ordinary self-help books seem to be specifically written to match what people like to hear: anyone can achieve anything, and it doesn't really take that much effort. Support for that usually comes in the form of anecdotes or quotes from famous people. A favorite is Einstein's "Imagination is more important than knowledge", which sums up the genre pretty well: it refers to some smart person, and it tells people something they like to hear -- but it is really misleading.

Of course you can pick up ideas from self-help books and see what works for you. Fight akrasia with PCT or the 7 Habits or whatever; that might be quite useful. It has, however, nothing to do (I hope) with the LW kind of rationality.

[anonymous]

"Allow i-rationality discussions, but require a stricter criteria for promoting top-level posts on the topic." I buy this.

"Allow i-rationality discussions, but try to somehow define the term so that silly things like listing the best video games of all time get excluded." I want generality in i-rationality discussions.