15th May 2011


More and more of LessWrong's posts are meta-rationality posts: posts about how to be rational, how to avoid akrasia, in general, without any specific application.  This is probably the intended purpose of the site.  But they're starting to bore me.

What drew me to LessWrong is that it's a place where I can put rationality into practice, discussing specific questions of philosophy, value, and possible futures, with the goal of finding a good path through the Singularity.  Many of these topics have no other place where rational discussion of them is possible, online or off.  Such applied topics have almost all moved to Discussion now, and may be declining in frequency.

This isn't entirely new.  Applied discussions have always suffered bad karma on LW (statistically; please do not respond with anecdotal data).  I thought this was because people downvote a post if they find anything in it that they disagree with.  But perhaps a lot of people would rather talk about rationality than use it.

Does anyone else have this perception?  Or am I just becoming a LW old geezer?

At the same time, LW is taking off in terms of meetups and number of posts.  Is it finding its true self?  Does the discussion of rationality techniques have a larger market than debates over Sleeping Beauty?  (I'm even beginning to miss those!)  Is the old concern with values, artificial intelligence, and the Singularity something for LW to grow out of?

(ADDED: Some rationality posts are good.  I am also a lukeprog fan.)


79 comments

Agreed.

One person at the Paris meetup made the really interesting and AFAICT accurate observation that the more prominent a Less Wrong post was, the less likely it was to be high quality: i.e., comments are better than Discussion posts, which are better than Main (with several obvious and honorable exceptions).

I think maybe it has to do with the knowledge that anything displayed prominently is going to have a bunch of really really smart people swarming all over it, critiquing it, and making sure you get very embarrassed if any of it is wrong. People avoid posting things they're not sure about, and so the things that get main-ed tend to be restatements of things that create pleasant feelings in everyone reading them without rocking any conceivable boat, and the sort of overly meta topics you're talking about lend themselves to those restatements - for example, "We should all be more willing to try new things!" or "Let's try to be more alert for biases in our everyday life!"

Potential cures include greater willingness to upvote posts that are interesting but non-perfect, greater willingness to express small disagreements in "IAWYC but" form, and greater willingness to downvote posts that are applause lights or don't present non-obvious new material. I'm starting to do this, but hitting that downvote button when there's nothing objectively false or stupid about a post is hard.

I agree that theoretical-sciency-mathy-insightful stuff is less common now than when Eliezer was writing posts regularly. I suspect this is largely because writing such posts is hard. Few people have that kind of knowledge, thinking ability, and writing skills, and the time to do the writing.

As someone who spends many hours writing posts only to have them nit-picked to death by almost everyone who bothers to comment, I appreciate your advice to "express small disagreements in 'IAWYC but' form."

As for your suggestion to downvote posts that "don't present non-obvious new material," I'm not sure what to think about that. My recent morality post probably contains only material that is obvious to someone as thoroughly familiar with LW material as yourself or Phil Goetz or Will Newsome or Vladimir Nesov or many others, but on the other hand a great many LWers are not quite that familiar, or else haven't taken the time to apply earlier lessons to a topic like morality (and were thus confused when Eliezer skipped past these basics and jumped right into 'Empathic Metaethics' in his own metaethics sequence).

0 · Scott Alexander · 13y
I enjoyed your morality post, as I do most of your posts, and certainly wouldn't accuse it of not presenting non-obvious new material.
9 · steven0461 · 13y
I don't find it hard, but whenever I vote a comment below zero for not adding anything, it just gets fixed back to zero by someone who probably wouldn't have voted otherwise.
7 · Will_Newsome · 13y
I had implicitly resignedly assumed that the bland re-presentation of old material and applause light posts were part of a consciously directed memetic strategy. Apparently I'd underestimated the size of the disgruntled faction. From now on I will be less merciful with my downvoting button.
2 · atucker · 13y
As the author of one of the rehash posts, I agree that these sorts of topics are generally pretty boring and uninteresting to read. There's nothing surprising or new in them, and they seem pretty obvious when you read them. But the point (of mine, at least) wasn't really to expose any new material, so much as to try to push people into doing something useful. As far as I can tell, a large portion of the readers of LW don't implement various easy life-improvement methods, and it was really more intended as a push to encourage people to use them. On the one hand, a lot of interesting stuff on LW is "applied rationality" and it's really fun to read, but I'm fairly skeptical as to how useful it is for most people. There's nothing wrong with it being interesting and fun, but there are other things to talk about.
2 · jsalvatier · 13y
Perhaps it would be easier and/or more constructive to comment 'I don't disagree with anything here, but I don't think this is valuable'?
4 · Scott Alexander · 13y
Perhaps, but I expect far fewer people would do so: it's less anonymous and more likely to cause confrontations/bad feelings.
2 · Nornagest · 13y
Sounds like a great time to invoke some strategic applied sociopathy.
2 · Rain · 13y
Well-Kept Gardens Die By Pacifism seems particularly relevant here.
1 · [anonymous] · 13y
One part of what's going on may be that the site allows anyone to register and vote, and so there's a feedback loop where people who are less like the core demographic and more like the rest of the internet come in and vote for posts that appeal more to average people from the internet, which in turn causes more average people from the internet to register and vote, and all this creates a pressure for the site to become every other site.

Another part of what's going on may be that the site has been focusing more and more on the idea that rationality gives you easy and obvious personal superpowers (as opposed to just helping you figure out what goal to strive toward and with what strategies), and while I'm not saying there's no truth to that, it doesn't strike me as being why most of us originally got interested in these issues, and a lot of the support for it feels like it was selected to support an easily-marketable bottom line.

I'm somewhat puzzled by your terminology since the topics you call "meta-rationality":

about how to be rational, how to avoid akrasia, and so on.

strike me as much more practical and applied than the ones you call "applied rationality":

philosophy, value, and possible futures

which strike me as much more meta.

Going by the list of topics you're complaining about, it appears that you are the one who "would rather talk about rationality than use it."

Phil's terminology is probably the way I would have worded the same thing.

Posts that talk about things like "how do we use the anthropic principle", "what is morality", "what decision theory makes sense", "what is a mysterious answer to a mysterious question", etc. all seem object-level...

...whereas there's another class of posts that always uses the word "rationality" - i.e., "how can we be more rational in our lives", "how can we promote rationality", "am I a good enough rationalist if...", "who is/isn't a rationalist", et cetera - and these seem properly termed meta-level because they involve being rational about rationality.

I have a feeling the latter class of posts would benefit if they tried to taboo "rationality".

5 · Bongo · 13y
or: use rationality and don't mention it.
4 · David_Gerard · 13y
Bingo. Perhaps these would be good rewrite targets.
1 · jsalvatier · 13y
Much clearer than the original post.
2 · PhilGoetz · 13y
I see your point. I don't think of them as meta, because I see them as rungs on a ladder with a definite destination. I changed the wording a little.
6 · jsalvatier · 13y
Perhaps 'abstract' is a better word than 'meta' here.
8 · Gray · 13y
The prefix 'meta' is incredibly overused...just saying.
0 · wedrifid · 13y
Bravo
2 · Bongo · 13y
Yeah, I agree with PhilGoetz but downvoted because of bizarre terminology.

Does the discussion of rationality techniques have a larger market than debates over Sleeping Beauty? (I'm even beginning to miss those!)

Wow, I'd forgotten all about those. Those days were fun. We actually had to, well, think occasionally. Nothing remotely challenging has cropped up in a while!

Those days were fun. We actually had to, well, think occasionally. Nothing remotely challenging has cropped up in a while!

If you like thinking about challenging theoretical rationality problems, there are plenty of those left (logical uncertainty/bounded rationality, Pascal's wager/mugging/decision theory for running on error-prone hardware, moral/value uncertainty, the nature of anticipation/surprise/disappointment/good and bad news, complexity/Occam's razor).

I've actually considered writing a post titled "Drowning in Rationality Problems" to complain about how little we still know about the theory of rationality and how few LWers seem to be working actively on the subject, but I don't know if that's a good way to motivate people. So I guess what I'd like to know is (and not in a rhetorical sense), what's stopping you (and others) from thinking about these problems?

I'd like you to write that post.

3 · Wei Dai · 13y
Maybe I will, after I get a better idea why more people aren't already working on these problems. One reason not to write it is that the feeling of being drowned in big problems is not a particularly good one, and possibly de-motivating. Sometimes I wish I could go back to 1998, when I thought Bayesianism was a pretty much complete solution to the problem of epistemology, except for this little issue of what to expect when you're about to get copied twice in succession...

By the way, have you seen how I've been using MathOverflow recently? It seems that if you can reduce some problem to a short math question in standard terms, the default next action (after giving it your own best shot) should be posting it on MO. So far I've posted two problems that interested me, and both got solved within an hour.

1 · XiXiDu · 13y
It's all magic to me but it looks like a very effective human resource. Have you considered pushing MathOverflow to its limits and seeing if those people there might actually be able to make valuable contributions to open problems faced by Less Wrong or the SIAI?

I assume that the main obstacle in effectively exploiting such resources as MathOverflow is to formalize the problems that are faced by people working to refine rationality or create FAI. Once you know how to ask the right questions, one could spread them everywhere and see if there is someone who might be able to answer them, or if there is already a known solution. Currently it appears to me that most of the important problems are not widely known, a lot of them being mainly discussed here on Less Wrong or on obscure mailing lists. By formalizing and spreading the gist of those problems one would be able to make people aware of Less Wrong and risks from AI and exploit various resources.

What I am thinking about is analogous to a huge roadside billboard with a short but succinct description of an important problem. Someone really smart or knowledgeable might drive by and solve it. Not only would the solution be valuable but you would win a potential new human resource.
5 · cousin_it · 13y
I'm all for exploiting resources to the limit! The bottleneck is formalizing the problems. It's very slow and difficult work for me, and the SIAI people aren't significantly faster at this task, as far as I can see.
0 · XiXiDu · 13y
No! What is particularly demotivating for me is that I don't know what heuristics I can trust and when I am better off trusting my intuitions (e.g. Pascal's Mugging). If someone were to survey the rationality landscape and outline what we know and where we run into problems, it would help a lot by making people aware of the big and important problems.
9 · lukeprog · 13y
I suspect that clearly defining open rationality problems would act as a focusing lens for action, not a demotivator. Please do publish your list of open rationality problems. Do for us what Hilbert did for mathematicians. But you don't have to talk about 'drowning.' :)
2 · utilitymonster · 13y
Second the need for a list of the most important problems.
7 · wedrifid · 13y
Part of this issue for me, at least with respect to thinking about the problems in the context of lesswrong, is that the last efforts in that direction were stifled rather brutally - to the extent that we lost one of the most technically oriented and prolific users. This isn't to comment on the rightness or wrongness of that decision - just a description of an influence. Having big brother looming over the shoulder specifying what you may think makes it work instead of fun. And not work that I have any particular comparative advantage in! (I can actually remember thinking to myself, "folks like Wei Dai are more qualified to tackle these sort of things efficiently, given their prior intellectual investments".)

Creating a decision theory email list with a large overlap of LW posters also served to dilute attention, possibly reducing self-reinforcing curiosity to some degree.

But for me personally I have just had other things to be focusing my intellectual attention on. I felt (theoretical) rationality and decision theory get uncached from my brain as I loaded it up with biology and German. This may change in the near future. I'm heading over to Jasen's training camp next month and that is likely to kick the mental focus around.

I second cousin_it's interest in your aforementioned post! It would actually be good to know which problems are not solved as opposed to which problems I just don't know the solution to. Or, for that matter, which problems I think I know the solution to but really don't.
6 · Wei Dai · 13y
I do not really get this reaction. So what if Eliezer has a tendency to over-censor? I was once banned completely from a mailing list but it didn't make me terribly upset or lose interest in the subject matter of the list. The Roko thing seems even less of a big deal. (I thought Roko ended up agreeing that it was a mistake to make the post. He could always post it elsewhere if he doesn't agree. It's not as if Eliezer has control over the whole Internet.)

I didn't think I had any particular advantage when I first started down this path either. I began with what I thought was just a fun little puzzle in an otherwise well-developed area, which nobody else was trying to solve because they didn't notice it as a problem yet. So, I'm a bit wary about presenting "a list of really hard and important problems" and scaring people away. (Of course I may be scaring people away just through this discussion, but probably only a minority of LWers are listening to us.)

I guess another factor is that I have the expectation that if someone is really interested in this stuff (i.e., has a "burning need to know"), they would already have figured out which problems are not solved as opposed to which problems they just don't know the solution to, because they would have tried every available method to find existing solutions to these problems. It seems unlikely that they'd have enough motivation to make much progress if they didn't have at least that level of interest. So I've been trying to figure out (without much success) how to instill this kind of interest in others, and again, I'm not sure presenting a list of important unsolved problems is the best way to do it.
2 · wedrifid · 13y
I'm not sure either. It would perhaps be a useful reference but not a massive motivator in its own right. What I know works best as a motivator for me is putting up sample problems - presenting the subject matter in 'sleeping hitchhiker terrorist inna box' form. When seeing a concrete (albeit extremely counterfactual) problem I get nerd-sniped. I am being entirely literal when I say that it takes a massive amount of willpower for me to stop myself from working on it. To the extent that there is less perceived effort in tackling the problem for 15 hours straight than there is in putting it aside. And that can be the start of a self-reinforcement cycle at times.

The above is in contrast to just seeing the unsolved problems listed. That format is approximately inspiration-neutral.

By the way, is that decision theory list still active? I was subscribed but haven't seen anything appear of late.
5 · Wei Dai · 13y
That seems like a useful datum, thanks. It's still active, but nobody has made a post for about a month.
-1 · wedrifid · 13y
Ahh, there we go. Cousin_it just woke it up!
0 · Vladimir_Nesov · 13y
Discussing things that are already known can help in understanding them better. Also, the "burning need to know" occasionally needs to be ignited, or directed. I don't study decision theory because I like studying decision theory in particular, even though it's true that I always had a tendency to obsessively study something.

But decision theory ought to be a natural attractor for anyone with intellectual interests (any intellectual question -> how am I supposed to answer questions like that? -> epistemology -> Bayesianism -> nature of probability -> decision theory). What's stopping people from getting to the end of this path? Or am I just a freak in my tendency to "go meta"?

What's stopping people from getting to the end of this path?

The wealth of interesting stuff located well before the end.

3 · cousin_it · 13y
Seconding Eliezer. Also, please do more of the kind of thinking you do :-)
3 · Eliezer Yudkowsky · 13y
Yes, you're a freak and nobody but you and a few other freaks can ever get any useful thinking done and didn't we sort of cover this territory already?
8 · Wei Dai · 13y
I'm confused. Should I stop thinking about how exactly I'm "freaky" and how to possibly reproduce that "freakiness" in others? Has the effort already reached diminishing returns, or was it doomed from the start? Or do you think I'm just looking for ego-stroking or something?
0 · Davorak · 13y
Going meta takes resources. Resources could instead be applied directly to the problem in front of you. If not solving the problem right in front of you causes long-term, hard-to-recover-from problems, it makes sense to apply your resources directly to the problem at hand. So: it seems rational when enough excess resources are available.

To make more people follow this path you need:

* To increase the resources of those you are trying to teach.
* To lower the resource cost of following the path.

Lesswrong.com and Lesswrong meetup groups teach life skills to increase the members' resources. At the same time they gather people who know skills on the path with those who want to learn, lowering the resource cost of following the path. Many other methods exist; I have just mentioned two.

A road is being built; it has just not reached where you are yet. Perhaps you are ahead of the road, marking the best routes or clearing the ground, but not everyone has the resources to get so far without a well-paved road.
1 · Will_Newsome · 13y
Or morality! (Any action -> but is that the right thing to do? -> combinatorial explosion of extremely confusing open questions about cognitive science and decision theory and metaphysics and cosmology and ontology of agency and arghhhhhh.) It's like the universe itself is a Confundus Charm and nobody notices. How much of decision theory requires good philosophical intuition? If you could convince everyone at MathOverflow to familiarize themselves with it and work on it for a few months, would you expect them to make huge amounts of progress? If so, I admit I am surprised there aren't more mathy folk sniping at decision theory just for meta's sake.
4 · jimrandomh · 13y
I wasn't aware this list existed, but would be very interested in reading its archives. Do you have a link?
2 · Oscar_Cunningham · 13y
I second jimrandomh's interest in the mailing list. Can I be signed up for it? Are there archives?
0 · wedrifid · 13y
decision-theory-workshop.googlegroups.com. I'm not sure who admins it (and so can confirm new subscribers). It's a google group so the archive may well survive heat death.
3 · Oscar_Cunningham · 13y
Thanks. (BTW Google seems to be messing with the structure of the URLs for groups, the address that currently works is https://groups.google.com/group/decision-theory-workshop/ )
5 · Vladimir_Nesov · 13y
I've been less engaged with the old topics for the last several months while trying to figure out an updateful way of thinking about decision problems (understand the role of observations, as opposed to reducing them to non-observations as UDT does; and construct an ADT-like explicit toy model). This didn't produce communicable intermediate results (the best I could manage was this post, for which quite possibly nobody understood the motivation). Just a few days ago, I think I figured out the way of formalizing this stuff (which is awfully trivial, but might provide a bit of methodological guidance to future research).

In short, progress is difficult and slow where we don't have a sufficient number of tools which would suggest actionable open problems that we could assign to metaphorical grad students. This also sucks out all motivation for most people who could be working on these topics, since there is little expectation of success and little understanding of what such success would look like. Even I actually work while expecting to most likely not produce anything particularly useful in the long run (there's only a limited chance of limited success), but I'm a relatively strange creature.

Academia additionally motivates people by rewarding the activity of building in known ways on existing knowledge without producing a lot of benefit, but producing visible (and possibly high-quality, if mostly useless) results that gradually build up to systematic improvements.
0 · cousin_it · 13y
Uhh, so why don't I know about it? Could you send an email to me or to the list?
0 · Vladimir_Nesov · 13y
Because it's awfully trivial and it's not easy to locate all the pieces of motivation and application that would make anyone enthusiastic about this. Like the fact that action and utility are arbitrary mathematical structures in ADT and not just integer outputs of programs.
1 · cousin_it · 13y
Hm, I don't see any trivial way of understanding observational knowledge except by treating it as part of the input-output map as UDT suggests. So if your idea is different, I'm still asking you to write it up.
0 · Vladimir_Nesov · 13y
In one sentence: Agent sees the world from within a logical theory in which observations are nonlogical symbols. I'll of course try to write this up in time.
0 · FAWS · 13y
I'm reasonably sure that's because the problem you see doesn't actually exist in your example and you only think it does because you misapplied UDT. If you think this is important, why did you never get back to our discussion there as you promised? That might have resulted either in a better understanding of why this is so difficult for other people to grasp (if I was misunderstanding you or making a non-obvious mistake), or in a dissolution of the apparent problem or examples where it actually comes up (if I was right).
1 · Vladimir_Nesov · 13y
For some reason, I find it difficult to reason about these problems, and have never acquired a facility of easily seeing them all the way through, so it's hard work for me to follow these discussions. I expect I was not making an error in understanding the problem the way it was intended, and figuring out the details of your way of parsing the problem was not a priority.

It feels emotionally difficult to terminate a technical discussion (where all participants invested nontrivial effort), while postponing it for a short time can be necessary, in which case there is an impulse to signal to others the lack of intention to actually stop the discussion, to signal the temporary nature of the present pause (but then, motivation to continue evaporates or gets revoked on reflection). I'll try to keep in mind that making promises to continue the discussion is a bad, no good way of communicating this (it happened recently again in a discussion with David Gerard about merits of wiki-managing policies; I edited out the promise in a few hours).

At this point, if you feel that you have a useful piece of knowledge which our discussion failed to communicate, I can only offer you a suggestion to write up your position as a (more self-contained) discussion post.
3 · lukeprog · 13y
Yes. There are tons of open, difficult rationality/philosophical problems. If they haven't 'cropped up in a while' on LW, it's because those who are thinking about them aren't taking the time to write about them. That's quite understandable because writing takes a lot of time. However, I tend to think that there are enough very smart rationalists on LW that if we can cogently bring everybody up to the cutting edge and then explain what the open problems are, progress will be made. That's really where I'm going with my metaethics sequence. I don't have the hard problems of metaethics solved; only the easy ones. I'm hoping that bringing everybody up to the cutting edge and explaining what the open problems are will launch discussions that lead to incremental progress.
1 · Will_Newsome · 13y
I have a vague intuition there's something interesting that could happen with self-modifying AIs, with creator and successor states knowably running on error-prone hardware while having pseudo-universal hypothesis generators that will of course notice the possibility of value corruption. I guess I'm still rooting for the 'infinite reflection = contextually perfect morality' deus ex machina.

Utility functions as they're normally idealized for imagining superintelligence behavior, like in Basic AI Drives, look an awful lot like self-protecting beliefs, which feels more and more decision-theoretically wrong as time goes on. I trust the applicability of the symbols of expected utility theory less over time, and trust common beliefs about the automatic implications of putting those symbols in a seed AI even less than that. Am I alone here?

How much of decision theory requires good philosophical intuition? If you could convince everyone at MathOverflow to familiarize themselves with it and work on it for a few months, would you expect them to make huge amounts of progress? If so, I admit I am surprised there aren't more mathy folk sniping at decision theory just for meta's sake.

The reason I am not attempting to tackle those problems is because I hang out with Steve Rayhawk and assume that if I was going to make any progress I'd have to be roughly as smart and knowledgeable as Steve Rayhawk, 'cuz if he hasn't solved something yet that means I'd have to be smarter than him to solve it. I subconsciously intuit that as impossible, so I try to specialize in pulling on less mathy yarns instead, which is actually a lot more possible than I'd anticipated but took me a long time to get passable at.
0 · timtyler · 13y
The current theory is all fine - until you want to calculate utility based on something other than expected sensory input data. Then the current theory doesn't work very well at all. The problem is that we don't yet know how to code: "not what you are seeing, how the world really is" in a machine-readable format.
0 · John_Maxwell · 12y
I don't think it's necessary to frame the large number of problems you identify as a case of "drowning". An alternative framing might be one about unexplored territory, something like "rationality is fertile ground for intellectual types who wish to acquire status by solving problems". As for why more people aren't working on them, it could come down to simple herd effects, or something like that.
0 · XiXiDu · 13y
Could someone point me to an explanation of what is meant by 'logical uncertainty'? This sounds incredibly interesting, I would love to read it!
8 · cousin_it · 13y
Logical uncertainty is uncertainty about the unknown outputs of known computations. For example, if you have a program for computing the digits of pi but don't have enough time to run it, you have logical uncertainty about the billionth digit. You can express it with probabilities or maybe use some other representation. The mystery is how to formulate a decision process that makes provably "nice" decisions under logical uncertainty, and to precisely define the meaning of "nice".
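(To make the pi example concrete, here is a minimal Python sketch, not from the original thread: the `pi_digits` helper and the uniform prior are illustrative choices of mine, and a uniform distribution over digits is just one defensible way to "express it with probabilities".)

```python
from decimal import Decimal, getcontext

def pi_digits(n):
    """Pi as a string with n digits after the decimal point,
    via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = n + 10            # guard digits
    eps = Decimal(10) ** -(n + 8)

    def arctan_inv(x):
        # arctan(1/x) = sum_{k>=0} (-1)^k / ((2k+1) * x^(2k+1))
        power, total, k = Decimal(1) / x, Decimal(0), 0
        while power > eps:
            term = power / (2 * k + 1)
            total += term if k % 2 == 0 else -term
            power /= x * x
            k += 1
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi)[:n + 2]                # "3." plus n digits

# Logical uncertainty about an unseen digit: a uniform prior is one
# crude way to represent it with probabilities.
n = 100                                   # a cheap stand-in for "billionth"
prior = {d: 0.1 for d in range(10)}

# Paying the computational cost resolves the logical uncertainty:
digit = int(pi_digits(n)[-1])
posterior = {d: (1.0 if d == digit else 0.0) for d in range(10)}
print(digit, posterior[digit])            # 9 1.0 (the 100th digit of pi)
```

What the sketch leaves out is the actual mystery cousin_it describes: standard probability theory assumes logical omniscience, so it says nothing about how those 0.1s should behave under partial computation or cheap proofs before you can afford to run the program.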
0 · Risto_Saarelma · 13y
So basically the stuff you don't know because you don't have logical omniscience.

I'm certainly trying to apply rationality to solve big important problems, but that is taking me a while. About half of my posts so far have been written (sneakily) for the purpose of later calling back to them when making progress in metaethics and CEV.

9 · David_Gerard · 13y
That's why they're good, then: they have a real problem behind them.

I share Phil's perception that LW is devoting more time to what you might call "practical rationality in everyday life" and less to the theory of rationality, and his feeling that that's less interesting (albeit perhaps more useful).

I share everyone else's opinion that Phil's terminology is bizarre.

My main concern about Less Wrong recently has been the proliferation of posts related to the Singularity and HP: MoR, which I frankly don't care about. For a site that encourages people to think outside the box, it's at times biased against unorthodox opinions, or at least, I get downvoted for arguing against the Singularity and immortality and for pointing out flaws in MoR. At these times the site seems cultish in a way that makes me feel uncomfortable.

I was drawn here both by Eliezer's meta-rationality posts and by discussions about quantum mechanics, philosophy of mathematics, game theory, and such...

If you want to talk about quantum mechanics, philosophy of mathematics, game theory, and such, why not start threads about those topics instead of arguing against the Singularity and immortality and pointing out flaws in MoR—things you don't even care about?

7 · PhilGoetz · 13y
I'm confused - you perceive a dissociation, yet you seem to agree with the emphasis on discussions of rationality. If we want LW to go in opposite directions, and both come to the conclusion that LW is going in the wrong direction, is there a conservation-of-evidence problem here? What would it take for someone to believe LW is going in the right direction?
6 · lucidfox · 13y
I think there's a false dichotomy here. You want LW to feature more discussions of applied rationality, of practical uses of the mental skills sharpened here. I want LW to feature more discussions about abstract matters, about the framework of rationality and the means to sharpen said skills. The two aren't necessarily mutually exclusive. One doesn't have to arrive at the expense of the other.

What I don't want LW to become is a Singularity cult or a personality cult, or really any kind of cult - a community where anyone not sharing the group's mainstream opinions is considered wrong by default. I'm not saying LW is or has become that - generally, I found that at least discussions of non-mainstream opinions are welcome as long as they're backed by valid arguments - but I do see signs that it could turn that way.

In my vision for the future of the rationalist community, most members are interested in the core of meta-rationality and anti-akrasia and each is interested in a set of peripheral topics (various ways of putting rationality into practice, problems like Sleeping Beauty, trading tutoring, practicing skills, helping the community in practical ways, study groups, social meetings with rationalists, etc.). Some fringe members will be involved in the peripherals and rationality applications but not theory, but they probably won't last long. LW is the core, and will...

I wish there were more posts that tried to integrate math with verbal intuitions, relative to posts that are either all the way on the math side or all the way on the words side.

It seems rather like Eliezer Yudkowsky's blog without (much) Eliezer Yudkowsky.

Which is unfortunate - if understandable.

I think that less Singularity discussion is the result of the related topics having already been discussed many times over. There hasn't been a new idea in AI and decision theory in a while. I'm not implying, though, that we've finished these topics once and for all. There is certainly a huge amount of stuff to be discovered; it's just that we don't seem to happen upon much of it these days.

Quality is a bigger concern than subject matter. But that is easily solved by just reading posts - mainly posts by Luke. :)

Is the old concern with values, artificial intelligence, and the Singularity something for LW to grow out of?

"A community blog devoted to refining the art of human rationality" suggests those aren't actually the focus, and that when LW grows up it won't be about AI and the Singularity.

I do agree that some more application would be good, but that tends to go in discussion if at all. Better there than nowhere.

One of the big things about improving rationality is 'Getting Crap Done' and I think the problem is that for an online community wherein most of us are anonymous, there's not a lot on here to help us with that.

Now this site has helped me conceptualize and visualize in a way that I didn't realize was possible. It helped me to see things as they are, and how things could be. The problem is that whilst I'm flying ahead in terms of vision, I still sleep in and get to work late, I still play World of Warcraft over going to the local Toastmasters meetup, I still...

Is not 'how to be rational, how to avoid akrasia' how one puts 'rationality into practice'? Without hard working producers there is no singularity.

+1 for suitable filtering, or a decent subclustering that keeps everyone happy

0 · [anonymous] · 13y

I would bet that we'll see a resurgence of discussion on decision theory, anthropics, etc. in the next few months. If I'm as typical a user as I think I am, then there are a dozen or so people who were largely drawn to LessWrong by those topics, but who stayed silent as they worked on leveling up. lukeprog's recent posts will probably accelerate that process.

0 · [anonymous] · 13y

More and more, LessWrong's posts are meta-rationality posts, about how to be rational, how to avoid akrasia, and so on. This is probably the intended purpose of the site. But they're starting to bore me.

Agree. The part that makes them boring is that the 'how to' stuff is, basically, rubbish. There are other communities dedicated to in-the-moment productivity guides, by people who know far more about the subject - albeit people who maybe don't two-box and are perhaps dedicating all their 'productivity' towards 'successful' but ultimately not very important goals.