Or, what do you want to see more or less of from Less Wrong?

I'm thinking about community norms, content and topics discussed, karma voting patterns, et cetera. There are already posts and comment sections filled with long lists of proposed technical software changes/additions; let's not make this post another one.

My impression is that people sometimes make discussion posts about things that bother them, and sometimes a bunch of people will agree and sometimes a bunch of people will disagree, but most people don't care that much (or they have a life or something) and thus don't want to dedicate a post just to complaining. This post is meant to make it socially and cognitively easy to offer critique.

I humbly request that you list downsides of existing policies even when you think the upsides outweigh them, for all the obvious reasons. I also humbly request that you list a critique/gripe even if you don't want to bother explaining why you have that critique/gripe, and even in cases where you think your gripe is, ahem, "irrational". In general, I think it'd be really cool if we erred on the side of listing things which might be problems even if there's no obvious solution or no real cause for complaint except for personal distaste for the color green (for example).

I arrogantly request that we try to avoid impulsive downvoting and non-niceness for the duration of this post (and others like it). If someone wants to complain that Less Wrong is a little cultish without explaining why, then downvoting them to oblivion, while admittedly kind of funny, is probably a bad idea. :)

[anonymous]13y570

I'd prefer more posts that aim to teach something the author knows a lot about, as opposed to an insight somebody just thought of. Even something less immediately related to rationality -- I'd love, say, posts on science, or how-to posts, at the epistemic standard of LessWrong. Also I'd prefer more "Show LessWrong" project-based posts.

I like how this was phrased positively, and suggested a specific way to fix it.

8Emile13y
Agreed; how about an open thread "what could you teach us about?" to gauge interest? A bit like this thread, but focusing on supply instead of demand. I'll post one in a few days if nobody else does first.
5badger13y
I wonder if an occasional discussion post where people can make requests or float ideas to gauge interest could help this. The motivation would be greater if you know there is an audience for your expertise.
3lukeprog13y
Agreed. There is lots of 'deep knowledge' in the brains of Less Wrongers, and I would love to see it shared!

Note: The following depicts my personal perception and feelings.

What bothers me is that Less Wrong isn't trying to reach the level of Timothy Gowers' Polymath Project, yet at the same time it acts as if it were on that level, showing no inclination to welcome lesser rationalists or less educated people who want to learn the basics.

One of the few people here who sometimes tries to actually tackle hard problems appears to be cousin_it. I haven't been able to follow many of his posts, but all of them have been very exciting and actually introduced me to novel ideas and concepts.

Currently, most of Less Wrong is just boring. Many of the recent posts are superb, clearly written and show that the author put a lot of work into them. Such posts are important and necessary. But I wouldn't call them exciting or novel.

I understand that Less Wrong does not want to intimidate most of its possible audience by getting too technical. But why not combine both worlds by creating accompanying non-technical articles that explain the issue in question and at the same time teach people the maths?

I know that some people here are working on decision theoretic problems and other technical issues related to rationality. Why don't you talk about it here on Less Wrong? You could introduce each article with a non-technical description or write an accompanying article that teaches the basics that are necessary to understand what you are trying to solve.

After seeing how highly your comment got upvoted, I just wrote an extremely hardcore post :-)

1Vladimir_Nesov13y
Nice, I actually planned to post this as a dependency for a post with a small list of technical improvements to ADT (such as "To avoid confusion, immediately perform any action that implies absurdity."), but didn't get around to writing it up.
0wedrifid13y
I just upvoted that post based on it having the phrase "Example decision theory problem" in the title. Now I'm going to actually read it. ;)
0Dr_Manhattan13y
I shall name this Karma Surfing. (not that I'm putting it down)
0lukeprog13y
I'd never heard of the Polymath Project. Thanks for the linky.
4wedrifid13y
Likewise. That kind of project is inspirational! Especially the part where it actually worked.

I'd like to see less discussion about karma.

9badger13y
I agree. In particular, it bothers me when people complain about downvotes, accuse others of downvoting them, or preface with "I know I'll be downvoted for this, but..."
-1nazgulnarsil13y
I feel such complaints are justified. Downvotes without comments are counterproductive.
0TimFreeman13y
There is an incentive to downvote without comment if you feel that your peers are better off if they don't see the post. If you're downvoting someone who happens to regard this as a political exercise rather than an intellectual exercise, they're likely to find an excuse to downvote you on one or more or many unrelated issues, so your karma is better if they don't know who you are. If you comment they will know who you are.

This incentive would go away if we had a reasonable measure of agreement, and only let votes from the 90% or 99% or so of the people closest to the consensus affect what other people see. That might require significant CPU and thinking to implement, though, so I don't know if it's worth doing. Allowing cliques that are less than 50% might let the community fracture into halves that don't perceive each other, but if the clique size is 90% then the only consequence would be to ignore votes from the outliers, which is probably a good thing.
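The clique mechanism above is described only in prose; purely as an illustration, here is a minimal sketch of one way it could work, assuming votes are stored as a per-user map of post to +1/-1. All names and data layouts here are hypothetical, not LW's actual codebase.

```python
# Hypothetical sketch: count only the votes of the 90% of users
# whose voting records agree best with the raw consensus.

def consensus_filtered_scores(votes, clique_fraction=0.9):
    # votes: {user: {post: +1 or -1}} (assumed layout)
    # Pass 1: raw consensus score for every post.
    raw = {}
    for user_votes in votes.values():
        for post, v in user_votes.items():
            raw[post] = raw.get(post, 0) + v

    # A user's agreement: the fraction of their votes that point
    # the same way as the raw total for that post.
    def agreement(user):
        pairs = [(v, raw[post]) for post, v in votes[user].items()]
        if not pairs:
            return 0.0
        return sum(1 for v, total in pairs if v * total > 0) / len(pairs)

    # Keep the clique_fraction of users closest to consensus.
    ranked = sorted(votes, key=agreement, reverse=True)
    clique = ranked[:max(1, int(len(ranked) * clique_fraction))]

    # Pass 2: recompute scores from the clique's votes alone.
    filtered = {}
    for user in clique:
        for post, v in votes[user].items():
            filtered[post] = filtered.get(post, 0) + v
    return filtered
```

Both passes are linear in the total number of votes, so the "significant CPU" worry mostly concerns how often agreement is recomputed; a nightly batch job would likely suffice.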
2HughRistik13y
Also, there is a disincentive to downvote bad comments that you want everyone to still see.
0TimFreeman13y
Somebody voted the parent comment down without replying. Given the context, that may have been a strange joke. I voted it up. In the present system, downvoting a comment causes fewer people to see it, since the system by default doesn't show you comments scoring below a user-settable threshold. I like that feature. I can't presently imagine a plausible interpretation for downvoting that yields things I'd want to downvote but still would want my peers to look at. Can you give an example?
1syllogism13y
You post a detailed reply to a low-value comment, and want your reply seen even though you don't like the parent.
0badger13y
I agree comments are more useful, but are you saying someone should never downvote without leaving a comment? Why do you think it is actually counterproductive?
4nazgulnarsil13y
Because it often seems that the person actually doesn't know why they've been downvoted.
4[anonymous]13y
I'll comment to state my agreement with this in addition to voting this up, because this was the first criticism that occurred to me, and might be one of the things that bothers me most.
3XiXiDu13y
Has this topic been discussed in detail? Personally, the reputation system mainly taught me how to play, but not for what reasons, other than maximizing my karma score. It works like a dog collar, administering electric shocks when the dog approaches a certain barrier. The dog learns where it can go, on grounds of pain. Humans can often infer only little detail from the change of a number, and the little they learn is mostly misinterpreted. People complaining about downvotes are a clear indication that this is the case. If people write, "I know I'll be downvoted for this, but...", what they mean is that they have learnt that what they are going to write will be punished, but not why, and that they are more than superficially interested in learning how they are wrong.

Has it been shown that reputation systems cultivate discourse and teach novel insights, rather than turning communities into echo chambers and their members into karma-score maximizers? If it was my sole intention, I could probably accumulate a lot of karma. Only because I often ignore what I learnt about the reputation system, and write what interests me, do I manage to put forth some skepticism. But can a community that is interested in truth and the refinement of rationality rely on people to ignore the social pressure and strong incentive applied by a reputation system, in favor of honesty and diversity? How much of what is written on Less Wrong, and how it is written, is an effect of the reputation system? How much is left unsaid?

I do not doubt that reputation systems can work, in principle. If everyone involved was perfectly rational, with a clear goal in mind, a reputation system could provide valuable feedback. But once you introduce human nature, it might become practically unfeasible, or have adverse side effects.
1NancyLebovitz13y
Perhaps we should have a social norm of asking anyone who says "I know I'll be downvoted for this" why they think so.
2wedrifid13y
I am going to stick with downvoting them regardless.
0XiXiDu13y
What's so bad about writing that you know that you'll be downvoted? Many of your comments on the recent meta-ethics threads have been downvoted (at least initially, haven't checked again). So you know that another comment that criticizes the moral theory of someone else is likely to be downvoted as well (I think you even wrote something along those lines). Saying that you are aware that what you are going to say will be downvoted provides valuable feedback. That you know that you are going to be downvoted doesn't mean that you know that you are wrong and decided to voice your wrongness again.
3wedrifid13y
Mild spaminess, unhealthy passive-aggressive habit, unnecessary insult to the reader.
2NancyLebovitz13y
I find "I know I'll be downvoted" or "I know I'll be flamed" to be tiresome, even though I don't downvote them. I'd rather be left to form my own opinion relatively freshly. Also, (and I'm not saying this applied to wedrifid), I frequently find that IKIB* is attached to something which is either innocuous or that ends up being liked.
2Barry_Cotter13y
I will also continue to downvote them, but I'm more likely to explain why.
0persephonehazard13y
And, of course, being downvoted doesn't necessarily /mean/ that you're wrong.
0NancyLebovitz13y
I hadn't thought about that policy, and I wouldn't presume to ask you to change it.
0wedrifid13y
Why thank you. I've also made an exception to my general policy of downvoting all 'should' claims for norms that don't have my complete support. :)

Minor quibble: I think the terms "rational" and "irrational" (and "rationalist", "rationality", etc.) tend to be overused, sometimes as vague "good/bad" qualifiers (I've been guilty of that). As a rule of thumb, I'd recommend against using those terms unless

  • You're using them in a narrow technical meaning, e.g. a rational utility-maximizing agent in economics, or

  • You're discussing "non-traditional mental skills" like changing one's mind, compensating for cognitive biases or dissolving confusion (i.e. not just being smart, open-minded and an atheist), or

  • You have above 10000 karma.

8wedrifid13y
Hear, hear! I get a niggling aversive reaction whenever I see those terms used when not absolutely necessary, and one of irritation when I see them used as 'good/bad' qualifiers. Even more so when the alleged 'rational' action is a subjective claim that I don't even necessarily agree with! And if those of us with 10k karma don't constrain our usage to the first two cases (and in the case of rationalist/rationality, with reluctance even then), then shame on us/them!

There's too much talk of Bayesianism in fuzzy conspiracy terms and not enough "here's the maths. learn.".

also, I hate getting karma when I'd rather have a reply.
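For what it's worth, the concrete math being asked for can be very short. A worked instance of Bayes' theorem, using the standard numbers from Eliezer's mammography example (1% prior, 80% true-positive rate, 9.6% false-positive rate):

```latex
\[
P(H \mid E)
  = \frac{P(E \mid H)\,P(H)}
         {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
  = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.096 \times 0.99}
  \approx 0.078
\]
```

A positive test moves the probability from 1% to about 7.8%; the gap between that number and most people's intuition is the usual argument for learning the math rather than the slogans.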

2NancyLebovitz13y
I could vote this up, but instead I'll say that it's especially annoying when I post articles which mostly get karma rather than replies.
0randallsquared13y
I believe "articles which mostly" would make that much clearer.
0NancyLebovitz13y
You're right. Corrected.
0[anonymous]13y
.
5gwern13y
Your own criticism is kind of fuzzy. What exactly does one write about? For example, would http://www.gwern.net/Modafinil#ordering-with-learning be the sort of Bayesian discussion you'd want to see, or is that too elementary and you'd rather something that looks like a later chapter of PT:tLoS?
4mstevens13y
That's the sort of thing I was thinking of. I want a whole series of content from elementary to advanced. I suppose what I'm calling for is LW to write a stats textbook with a LW angle on things. Of course possibly the answer is that such books already exist and I should go read them instead of LW.
2atucker13y
What would you call a LW angle on things in the context of a math textbook?
0mstevens13y
I'm not totally sure (I want to read the book!), but at the very least it'd have more real-world applications than the books on the subject I've looked at.
2David Althaus13y
Ähm, do you know of this introduction by Eliezer? Here is another one, which I found to be very helpful, by komponisto. If that does not suffice, read the "Technical Explanation" of (I meant "from", albeit it's not that funny) Eliezer. And if you aspire to become a Jedi Bayesian, just read E.T. Jaynes himself.
7wedrifid13y
Technical Explanation of Eliezer. I'd like to see that. ;)
4David Althaus13y
;) In German there is simply one word for "by, from, of". Kinda handy.

1. Too much emphasis on "altruism" and treatment of "altruists" as a special class. (As opposed to the rest of us who "merely" enjoy doing cool things like theoretical research and art, but also need the world to keep existing for that to continue happening.) No one should have to feel bad about continuing to live in the world while they marginally help to save it.

2. Not enough high-status people, especially scientists and philosophers. Do Richard Dawkins and Daniel Dennett know about LW? If not, why not? Why aren't they here? What can we do about it? Why aren't a serious-looking design and the logo of an Oxford institute enough to gain credibility? (Exception that proves the rule: Scott Aaronson has LW on his blogroll, but he was reading OB before he was high-status, and so far as I am aware, hasn't ever commented on LW as opposed to OB.)

3. Too much downvoting for disagreement, or for making non-blatant errors.

4. It's not that there are too many meetup posts, it's that there are too few content posts by comparison.

5. I sometimes feel that LW is not quite nice enough (see point 3). Visiting other internet forums quickly snaps me out of this and p...

Not enough high-status people, especially scientists and philosophers. Do Richard Dawkins and Daniel Dennett know about LW? If not, why not?

Well, to be blunt, arguing on public internet forums is not an effective way to accomplish anything much in practice. The only people who do it are those for whom the opportunity cost in time is low (and who are thus necessarily underachievers) and those who find it enjoyable enough to be worth the cost (but this is clearly negatively correlated with achievement and high status).

Also, arguing on the internet under one's real identity is a bad idea for anyone who isn't in one of these four categories: (1) those who already have absolute financial security and don't care what others will think of them, (2) those who instinctively converge towards respectable high-status opinions on all subjects, (3) those who can reliably exercise constant caution and iron self-discipline and censor themselves before writing anything unseemly, and (4) those who absolutely lack interest in any controversial topics whatsoever.

Not enough high-status people, especially scientists and philosophers.

High status people tend to be those whose actions are optimized to maximize status. Participating on Internet forums is not an optimal way to gain status in general. (Of course it can be a good way to gain status within particular forums, but by high-status people you clearly meant more widely-recognized status.)

(I disagree with Vladimir_M that "arguing on public internet forums is not an effective way to accomplish anything much in practice". In my experience it is a good way to get people interested in your ideas, further develop them and/or check them for correctness.)

Do Richard Dawkins and Daniel Dennett know about LW? If not, why not? Why aren't they here? What can we do about it? Why aren't a serious-looking design and the logo of an Oxford institute enough to gain credibility?

Probably not much we can do unless LW somehow gains widespread recognition among the public (but then we probably won't care so much about "not enough high status people"). I note that even the philosophers at FHI rarely participate here.

0curiousepic13y
I would be very interested in hearing why this is true, and the resource is at hand.
2Wei Dai13y
You can see here an explanation from Toby Ord why he decided not to continue a discussion despite some of us begging him to.
1Vladimir_Nesov13y
By the way, how far is (a saner rendering of) "moral realism" from simply a focus on "objective" in "subjectively objective values"? That is, any given agent can't escape from fixed moral truths any more than from physical reality, even though there are other physical realities and agents with other goals. This doesn't look like a disagreement.
2Wei Dai13y
Toby mentioned that moral realism went together with value simplicity, so presumably he meant a version of moral realism that implies value simplicity, from which I infer that his position is not close to "subjectively objective values".
3Vladimir_Nesov13y
Toby's comment doesn't strongly imply that he believes in value simplicity though. On the other hand, "value simplicity" can be parsed as correct as well, in the sense of pointing to human minds or even to own intuition and saying "values like this" (I weakly guess a moral realist would just use own intuition in this case instead of noticing it), so this needs further disambiguation. :-)
8steven046113y
Are people doing specific things to make you feel bad about "continuing to live in the world", or does mere discussion of altruist-relevant topics among LW altruists make you feel that way?
4[anonymous]13y
.
1komponisto13y
I was thinking of exchanges like this, in which my interlocutor took it for granted that musical taste is analogous to color preferences (and therefore of no greater intellectual interest), and displayed no interest in updating his beliefs on this question (I assume because of an unverbalized feeling that the topic isn't prestigious enough to think this deeply about). Generally, what seems to happen is an inescapable spiral of "my heuristics tell me this comment is low-status, so I'm not going to read it carefully enough to notice any argument it may contain that my heuristics are wrong".

There are way too many amazing posts with very little karma, and mediocre posts with large amounts of karma.

Not enough productive projects related to the site, like site improvements and art. The few that do show up get too little attention and karma.

Too much discussion about things like meetups and growing the community and converting people. Those things are important, but they don't belong on LW and should probably have their own site.

There is a category of religiously inspired posts that creep me out and set off cult alarms. It contains that post about staring from Scientology, and that Transcendental Meditation stuff which, while I found it interesting and perhaps useful, doesn't seem to belong on LW, and now recently these Mormon posts about growing organizations. shudder

There is a category of religiously inspired posts that creep me out and set off cult alarms. It contains that post about staring from Scientology, and that Transcendental Meditation stuff which, while I found it interesting and perhaps useful, doesn't seem to belong on LW, and now recently these Mormon posts about growing organizations. shudder

What I'm about to say has been said before, but it bears repeating. What exactly about all this stuff is setting off cult alarms for you? I had a similar problem with those posts as well, until I actually went and questioned the cult alarm in my head (which was a gut reaction) and realized that it might not be a rational reaction. Just because some scary group does something does not make it a bad thing, even if they're the only people that do it -- reversed stupidity is not intelligence. And a number of those suggested things sound like good self-improvement suggestions, which are free of religious baggage.

In general, when you're creeped out by something, you should try to figure out why you're being creeped out instead of merely accepting what the feeling suggests. Otherwise you could end up doing something bad that you wouldn't have done if you'd thought it through. Which is of course the basic purpose of the teachings on this site.

2Armok_GoB13y
I don't know how the cult alarms work; they're intuitive. I know all those things, and indeed it's probably a false alarm, but I thought I should mention it anyway. Still, if religious orgs have anything to say to rationalists about rationality, then something, somewhere, is very very wrong. That doesn't necessarily mean it's not the case or that we shouldn't listen to them, but at the very least we should have noticed the stuff they're saying on our own long ago.

I never actually stated that I accepted what the feeling said, only that I HAD the feeling. I am in fact unsure of what to think, and thus I'm trying to forward the raw data I'm working from (my intuitions) rather than my interpretation of what they mean. I should have made that clearer.

Besides, regardless of whether the feeling of being creeped out is justified, the fact that these posts creep people out is a problem, and they should try to communicate the same ideas in ways that don't creep people out so much. I don't like being creeped out.
6gscshoyru13y
Ah, ok, I misunderstood you then. Sorry, and thanks for clearing that up.

I don't agree that religious organizations having something to say to rationalists about rationality is a bad thing -- they've been around much, much longer than rationalists have, and have had way more time to come up with good ideas. And the reason why they needed to suggest it, instead of us working it out on our own, is probably because of the very thing I was trying to warn against -- in general, we as a community tend to look at religious organizations as bad, and so tend to color everything they do with the same feeling of badness, which makes the things that are actually good harder to notice.

I also do not like being creeped out. But I assume the creepiness factor comes from the context (i.e. if the source of the staring thing was never mentioned, would it have been creepy to you?). But this is probably only doable in some cases and not others (the source of meditation is known to everyone), and I'm not entirely sure removing the context is a good thing to do anyway, if all we want to do is avoid the creepiness factor. I'll have to think about that.

Being creeped out and deconstructing it instead of shying away is a good thing, and trains you to do it more automatically more often... but if we want the ideas to be accepted and used to make people stronger, would it not be best to state them in a way that is most acceptable? I don't know.
3Armok_GoB13y
Since this seems specifically directed to me I'll say "I agree" in this actual comment rather than only upvoting. I agree.
5Eugine_Nier13y
Nick Szabo has a good essay about why we should expect (religious) traditions to contain valuable insights.
4nazgulnarsil13y
Seconded. Tradition as a computational shortcut is a very important insight that I have tried (and mostly failed) to communicate to others. More generally, memes take advantage of consistent vulnerabilities in human reasoning to transmit themselves. The fact that they use this propagation method says nothing about the value of their memetic payload. We should pay attention to successful memes if we want to generate new successful memes.
0Emile13y
Your first and second paragraphs somewhat contradict each other - I agree that some traditions may be undervalued by people who'd prefer to reinvent things from whole cloth (from a software engineering perspective: rewriting a complex system you don't understand is risky), but as you say, traditions may have been selected for self-replication more than for their actual value to humans. If you consider selection at the family, village or tribe/nation level, maybe a tradition's "fitness" is how much it helps the people that follow it, but many traditions are either quite recent, or evolved in a pretty different environment. So I don't know how much value to attribute to tradition in general.
0nazgulnarsil13y
More than a teenage atheist typing in all caps, less than an evangelical :p But seriously, I think us geeky types tend toward the a priori solution in far too many circumstances. We like things neat and tidy. Untangling traditional social hierarchies and looking for lessons seems to appeal to very few.
2Armok_GoB13y
Hmm, I just came up with a good framing metaphor to make it seem less creepy: biomimicking viruses for use in gene therapy. Not very useful for purposes other than that, though.
0[anonymous]13y
Not necessarily. I don't find it surprising that we have different priorities than religious organizations when it comes to instrumental rationality, and there are also a lot more of them than there are of us, and they've been working longer. If we'd had thousands of people working for several generations on the specific problem of outreach, and they still had a nontrivial amount of advice to give us, then you'd be right, but that's just not the case.
6JohnH13y
I actually agree with this statement.
5badger13y
I think meetups and discussions about community belong on LW, but occasionally these seem to presume "we've figured all these things out, now we just have to spread them". Even the usage of "we" can dangerously set LW readers apart from others. If there is an overarching goal to a LW-based community, it would be better framed as how to be a capable and informative group that others would be interested in, rather than how to attract people per se.
0Armok_GoB13y
Yea... Still, there's WAY too much of it relative to actual content.
4Kevin13y
The meditation post wasn't about Transcendental meditation.
4atucker13y
New meetup groups mostly draw on the site, so exiling them to a different site will probably kill them off. If you had them just post on the site when they're new, you'd wind up with pretty much exactly what we have right now -- the established groups don't regularly post meetup notices.

It is often said that one of our core values is being able to change your mind and admit when you are wrong. That process involves questioning. Am I wrong about X? As a community, we should not punish questioning, and yet my own experience suggests we do.

4XiXiDu13y
Yup, I personally had a post downvoted to -11 where I honestly asked "the Less Wrong community to help me resolve potential fallacies and biases in my framing of the above ideas."

I find that Less Wrong is a conflation of about six topics:

These don't all seem to fit together entirely comfortably. Ideally, I'd split these into three more-coherent sections (singularitarianism and AI, philosophy and epistemic rationality, and applied rationality and community), each of which I think could probably be more effective as its own space.

Creepily heavy reliance on torture-based what-if scenarios.

If you try to do moral philosophy, you inevitably end up thinking a lot about people getting run over by trolleys and such. Also if you want to design good chairs, you need to understand people's butts really well. Though of course you're allowed to say it's a creepy job but still enjoy the results of that job :-)

6PlaidX13y
I haven't read TOO much mainstream philosophy, but in what I have, I don't recall even a single instance of torture being used to illustrate a point. Maybe that's what's holding them back from being truly rational?
6TimFreeman13y
I agree. I wrote the article you're citing. I was hoping that by mocking it properly it would go away.
5Dreaded_Anomaly13y
One of the major goals of Less Wrong is to analyze our cognitive algorithms. When analyzing algorithms, it's very important to consider corner cases. Torture is an example of extreme disutility, so it naturally comes up as a test case for moral algorithms.
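To make the corner-case point concrete, here is a toy sketch with entirely made-up disutility numbers (not anyone's actual moral view): a naive summation rule looks sensible on ordinary inputs and only reveals its contested commitments at extreme ones.

```python
# Toy numbers, chosen purely for illustration.
DUST_SPECK = -1e-6   # disutility of one barely-noticed dust speck
TORTURE = -1e9       # disutility of one person being tortured

def total_disutility(per_person, n_people):
    # Naive aggregation: sum the disutility over everyone affected.
    return per_person * n_people

# Ordinary case: a thousand specks beat one torture, as intuition expects.
assert total_disutility(DUST_SPECK, 1000) > total_disutility(TORTURE, 1)

# Corner case: with enough people (a stand-in for 3^^^3), summation
# declares the specks worse -- the extreme input exposes what the
# aggregation rule actually commits you to.
print(total_disutility(DUST_SPECK, 10**100) < total_disutility(TORTURE, 1))  # True
```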

I've heard that before, and I grant that there's some validity to it, but that's not all that's going on here. 90% of the time, torture isn't even relevant to the question the what-if is designed to answer.

The use of torture in these hypotheticals generally seems to have less to do with ANALYZING cognitive algorithms, and more to do with "getting tough" on cognitive algorithms. Grinding an axe or just wallowing in self-destructive paranoia.

If the point you're making really only applies to torture, fine. But otherwise, it tends to read like "Maybe people will understand my point better if I CRANK MY RHETORIC UP TO 11 AND UNCOIL THE FIREHOSE AND HALHLTRRLGEBFBLE"

There are a number of things that make me not want to self-identify as a Less Wrong user, and not bring up Less Wrong with people who might otherwise be interested in it, and this is one of the big ones.

0Bongo13y
Not necessarily even wrong. The higher the stakes, the more people will care about getting a winning outcome instead of being reasonable. It's a legit way to cut through the crap to real instrumental rationality. Eliezer uses it in his TDT paper (page 51):

As a relative newcomer, I've found it quite hard to get a sense of the internal structure of Less Wrong and the ideas presented. Once you've looked at the 'top' posts and the obvious bits of the sequences, a lot of it is quite unstructured. Little things like references to the 'Bayesian conspiracy' or the paperclip AI turn up frequently without explanation, and are difficult to follow up.

4XiXiDu13y
References & Resources for LessWrong. Maybe it helps a little bit, although there is still a lot to do. Once I've got more time again, I'll overhaul it, add some missing references and remove unnecessary items.
2badger13y
This is what the wiki was intended to address. Has the wiki been helpful or how could it be improved? Is it just a matter of being aware it's a resource?
2fburnaby13y
Thanks for mentioning this. I shared FiftyTwo's complaint until now. The wiki section is placed in a nice location, but I haven't visited it in more than a year. It seemed to be too content-poor. A quick glance suggests that it has been expanded since then, and so may be helpful now.

In roughly decreasing order of annoyance:

A varying degree of belief in utilitarianism (ranging from a confused arithmetic altruism to hardcore Benthamism) seems to be often taken for granted, and rarely challenged. The feeling I get when reading posts and comments that assume the above is very similar to what an atheist feels when frequenting a community of religious people. The fix is obvious, though: I should take the time to write a coherent, organised post outlining my issues with that.

A little Singularitarianism, specifically the assumption that self-improving AI = InstantGod®, and that donating to SIAI is the best possible EV for your cash. This isn't a big deal because they tend to be confined to their own threads. (Also, in the thankfully rare instance that someone brings up the Friendly AI Rapture even when it brings nothing to the conversation, I get to have fun righteously snarking at them, and usually get cheap karma too, perhaps from the other non-Singularitarians like me.) But it does make me feel less attached and sympathetic to other LessWrongers.

Of late, there's a lot of concern about what content should be on this site and about how to promote the site and its men...

5Risto_Saarelma13y
Are there any English-language discussion sites that aren't very Anglo-centric? The more troubling thing for me is the feeling that we're just bouncing around ideas that flow out of Silicon Valley, instead of having multiple cultural centers generating new ideas with their own slant on stuff and having a back-and-forth. There could be interesting communities in Russian, Chinese, German, French or Spanish which are producing interesting ideas and could be aligned with LW if someone would bother to translate stuff, or we could just be in a situation where the interesting new stuff that's roughly compatible with the LW meme cluster happens to emerge mostly from the Anglosphere.

The split between analytic philosophy done in English and continental philosophy done in French and German is a bit similar. And that seems to have led to mutual unintelligibility at some conceptual level, not because of language. As far as I can tell, the two schools of philosophy don't have much use or appreciation for each others' stuff even when it gets translated. There seems to be some weird deep intertwining going on with language, culture and the sort of philosophy that gets produced, and LW stuff might be subject to it as well.

It's odd in general that I feel like I have a much better idea about what's going on in the US than in most of Europe, since the primary language of most Americans is one I can understand and the primary language of most Europeans is one I can't.
2steven046113y
This simply isn't true. See, for example, the reception of this post.
4NihilCredo13y
Altruism is a common consequence of utilitarian ideas, but it's not altruism per se (which is discussed in the linked post and comments) that irks me; rather, it's the idea that you can measure, add, subtract, and multiply desirable and undesirable events as if they were hard, fungible currency. Just to pick the most recent post where this issue comes up, here is a thread that starts with a provocative scenario and challenges people to take a look at what exactly their ethical systems are founded on, but - with only a couple of exceptions, which include the OP - people just automatically skip to wondering "how could I save the most people?" (decision theory talk), or "what counts as 'people', i.e. those units of which I should obviously try to save as many as possible?". There's an implicit assumption that any sentient being whatsoever = 1 'moral weight unit', and it's as simple as that. To me, that's insane. Edit: The next one I spotted was this one, which is unabashedly utilitarian in outlook, and strongly tied to the Repugnant Conclusion.
0steven046113y
Fair enough; I guess komponisto's comment in this thread primed me to misinterpret that part of your comment as primarily a complaint about utilitarian altruism.
-6wedrifid13y

I don't like meetup posts getting in the way of actually interesting content.

I don't like the heavily blog inspired structure - I want something more like a book of core ideas, and perhaps a separate forum for discussing them and extending the core. At the moment it's very hard to "work your way in".

It would be nice to know more about other users rather than just their karma.

Content seems quite light and of low value at the moment. I may well be contributing to this.

I don't like the overlap between SIAI and LW. I'd like a clearer distinction between the two projects even if the people are the same.

I miss MoR and wish EY would finish it.

I like being notified of valuable new content via email, it makes me sad LW doesn't offer this.

6wedrifid13y
I don't share that preference, but it seems solvable in 2 seconds. The time it takes to type "ctrl-T rss to email". http://www.feedmyinbox.com/. Defining and implementing a 'valuable' metric could be slightly more difficult. An RSS feed for any comments that reach +5 could be worth implementing!
5wedrifid13y
It's good to hear a relatively new user say this. Just because it makes me feel less like the old guy reminiscing about the (selectively remembered) glory days and complaining about 'kids these days'. ;)
3NancyLebovitz13y
I agree. I don't know whether it's that the more obvious stuff has been done, or that people aren't doing the work to extend the frontiers.
4wedrifid13y
The latter. I can be confident in this because I have a mental list of all sorts of posts that I would make if I had unlimited time and motivation. That I choose not to is an indication of priorities, not an indication that there is nothing left at the boundaries.
8[anonymous]13y
Maybe you could do a quick post listing the sorts of posts you would make if you have unlimited time and motivation.
0atucker13y
Aaaah holy crap that sounds awesome! Like, you're almost making me want to get a PhD (from there)!
0wedrifid13y
The 'sequences' link seems to cover this. The difficulty seems to be that it is much harder to motivate oneself to read the book-like format. In the last week I have gone through and converted all of the hundreds of core Eliezer posts into audio format, and have them running nearly constantly on my iPod for the purpose of revision. It's going to take days to get through them all, even at that constant rate of consumption! I highly recommend this as a way to 'work your way in'. It is not quite the same as reading all of the text, but the cost is far, far lower. PS: For obvious reasons I just had to upvote your other comment!
2mstevens13y
I find the sequences hard to penetrate. I've actually found MoR to be a much better introduction. But either way I'd like to see them more prominent on the site.
1Vaniver13y
It seems like you're not interested in a core, then, but a popularization. (This is intended as a clarification, not an insult.) If one wanted an introduction to Christianity, just opening up the Bible is not a good plan.
1mstevens13y
That's somewhat true - I think a good introduction is a key part of what I'm looking for. However I also like the fact that MoR is a well structured work (start reading at the beginning, continue to the end) with some sort of consistent editorial style, which the sequences seem to lack.
0David_Gerard13y
BTW, you should pop along to a London meetup, even if only to boggle slightly. A nice bunch.
1mstevens13y
But I suspect you're all disturbingly humanoid! I know you are!
1David_Gerard13y
You'll see me sipping water in a real ale pub. A deeply disturbing sight.
2persephonehazard13y
...who are you and what have you done with, you know, /you/?!?
0David_Gerard13y
Someone attempting to keep up with a room full of people smarter than me :-) I point you at the welcome thread!
0XiXiDu13y
I think this is a largely overestimated concept, especially on LW. I doubt most people here are "smarter" than average Joe. A lot of it is due to education, a difference of interest, and a little more ease when it comes to symbol manipulation. Surely there are many more factors, like the ability to concentrate, not getting bored too quickly, being told as a child that one can learn anything if one tries hard enough, etc., but little has to do with insurmountable hardware limitations.

Eliezer Yudkowsky recently wrote: "I haven't heard of any evidence that would suggest that there are human beings who can't understand linear algebra." I myself have not yet arrived at linear algebra, because I didn't bother to learn any math when I was a teenager, but I doubt that it is something only superhuman beings can understand. I would go as far as to bet that you could teach it to someone with Down syndrome.

Take for example the number 3^^^^3. Can I hold a model of 3^^^^3 objects in my memory? No. Can I visualize 3^^^^3? No. Does that mean that I am unable to fathom some of its important properties, e.g. its scope? No.

Someone who has no legs can't run faster than you. Similar differences exist between different brains, but we don't know enough about brains, or what it means to understand linear algebra, to indiscriminately claim that someone is "smarter"...
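(For readers who haven't met it: 3^^^^3 is Knuth's up-arrow notation. A minimal sketch of the definition, runnable only for tiny inputs, which is rather the point:)

```python
def up(a, n, b):
    """Knuth's a ^^...^ b with n arrows; one arrow (n == 1) is plain a**b."""
    if b == 0:
        return 1                              # a ↑^n 0 = 1 by convention
    if n == 1:
        return a ** b                         # one arrow: exponentiation
    return up(a, n - 1, up(a, n, b - 1))      # each level iterates the one below

print(up(2, 2, 3))  # 2^^3 = 2**(2**2) = 16
# up(3, 4, 3) would be 3^^^^3: hopeless to evaluate, yet its structure
# follows completely from these few lines -- which is the point above.
```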
8persephonehazard13y
I'm not convinced anybody could teach me to understand linear algebra. Or maybe what I mean by that is that I'm not convinced of my own ability to understand linear algebra, which may be a different thing.

I have trouble with maths. More specifically, I have trouble with numbers. What I experience when faced with lots of numbers is akin to how people with dyslexia often describe trying to parse lots of written text - they swim and shift beneath my eyes, and dissolve into a mass of meaningless gobbledegook that I can't pick any sense from. And then after a while, even if I've ploughed through some of this, I start to get what I can only describe as "number fatigue", and things that previously I'd almost started to comprehend seem to slip out from my grasp.

And, when asked to do simple maths, I panic and fly into what is pretty much an anxiety attack. Which, of course, means that I'm not thinking clearly enough to untangle it all and try to start making sense of it. Maths feels utterly, utterly impenetrable to me. Half the time I can't even work out what the necessary sum is - recent examples include my having no notion of the calculations required for aspect ratio, or 10% of a weight in stones and pounds, but this also applies to much simpler things, like the time I couldn't figure out how to calculate the potential eventual fundraising total from the time elapsed, the time remaining and the money so far achieved.

I realise that in a community like this I'm going to stick out like a sore thumb, mind you ;-)
2Paul Crowley13y
This all sounds less like a lack of innate ability and more like a barrier of fear. Not to say that can't be just as disabling.
2persephonehazard13y
Certainly some of it is. The anxiety and fluster and horrible panic feeling is certainly emotional, and the "number blindness" thing is probably related too. It's much, much worse if there's anyone else around - the only thing more embarrassing than knowing I've failed simple arithmetic is failing simple arithmetic when other people who might assume I'm moronically stupid can see me doing it. And of course that makes me a nightmare to teach, because I'm horribly resistant to learning maths because I know I'll fail and look stupid and whoever it is will think I'm thick. You of all people have encountered that in me! Struggling to parse strings of numbers, though, can happen no matter how calm and unpressured and private I am. I've emailed myself things like my debit card number so that I can just cut and paste them when I buy things, because I can't always reliably type them in by looking at the card.
1Desrtopa13y
It could be a case of dyscalculia.
2persephonehazard13y
That's certainly entirely plausible, and something my mother (a primary school teacher of a quarter-century's experience, who's known a lot of children well) has always suspected. I've never had it checked out, though. Maybe I should. ETA - particularly as I've just had a look at the wikipedia article and every single thing in the symptoms list applies to me to some degree. I'm even a pretty good writer. Good grief.
1XiXiDu13y
Check the following links: here is my homepage from when I was 18, and this is another page from that year. Looks more like something made by a 14-year-old, doesn't it? And since Desrtopa mentioned dyscalculia, your case might be stronger, but it took me 3 attempts to figure out how old I was in 2002 :-)

There do exist neurological deficits that prevent people from acquiring certain skills, understanding some concepts and reaching certain performance levels, but I wouldn't jump to a conclusion in your case. A lot of it might very well have to do with what you believe to be the case, rather than what is actual. I haven't hit any barrier yet. And I only learnt to read analog clocks when I was around 14.

I admit that I can't put myself in your position, so maybe I am wrong and you should stop worrying about mathematics. I am only saying that you might as well stop caring, but not give up. In other words, do not panic; just try it and don't expect to succeed. Start small; think about the most basic problem for as long as necessary, without feeling coerced to understand it. Use objects and drawings to approach the problem. Read up on various different sources explaining the same problem. Do not stop reading, or listening to explanations, when you feel that you can't follow anymore; just read it over and over again. Then stop for a few hours or days and think about it again. And remember not to push yourself to understand it; you just do it in your spare time, for fun. If you feel overwhelmed, just forget about it and get back to it later. Write it down, print it out and plaster the walls in your bedroom with it, so that you don't need any willpower to approach the problem; the problem will approach you. You have all the time you need, even if it takes decades to understand that one simple problem.

It also helps to remember that almost everyone knows someone who is much better at something. Many people learn to play a musical instrument and never expect to become a professional
1David_Gerard13y
I wonder if that makes a difference in practical terms. There's all sorts of potential in one's genes, but one has the body, brain and personal history one ends up with. What I mean is no longer feeling like the smartest person in the room and quite definitely having to put in effort to keep up. I first encountered humans who couldn't understand basic arithmetic at university, in the bit of first-year psychology where they try to bludgeon basic statistics into people's heads. People who were clearly intelligent in other regards and not failures at life, who nevertheless literally had trouble adding two numbers with a result in the thirties. I'm still boggling 25 years later, but I was there and saw it ...
7XiXiDu13y
When I first saw a fraction, e.g. 1/4, I had real trouble accepting that it equals .25. I was like, "Uhm, why?"...when other people are like, "Okay, then by induction 2/4=.5"...it's not that I didn't understand it, but that I did not accept it. Only when I learnt that .25 is a base-10 place-value notation, which really is an implicit fraction with the denominator being a power of ten, did I begin to accept that it works (it took a lot more actually, like understanding the concept of prime factorization etc.). Which might be a kind of stupidity, but not something that would prevent me from ever understanding mathematics.

The concept of a function is another example:

  • f : X -> Y (Uhm, what?)
  • f(x) : X -> Y (Uhm, what?)
  • f(x) = x+1 (Hmm.)
  • f(1) = 1+1 (Okay.)
  • y = f(x) (Hmm.)
  • (x, y)
  • (x, f(x))
  • (1,2) (Aha, okay.)
  • (x,y) is an element of R (Hmm.)
  • R is a binary relation (Uhm, what?)
  • x is R-related to y (Oh.)
  • xRy
  • R(x,y) (Aha...)
  • R = (X, Y, G)
  • G is a subset of the Cartesian product X × Y (Uhm, what?)

...so it goes. My guess is that many people appear stupid because their psyche can't handle apparent self-evidence very well.
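(The chain above terminates in a single compact statement; written out once in standard notation, with the same symbols as in the list:)

```latex
\[
f : X \to Y
\;\Longleftrightarrow\;
R = (X, Y, G), \qquad
G = \{\, (x,\, f(x)) : x \in X \,\} \subseteq X \times Y
\]
```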
3XiXiDu13y
If only by its effect on yourself and other people. If you taboo "smarter" and replace it with "more knowledgeable" or "large inferential distance", you do not claim that one can't reach a higher level: "That person is smarter than you." = Just give up trying to understand, you can't reach that level by any amount of effort. vs. "That person is more knowledgeable than you." = Try to reduce the inferential distance by studying hard. I believe that to be the case with literally every new math problem I encounter. Until now I have been wrong each time. Basic arithmetic can be much harder for some people than others because some just do the logic of symbol manipulation while others go deeper by questioning axiomatic approaches. There are many reasons for why people apparently fail to understand something simple, how often can you pinpoint it to be something that can't be overcome?
2persephonehazard13y
See above, but I am basically one of those people. My own intelligence lies in other areas ;-)
0XiXiDu13y
Thinking about this a bit longer, I think mathematical logic is a good example that shows that their problem is unlikely to be that they are fundamentally unable to understand basic arithmetic. Logic is a "system of inference rules for mechanically discovering new true statements using known true statements." Here the emphasis is on mechanical. Is there some sort of understanding that transcends the knowledge of logical symbols and their truth values? Is arithmetic particularly more demanding in this respect?

I am obsessed with group epistemology just enough to suggest the probably-bad idea that a much-better-written version of this post should maybe be posted to the main LW section, and maybe even once a month. This allows people who don't constantly check LW discussion to get in on the fun, and if we want to avoid evaporative cooling those are just the kind of people whose critiques we most want. Not only is this perhaps a good idea for group epistemology but it is also a good signal to the wider aspiring-to-sanity community.

We'll see what the response to this post is, and plan from there, or not.

(ETA: Perhaps a general (both positive and negative) feedback post with relatively lax comment quality expectations would be better; as User:atucker points out in another comment on this post, there is utility to be had in positive feedback.)

1Dorikka13y
I think that one should be posted once per month, but not necessarily in the main LW section. My main reason for this is aesthetics, that I don't really think that such a meta post really 'belongs' on the main page. However, I'd reverse this opinion if a substantially greater number of users saw new main page posts than saw discussion posts.
[anonymous]13y90

.

Before the discussion section was implemented, I envisioned more of an "episodes from rationality in everyday life" feel, rather than the Top Posts Junior it has partly become. I think there is a large fraction of LWers who are interested in discussing the low-hanging fruit of everyday life but have been discouraged by the response to those types of posts.

I dislike the discussion/main section divide. I have to check two places for recent comments/posts. Plus, every time I want to make a post I have to decide which section to post it in, and that seems to be a not insignificant mental cost. Actually I can't really tell which posts belong where, so I've ended up posting all of them in discussion "to be safe".

4evec13y
Does checking http://lesswrong.com/r/all/new solve the problem of checking two places?
0steven046113y
Doesn't the choice of top-level post vs open thread comment have the same problems?
0Will_Newsome13y
(Also, people like me can break the implicit rules about what to post where for not-obviously-prosocial reasons, like me posting the thing about meta-ethics to Main for experimental reasons despite it being Discussion material.)

My chief complaint is that almost none of the other articles here are as engaging, compelling, or fun as Eliezer's sequences. Which I have finished reading. :(

1Kutta13y
The "Top Articles" list has a multitude of great articles and relatively little Eliezer, for lots of pages.

I think that it would be helpful to make a "What do you like about Less Wrong" post. Mostly because phrasing things negatively frames them negatively, and knowing what people like is also helpful in making the site better.

8Will_Newsome13y
I agree, I will make that post in a few days and also link back to this post in order to maximize the number of different viewers and avoid accidentally covering things simultaneously. If anyone thinks those reasons are dumb then they can just go ahead and make a "What do you like about Less Wrong?" post right now and save me the trouble. :) Combining the positive and negative posts into a single post didn't actually occur to me (embarrassingly). I'm not sure if combining them next time would be better...?

I vote against combining them, it's good to stay focused.

Not enough LW codebase programmers.

I'd like to see more people questioning orthodox assumptions, and generally more radical arguments, yet without compromising LW standards of rigor. I feel like people are too afraid to stick their necks out and seriously argue something that goes against the majority/high-status views.

3steven046113y
I think LW has a lot of low-quality critics, which in turn may be causing it to underestimate the potential for criticism in the kinds of areas that tend not to attract low-quality critics.

Ideally I'd like to see a version of the site where the upvotes and downvotes of people I tend to agree with affect what is displayed to me, rather than the upvotes and downvotes of everybody. I'd be happy if I saw votes made by 90% or maybe even 99% of the users, but there's a small minority of users who I'd rather not see. These are people who use downvoting politically against specific users, instead of using them as statements about specific articles.

Implementing this would require some math and perhaps significant CPU, so it may not be worthwhile.
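A minimal sketch of what that personalization could look like, assuming the same hypothetical {user: {post: +1/-1}} vote table as any toy model (nothing here reflects LW's actual code):

```python
def similarity(votes, me, other):
    """Fraction of co-voted posts on which `me` and `other` voted alike."""
    common = set(votes.get(me, {})) & set(votes.get(other, {}))
    if not common:
        return 0.0
    return sum(1 for p in common if votes[me][p] == votes[other][p]) / len(common)

def personalized_score(votes, me, post):
    """Score `post` for `me`, weighting each voter by my agreement with them."""
    return sum(similarity(votes, me, user) * user_votes[post]
               for user, user_votes in votes.items()
               if post in user_votes)
```

Done naively this is O(users × votes) per page view, which is where the "significant CPU" concern comes from; caching the pairwise similarities and refreshing them in occasional batches would be the obvious mitigation.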

One thing that bothers me about LW is that comments and discussion posts had their karma divided by ten (rather than some more moderate number). Surely that has to have taken away too much of the incentive to post good comments, as well as made karma less informative as the measure of general quality of thought that some are taking it for.

4Wei Dai13y
I personally wish that people would more often gather their thoughts into coherent arguments and then make them into posts, instead of spreading them over many comments. I've tried to encourage people to do this on individual occasions, but mostly without success.
1Nornagest13y
I think this ultimately comes down to whether and to what degree we want to encourage quality over volume in top-level posts. On the whole I'm pretty happy with the current balance, but several of the recurring complaints about this site (e.g. meetup post density) do seem to stem from a lack of top-level volume, so I can see an argument for changing the weighting.
0wedrifid13y
I'm not entirely sure what you are trying to say here. To be clear: All comments and discussion posts get one karma per vote. Posts on the main page get a times 10 multiplier. Which part of this do you object to?
7steven046113y
Yes, I was consciously trying to frame it in the opposite way from how it's usually framed, because I'm worried that the way it's usually framed highlights the benefits more than the costs.
0Alicorn13y
I think he may have been making a joke.
0steven046113y
The phrasing was intentionally unusual and apparently confusing (for which I apologize), but the point was meant seriously, though I'm not confident of it. Multiplying main-page karma by 10 means the same as dividing the rest by 10, unless people care about absolute rather than relative amounts of karma.
0Alicorn13y
Oh, I thought it was intended to be about status-quo bias or something.

What bothers me is that the real agenda of the LessWrong/Singularity Institute folks is being obscured by all these abstract philosophical discussions. I know that Peter Thiel and other billionaires are not funding these groups for academic reasons -- this is ultimately a quest for power.

I've been told by Michael Anissimov personally that they are working on real, practical AI designs behind the scenes, but how often is this discussed here? Am I supposed to feel secure knowing that these groups are seeking the One Ring of Power, but it's OK because they'...

0[anonymous]13y
Keep your friends close...
-5timtyler13y

While I'm griping:

I have always been puzzled and somewhat disappointed by the reception of this post. Almost all the comments seemed to fall within the following two categories: either they totally didn't understand the post at all, or thought its main point was so utterly obvious that they had trouble understanding why I had bothered to write it.

There seemed to be very few people in the targeted intermediate group, where I myself would have been a year before: those for whom the main idea was a comprehensible yet slightly novel insight.

4badger13y
The issue is that people who found it comprehensible yet slightly novel are the least likely to comment. There isn't that much they can add. So, here is a retroactive response from me: Thanks! I've been vaguely aware of this, but it's nice to see it laid out explicitly.
1gwern13y
OK. (FWIW, I upvoted that when you posted it and thought it was a very nifty post that drew out the implications of something I thought I understood already.) So what does this imply? Are said implications a problem? How would one fix said problems?
0komponisto13y
I suppose the main implication is that the readers I was targeting make up a smaller proportion of the LW readership than I had realized. Perhaps the only "fix" is for me to update my estimate of the relative size and influence of "my" audience within the general LW population, so as to better predict reaction to my posts. (Thanks for the positive feedback, by the way.)
1gwern13y
Those are good conclusions. I would have added that it'd be a good idea to be clear about who your audience is and how you can target them. This avoids alienating the 'experts', who can see the disclaimers' avowed target group, reason that they are not in it, and either stop reading or read it as an example of pedagogy. You can also try to target advanced outsiders by submitting to places like Hacker News or Reddit, something which has worked fairly well for my own 'beginner' pieces.

On the subject of powerful self-improving AI, there does not seem to be enough discussion of real-world limitations or chances for manual override on 1. the AI integrating computational power and, more importantly, 2. the AI manipulating the outside world with limited info and no dedicated or trustworthy manipulators, or manipulators weaker than Nanotech God. I no longer believe that 1 is a major (or trustable!) limit on FOOM, since an AI may be run on rented supercomputers, eat the Internet, etc., but 2 seems not to be considered very much. I've seen some clai...