Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

In response to Happiness Is a Chore
Comment author: NancyLebovitz 20 December 2017 04:10:57PM 0 points

I'm somewhat annoyed that this claims there's a solution to becoming happier, goes on at some length, and doesn't include the solution.

In response to comment by magfrump on Ureshiku Naritai
Comment author: NancyLebovitz 13 April 2010 03:07:50AM 0 points

Sorry-- I should have gotten back to you sooner.

What happened with your comment above was that it seemed like an attempt to take charge of my emotions, and that's an extreme hot-button issue for me.

Also, my original comment was pushing things a little in the wrong direction-- putting too much emphasis on it being my theory.

Comment author: NancyLebovitz 20 December 2017 03:35:23PM 1 point

So, some years later, and I'm surprised I was upset. I consider this to be progress.

Comment author: Habryka 22 September 2017 07:57:19AM 2 points

I think I've figured it out. Some email servers have very strict spam requirements, and I hadn't set up our MX records properly (https://www.wikiwand.com/en/MX_record). This caused the emails to go through for the large majority of users, but not for some who had custom domain setups with strong spam filters. This should be fixed now.

Really sorry for the trouble.
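[For reference, records of this kind live in the domain's DNS zone. A hypothetical zone-file sketch — the hostnames and SPF entry are illustrative, not LessWrong's actual configuration:]

```
; Hypothetical zone-file entries (illustrative hostnames, not the real setup).
; MX records name the hosts that accept mail for the domain; some strict
; receivers reject mail whose sender domain has no resolvable MX record.
example.com.   3600  IN  MX   10 mail1.mailprovider.example.
example.com.   3600  IN  MX   20 mail2.mailprovider.example.
; An SPF TXT record authorizing those hosts to send mail for the domain:
example.com.   3600  IN  TXT  "v=spf1 include:mailprovider.example -all"
```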

Comment author: NancyLebovitz 22 September 2017 09:35:17AM 0 points

I'm in!

Thanks very much.

Comment author: Habryka 21 September 2017 07:16:06PM 0 points

Sorry, there was a miscommunication at an earlier point. We did not send out password-reset emails to everyone; however, you can request a password-reset email via the login form on the new LessWrong, which should work well.

Comment author: NancyLebovitz 21 September 2017 10:51:58PM 0 points

I've done that. Still haven't gotten an email. I've checked my spam folder.

Comment author: NancyLebovitz 21 September 2017 04:48:21PM 0 points

I didn't get the password reset email.

Comment author: NancyLebovitz 20 September 2017 01:01:42PM 0 points

LW2.0 doesn't seem to be live yet, but when it is, will I be able to use my 1.0 username and password?

Comment author: NancyLebovitz 18 September 2017 11:30:09PM 0 points

"One obvious candidate for such a generic cost effective safety intervention is a small but fully autonomous city on mars, or antarctica, or the moon, or under the ocean (or perhaps four such cities, just in case) that could produce food independently of the food production system traditionally used on the easily habitable parts of Earth."

That sort of thing might improve the odds for the human race, but it doesn't sound like it would do much for the average person who already exists.

Comment author: richardbatty 17 September 2017 08:19:47PM 1 point

"I think communities form because people discover they share a desire"

I agree with this, but would add that it's possible for people to share a desire with a community but not want to join it because there are aspects of the community that they don't like.

"Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?"

That's something I'd like to know. But I think it's important for the rationality community to attempt to serve these kinds of people both because these people are important for the goals of the rationality community and because they will probably have useful ideas to contribute. If the rationality community is largely made up of programmers, mathematicians, and philosophers, it's going to be difficult for it to solve some of the world's most important problems.

Perhaps we have different goals in mind for lesswrong 2.0. I'm thinking of it as a place to further thinking on rationality and existential risk, where the contributors are anyone who both cares about those goals and is able to make a good contribution. But you might have a more specific goal: a place to further thinking on rationality and existential risk, but targeted specifically at the current rationality community so as to make better use of the capable people within it. If you had the second goal in mind then you'd care less about appealing to audiences outside of the community.

Comment author: NancyLebovitz 17 September 2017 08:50:14PM 3 points

I'm fond of LW (or at least its descendants). I'm somewhat weird myself, and more tolerant of weirdness than many.

It has taken me years and some effort to get a no doubt incomplete understanding of people who are repulsed by weirdness.

From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen.

The community you imagine might be a very good thing. It may have to be created by the people who will be in it. Maybe you could start the survey process?

I'm hoping that the LW 2.0 software will be open source. The world needs more good discussion venues.

Comment author: DragonGod 17 September 2017 09:24:07AM *  1 point

Oh, okay. I still think we want to disincentivise downvoting though.


Pros:

  1. Users only downvote content they feel strong displeasure towards.
  2. Karma assassination via sockpuppets becomes impossible, and targeted karma attacks from your main account because you dislike a user become very costly.
  3. Moderation of downvoting behaviour would be vastly reduced, as users downvote less, and only on content they have strong feelings about.

Cons:

  1. There are far fewer downvotes.
  2. I don't think downvotes should be costly. On StackExchange, mediocre content can get a high score if it relates to a popular topic. Given that this website has the goal of filtering content so that people who only want to read a subset can read the high-quality posts, downvotes of mediocre content are useful information.

I think the first con is a feature and not a bug; it is not clear to me that more downvotes are intrinsically beneficial. The second point is valid criticism, and I think we need to weigh the benefit of the downvotes against their cost.

I suggest users lose 40% of the karma they deduct (since you want to give different users different weights). For example, if you downvote someone, they lose 5 karma, but you lose 2 karma.
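[A minimal sketch of the suggested rule, using the numbers from the example above. The function name and karma bookkeeping are hypothetical, not an actual implementation:]

```python
def apply_downvote(voter_karma, target_karma, weight=5, reflect=0.4):
    """Sketch of the suggested rule: the target loses the full downvote
    weight, and the voter loses 40% of that amount."""
    target_karma -= weight
    voter_karma -= int(weight * reflect)  # the 40% reflected cost
    return voter_karma, target_karma

# A voter at 100 karma downvotes a user at 50 karma:
print(apply_downvote(100, 50))  # -> (98, 45)
```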

Comment author: NancyLebovitz 17 September 2017 07:32:02PM 2 points

How about the boring simplicity of having downvote limits? Maybe something around one downvote/24 hours-- not cumulative.

If you're feeling generous, maybe add a downvote/24 hours per 1000 karma, with a maximum of 5 downvotes/24 hours.
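[A sketch of that allowance, assuming the generous variant: one base downvote per 24 hours, plus one per 1000 karma, capped at 5. The function and parameter names are mine:]

```python
def daily_downvote_limit(karma, base=1, per_karma=1000, cap=5):
    """Downvotes allowed per 24 hours under the suggested scheme:
    a base of one, plus one per 1000 karma, capped at `cap`.
    Not cumulative: unused downvotes don't carry over."""
    return min(base + karma // per_karma, cap)

print(daily_downvote_limit(0))      # -> 1
print(daily_downvote_limit(2500))   # -> 3
print(daily_downvote_limit(10000))  # -> 5
```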

Comment author: richardbatty 17 September 2017 06:55:47PM 6 points

You're mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside the community. Perhaps I could have argued more clearly: the thing I'm most concerned about is that you're building lesswrong 2.0 for the current rationality community, rather than thinking about what kinds of people you want contributing to it and learning from it, and building it for them. So it seems important to do some user interviews with people outside the community whom you'd like to join it.

On the weirdness point: maybe it's useful to distinguish between two meanings of 'rationality community'. One meaning is the intellectual community of people who further the art of rationality. Another meaning is more of a cultural community: a set of people who know each other as friends, have similar lifestyles and hobbies, like the same kinds of fiction, share in-jokes, etc. I'm concerned that lesswrong 2.0 will select for people who want to join the cultural community, rather than people who want to join the intellectual community. But the intellectual community seems much more important. This then gives us two types of weirdness: weirdness that comes out of the intellectual content of the community is important to keep - ideas such as existential risk fit in here. Weirdness that comes more out of the cultural community seems unnecessary - such as references to HPMOR.

We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds. They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture. I'd like to see lesswrong 2.0 be more like this, i.e. an intellectual community rather than a subculture.

Comment author: NancyLebovitz 17 September 2017 07:27:56PM 3 points

My impression is that you don't understand how communities form. I could be mistaken, but I think communities form because people discover they share a desire rather than because there's a venue that suits them-- the venue is necessary, but stays empty unless the desire comes into play.

" I'm thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned."

Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?
