Comment author: Habryka 22 September 2017 07:57:19AM 1 point [-]

I think I've figured it out. Some email servers have very strict spam requirements, and I hadn't set up our MX records properly (https://www.wikiwand.com/en/MX_record). This caused the emails to go through for the large majority of users, but not for some who had custom domain setups with strict spam filters. This should be fixed now.
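
A minimal sketch of such a check, using the third-party dnspython package; the domain, function name, and error handling below are illustrative placeholders, not the site's actual configuration:

    # Minimal MX-record check using the third-party dnspython package
    # (pip install dnspython). "example.com" is a placeholder domain.
    import dns.resolver

    def check_mx(domain):
        try:
            answers = dns.resolver.resolve(domain, "MX")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            print(f"{domain}: no MX records published")
            return
        # Sending servers try lower preference values first.
        for record in sorted(answers, key=lambda r: r.preference):
            print(f"{domain}: preference {record.preference} -> {record.exchange}")

    check_mx("example.com")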

Really sorry for the trouble.

Comment author: NancyLebovitz 22 September 2017 09:35:17AM 0 points [-]

I'm in!

Thanks very much.

Comment author: Habryka 21 September 2017 07:16:06PM 0 points [-]

Sorry, there was a miscommunication at an earlier point. We did not send out password-reset emails to everyone; however, you can request a password-reset email from the login form on the new LessWrong, which should work well.

Comment author: NancyLebovitz 21 September 2017 10:51:58PM 0 points [-]

I've done that. Still haven't gotten an email. I've checked my spam folder.

Comment author: NancyLebovitz 21 September 2017 04:48:21PM 0 points [-]

I didn't get the password reset email.

Comment author: NancyLebovitz 20 September 2017 01:01:42PM 0 points [-]

LW2.0 doesn't seem to be live yet, but when it is, will I be able to use my 1.0 username and password?

Comment author: NancyLebovitz 18 September 2017 11:30:09PM 0 points [-]

"One obvious candidate for such a generic cost effective safety intervention is a small but fully autonomous city on mars, or antarctica, or the moon, or under the ocean (or perhaps four such cities, just in case) that could produce food independently of the food production system traditionally used on the easily habitable parts of Earth."

That sort of thing might improve the odds for the human race, but it doesn't sound like it would do much for the average person who already exists.

Comment author: richardbatty 17 September 2017 08:19:47PM 1 point [-]

"I think communities form because people discover they share a desire"

I agree with this, but would add that it's possible for people to share a desire with a community but not want to join it because there are aspects of the community that they don't like.

"Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?"

That's something I'd like to know. But I think it's important for the rationality community to attempt to serve these kinds of people, both because they are important for the community's goals and because they will probably have useful ideas to contribute. If the rationality community is largely made up of programmers, mathematicians, and philosophers, it's going to be difficult for it to solve some of the world's most important problems.

Perhaps we have different goals in mind for lesswrong 2.0. I'm thinking of it as a place to further thinking on rationality and existential risk, where the contributors are anyone who both cares about those goals and is able to make a good contribution. But you might have a more specific goal: a place to further thinking on rationality and existential risk, but targeted specifically at the current rationality community so as to make better use of the capable people within it. If you had the second goal in mind then you'd care less about appealing to audiences outside of the community.

Comment author: NancyLebovitz 17 September 2017 08:50:14PM 3 points [-]

I'm fond of LW (or at least its descendants). I'm somewhat weird myself, and more tolerant of weirdness than many.

It has taken me years and some effort to get a no doubt incomplete understanding of people who are repulsed by weirdness.

From my point of view, you are proposing to destroy something I like, and which has been somewhat useful, in the hope of creating a community that might not happen.

The community you imagine might be a very good thing. It may have to be created by the people who will be in it. Maybe you could start the survey process?

I'm hoping that the LW 2.0 software will be open source. The world needs more good discussion venues.

Comment author: DragonGod 17 September 2017 09:24:07AM *  1 point [-]

Oh, okay. I still think we want to disincentivise downvoting though.

Pros

  1. Users only downvote content they feel strong displeasure towards.
  2. Karma assassination via sockpuppets becomes impossible, and targeted karma attacks from your main account on a user you dislike become very costly.
  3. Moderation of downvoting behaviour would be vastly reduced, as users would downvote less, and only on content they have strong feelings about.

Cons

  1. There are far fewer downvotes.
  2. I don't think downvotes should be costly. On StackExchange, mediocre content can get a high score if it relates to a popular topic.
    Given that this website has the goal of filtering content so that people who only want to read a subset can read the high-quality posts, downvotes of mediocre content are useful information.

I think the first con is a feature and not a bug; it is not clear to me that more downvotes are intrinsically beneficial. The second point is valid criticism, and I think we need to weigh the benefit of the downvotes against their cost.

I suggest users lose 40% of the karma they deduct (since you want to give different users different weights). For example, if you downvote someone, they lose 5 karma, but you lose 2 karma.
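
A minimal sketch of that rule, assuming a flat 40% cost ratio; the function name and karma figures are illustrative, not actual LessWrong code:

    # Sketch of the proposed costly-downvote rule: the voter pays a
    # fixed fraction (40%) of the karma the target loses. Hypothetical
    # names and numbers; not actual LessWrong code.
    DOWNVOTE_COST_RATIO = 0.4

    def apply_downvote(voter_karma, target_karma, vote_weight):
        """Return (new_voter_karma, new_target_karma) after one downvote."""
        target_karma -= vote_weight
        voter_karma -= DOWNVOTE_COST_RATIO * vote_weight
        return voter_karma, target_karma

    # The example above: a 5-point downvote costs the voter 2 karma.
    voter, target = apply_downvote(voter_karma=100, target_karma=50, vote_weight=5)
    print(voter, target)  # 98.0 45.0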

Comment author: NancyLebovitz 17 September 2017 07:32:02PM 1 point [-]

How about the boring simplicity of having downvote limits? Maybe something around one downvote per 24 hours, not cumulative.

If you're feeling generous, maybe add one downvote per 24 hours per 1000 karma, up to a maximum of 5 downvotes per 24 hours.
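
A minimal sketch of that allowance, assuming the base rate, per-1000-karma bonus, and cap described above; the function name and sample karma values are illustrative:

    # Sketch of the proposed allowance: one base downvote per 24 hours,
    # plus one per 1000 karma, capped at 5 per 24 hours.
    def daily_downvote_limit(karma):
        return min(1 + karma // 1000, 5)

    for k in (0, 1500, 4000, 10000):
        print(k, daily_downvote_limit(k))  # 1, 2, 5, 5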

Comment author: richardbatty 17 September 2017 06:55:47PM 6 points [-]

You're mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside of the community. Perhaps I could have argued more clearly: the thing I'm most concerned about is that you're building lesswrong 2.0 for the current rationality community rather than thinking about what kinds of people you want to be contributing to it and learning from it and building it for them. So it seems important to do some user interviews with people outside of the community who you'd like to join it.

On the weirdness point: maybe it's useful to distinguish between two meanings of 'rationality community'. One meaning is the intellectual community of people who further the art of rationality. Another meaning is more of a cultural community: a set of people who know each other as friends, have similar lifestyles and hobbies, like the same kinds of fiction, share in-jokes, etc. I'm concerned that less wrong 2.0 will select for people who want to join the cultural community, rather than people who want to join the intellectual community. But the intellectual community seems much more important. This gives us two types of weirdness: weirdness that comes out of the intellectual content of the community is important to keep - ideas such as existential risk fit in here. Weirdness that comes out of the cultural community seems unnecessary - such as references to HPMOR.

We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds. They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture. I'd like lesswrong 2.0 to be more like this, i.e. an intellectual community rather than a subculture.

Comment author: NancyLebovitz 17 September 2017 07:27:56PM 3 points [-]

My impression is that you don't understand how communities form. I could be mistaken, but I think communities form because people discover they share a desire rather than because there's a venue that suits them; the venue is necessary, but it stays empty unless the desire comes into play.

" I'm thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned."

Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?

Comment author: richardbatty 16 September 2017 12:07:41PM *  15 points [-]

Have you done user interviews and testing with people whose contributions would be valuable, but who are not currently in the rationalist community? I'm thinking of people who are important for existential risk and/or rationality, such as psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.

You should just test this empirically, but here are some vague ideas for how you could increase the credibility of the site to these people:

  • My main concern is that lesswrong 2.0 will come across as (or will actually be) a bizarre subculture, rather than a quality intellectual community. The rationality community is off-putting to some people who on the face of it should be interested (such as myself). A few ways you could improve the situation:
    • Reduce the use of phrases and ideas that are part of rationalist culture but are inessential for the project, such as references to HPMOR. I don't think calling the moderation group "sunshine regiment" is a good idea for this reason.
    • Encourage the use of standard jargon from academia where it exists, rather than LW jargon. Only coin new jargon words when necessary.
    • Encourage writers to do literature reviews to connect to existing work in relevant fields.
  • It could also help to:
    • Encourage quality empiricism. It seems like rationalists have a tendency to reason things out without much evidence. While we don't want to force a particular methodology, it would be good to nudge people in an empirical direction.
    • Encourage content that's directly relevant to people doing important work, rather than mainly being abstract stuff.

Comment author: NancyLebovitz 16 September 2017 02:37:09PM 7 points [-]

It seems to me that you want to squeeze a lot of the fun out of the site.

I'm not sure how far it would be consistent with having a single focus for rationality online, but perhaps there should be a section or a nearby site for more dignified discussion.

I think the people you want to attract are likely to be busy, and not necessarily interested in interviews and testing for a rather hypothetical project, but I could be wrong.

Comment author: Elo 16 September 2017 07:50:13AM 1 point [-]

Yes, it will probably cause people to devalue the site. If you pay a dollar, it will tend to "feel like" the entire endeavour is worth a dollar.

Comment author: NancyLebovitz 16 September 2017 02:29:35PM 0 points [-]

Metafilter has continued to be a pretty good site even though it requires a small fee to join. There's also a requirement to post a few comments and wait a week after sending in money before making top-level posts (you can comment for free, but you need to be a member to make top-level posts). And it's actively moderated.

http://www.metafilter.com/about.mefi
