Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: richardbatty 16 September 2017 12:07:41PM *  15 points [-]

Have you done user interviews and testing with people whom it would be valuable to have contribute, but who are not currently in the rationalist community? I'm thinking of people who are important for existential risk and/or rationality, such as psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially since some effective altruists have a low opinion of the rationalist community despite our goals being aligned.

You should just test this empirically, but here are some vague ideas for how you could increase the credibility of the site to these people:

  • My main concern is that Less Wrong 2.0 will come across as (or will actually be) a bizarre subculture rather than a quality intellectual community. The rationality community is off-putting to some people who on the face of it should be interested (such as myself). A few ways you could improve the situation:
    • Reduce the use of phrases and ideas that are part of rationalist culture but are inessential for the project, such as references to HPMOR. I don't think calling the moderation group "sunshine regiment" is a good idea for this reason.
    • Encourage the use of standard jargon from academia where it exists, rather than LW jargon. Only coin new jargon words when necessary.
    • Encourage writers to do literature reviews to connect to existing work in relevant fields.
  • It could also help to:
    • Encourage quality empiricism. It seems like rationalists have a tendency to reason things out without much evidence. While we don't want to force a particular methodology, it would be good to nudge people in an empirical direction.
    • Encourage content that's directly relevant to people doing important work, rather than mainly being abstract stuff.
Comment author: Nisan 19 September 2017 04:28:16AM 1 point [-]

Regarding a couple of your concrete suggestions: I like the idea of using existing academic jargon where it exists. That way, reading LW would teach me search terms I could use elsewhere or to communicate with non-LW users. (Sometimes, though, it's better to come up with a new term; I like "trigger-action plans" way better than "implementation intentions".)

It would be nice if users did literature reviews occasionally, but I don't think they'll have time to do that often at all.

Comment author: Duncan_Sabien 31 May 2017 06:18:26AM 0 points [-]

Can I get contact info from you? I already have Malcolm's; if there's an email address you can use, send a message to TK17Studios at gmail dot com, and I can then offer that address to anyone without an obvious check-in person.

Comment author: Nisan 31 May 2017 07:13:56AM 0 points [-]


Comment author: malcolmocean 28 May 2017 08:27:09PM 5 points [-]

I am open to being an outside advisor / buddy / contact etc to individuals within this and/or with the project as a whole.

Comment author: Nisan 31 May 2017 05:36:59AM 2 points [-]

Me too!

Comment author: Qiaochu_Yuan 27 May 2017 10:19:02PM *  0 points [-]

Partially, I'm afraid that if this doesn't go well, our community will lose a cohort of promising people.

I really don't know what you mean by "lose" here (and I'm worried that others will have varying interpretations as well). Do you mean they'll become less promising? Not promising? Leave the community? Go crazy? Die?

Anyway, this seems sensible, but I still want to nudge you and everyone else in the direction of sharing more explicit models of what you think could actually go wrong.

Comment author: Nisan 27 May 2017 11:39:08PM 3 points [-]

Sorry, I was imagining a scenario where a person has an unpleasant experience and then leaves the community because for the last several months all their close contacts in the community were in the context of an unpleasant living situation. That's bad for the person, and unfortunate for the community as well.

Comment author: Duncan_Sabien 27 May 2017 04:51:31PM 0 points [-]

I've come around somewhat to the outside buddy idea below; I dunno about the buddies knowing each other. That seems to introduce a whole new layer of difficulty, unless you're just talking about, like, an email list.

Comment author: Nisan 27 May 2017 08:58:36PM 2 points [-]

Cool. Yes, a mailing list sounds even better than the low-tech solution I had in mind, which was "every buddy learns 80% of the names of the other buddies through the grapevine, and they happen to be one or two hops away on the social network".

Comment author: Qiaochu_Yuan 27 May 2017 06:56:21PM 0 points [-]

Sure, but what I'd like to know is why Nisan thinks that difference is important in this case.

Comment author: Nisan 27 May 2017 08:49:43PM 1 point [-]

I'm not proposing a house policy here. I'm suggesting that a Dragon would do well to have regular followups with someone outside the house, and I'm proposing that some members of the wider community offer to be those someones.

In the past I've had regular video calls with a couple people who were doing long-term experiments with their lifestyle; I think it was helpful. I believe such an arrangement was part of the Leverage polyphasic sleep experiment.

Jacob is right: There's a difference between a friend one can reach out to if one needs to, and a friend one is scheduled to talk to once a week. Personally, I struggle to keep up with friends without scheduled meetings, and it sounds like the Dragon Army will be very busy.

Also, there is a difference between reaching out to a friend when things have gone very wrong and one needs to get out; and bringing up a less drastic problem during a weekly check-in. In the first case, you need a couch to crash on and maybe a lawyer. In the second case, you need someone who will listen to you and bring an outside perspective, and maybe refer you to other resources.

Partially, I'm afraid that if this doesn't go well, our community will lose a cohort of promising people. It would be a shame if that happened because we failed to pay attention to how they were doing.

But also, if the experiment goes very well, this arrangement would be a means by which the wider community can learn from what went right.

Comment author: Nisan 27 May 2017 04:53:44AM 10 points [-]

Are there people external to the project who are going to keep an eye on this? I think it would be sensible for each participant to have a buddy outside the house who checks in with them regularly. And for each buddy to know who the other buddies are.

Comment author: Nisan 25 December 2016 04:48:11AM 3 points [-]

It's a curious refutation. The author says that the people who are concerned about superintelligence are very smart, the top of the industry. They give many counterarguments, most of which can be easily refuted. It's as if they wanted to make people more concerned about superintelligence, while claiming to argue the opposite. And then they link directly to MIRI's donation page.

Comment author: Nisan 11 December 2016 09:43:51PM 2 points [-]

Maybe you'll cover this in a future post, but I'm curious about the outcomes of CFAR's past AI-specific workshops, especially "CFAR for ML Researchers" and the "Workshop on AI Safety Strategy".

Comment author: IlyaShpitser 17 January 2016 05:32:17AM 3 points [-]

Imagine an undirected graph where each node has a left and right neighbor (so it's an infinitely long chain). You are on a node in this graph, and somewhere to the left or right of you is a hotel (50/50 chance to be in either direction). You don't know how far -- k steps for an arbitrarily large k that an adversary picks after learning how you will look for a hotel.

The solution that takes 1 step left, 2 steps right, 3 steps left, etc. will find the hotel in O(k^2) steps. Is it possible to do better?

Comment author: Nisan 17 January 2016 08:10:25AM 4 points [-]

I can get O(k).
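
Nisan doesn't spell out the O(k) strategy here, but the standard approach to this line-search (or "lost cow") problem is to grow the sweep radius geometrically rather than by one: walk 1 right, 2 left, 4 right, 8 left, and so on, returning through the origin each time. A sketch of that strategy as a simulation (the function name and step-counting are mine, not from the thread):

```python
def doubling_search(hotel):
    """Search an infinite line for `hotel` (a nonzero integer position),
    starting at 0 and sweeping to +1, -2, +4, -8, ..., doubling the
    radius and flipping direction each time. Returns total steps walked."""
    pos, steps, radius, sign = 0, 0, 1, 1
    while True:
        target = radius * sign
        # Each sweep moves monotonically from pos through 0 to target,
        # so it visits every point in between.
        if min(pos, target) <= hotel <= max(pos, target):
            return steps + abs(hotel - pos)  # stop as soon as we hit it
        steps += abs(target - pos)
        pos = target
        radius *= 2
        sign = -sign
```

Each failed sweep to radius r costs at most 2r to walk out and back, and the radii below the one that finally covers the hotel form a geometric series summing to O(k), so the total is O(k) steps (the classical bound for this deterministic strategy is 9k). Growing the radius by 1 instead, as in the parent comment, makes the failed sweeps sum to O(k^2).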
