Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

In response to Just a photo
Comment author: scarcegreengrass 29 October 2017 01:22:12AM 1 point [-]

I first see the stems, then I see the leaves.

I think humans spend a lot of time looking at our models of the world (maps) and not that much time looking at our actual sensory input.

Comment author: richardbatty 16 September 2017 12:07:41PM *  15 points [-]

Have you done user interviews and testing with people who it would be valuable to have contribute, but who are not currently in the rationalist community? I'm thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.

You should just test this empirically, but here are some vague ideas for how you could increase the credibility of the site to these people:

  • My main concern is that lesswrong 2.0 will come across as (or will actually be) a bizarre subculture, rather than a quality intellectual community. The rationality community is off-putting to some people who on the face of it should be interested (such as myself). A few ways you could improve the situation:
    • Reduce the use of phrases and ideas that are part of rationalist culture but are inessential for the project, such as references to HPMOR. I don't think calling the moderation group "sunshine regiment" is a good idea for this reason.
    • Encourage the use of standard jargon from academia where it exists, rather than LW jargon. Only coin new jargon words when necessary.
    • Encourage writers to do literature reviews to connect to existing work in relevant fields.
  • It could also help to:
    • Encourage quality empiricism. It seems like rationalists have a tendency to reason things out without much evidence. While we don't want to force a particular methodology, it would be good to nudge people in an empirical direction.
    • Encourage content that's directly relevant to people doing important work, rather than mainly being abstract stuff.
Comment author: scarcegreengrass 19 September 2017 04:05:08PM 1 point [-]

This is a real dynamic that is worth attention. I particularly agree with removing HPMoR from the top of the front page.

Counterpoint: The serious/academic niche can also be filled by external sites, like https://agentfoundations.org/ and http://effective-altruism.com/.

Comment author: pepe_prime 13 September 2017 01:20:21PM 10 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: scarcegreengrass 19 September 2017 03:59:44PM 8 points [-]

I took the survey. It's probably my favorite survey of each year :) Thanks.

Comment author: John_Maxwell_IV 10 September 2017 07:49:54AM *  7 points [-]

Note that neither Lumifer, nor Dagon, nor Brillyant have ever made a top-level submission of original content to Less Wrong. It's easy to be a critic.

Since Lumifer, Dagon, and Brillyant seem to want a site that never has anything new on it, may I suggest example.com? It hardly ever changes.

...what did people say they'd need to rejoin [Less Wrong]?

Feel free to read these yourselves (they're not long), but I'll go ahead and summarize: It's all about the content. Content, content, content. No amount of usability improvements, A/B testing or clever trickery will let you get around content. People are overwhelmingly clear about this; they need a reason to come to the site and right now they don't feel like they have one. That means priority number one for somebody trying to revitalize LessWrong is how you deal with this.

Source. Less Wrongers overwhelmingly want there to be more posts.

The problem with comments like Lumifer's is not that they are incorrect. It's that they create a bad incentive structure for content creators. Anyone who posts to LW is doing free labor in an attempt to improve the accuracy of the community's beliefs. I believe that lukeprog, Eliezer, and Yvain have all complained that writing LW posts is not very rewarding. If there's some probability that the Lumifers of the world are going to call your post "stupid" without offering any specific feedback, that makes the job even more thankless. And no, this is not necessarily something a person can predict in advance: a previous post chaosmage made got voted to +55, and the ideas in it were being used by a friend of mine years after it was made.

The cost of an occasional bad post is not very high: you read it until you realize it is bad and then you move on. But the cost of nasty comments like Lumifer's can be quite high. Most online communities suck, and nasty comments are a big part of the reason why. If I were selling a product you could spray on an online community to prevent anything from growing there, I would name it Lumifer.

Comment author: scarcegreengrass 12 September 2017 05:37:03PM 0 points [-]

I agree, although I do not dislike Lumifer's comments in general, just the overly negative ones.

Comment author: ChristianKl 09 September 2017 08:09:17AM 3 points [-]

Dry-cleaning feels like a service that would be improved if you didn't have to bring your clothes to the dry-cleaner but could just hand them to a self-driving vehicle that comes to your place.

Comment author: scarcegreengrass 12 September 2017 05:29:07PM 0 points [-]

Something like this sounds plausible ... or at least, it's very similar to existing pickup laundry companies.

Comment author: Elo 10 September 2017 09:45:07PM 1 point [-]

Try writing things down in a car. Then try 3D printing while dealing with acceleration and bumps.

Comment author: scarcegreengrass 12 September 2017 05:26:30PM 0 points [-]

Maybe it only works in places with very straight freeways, like deserts :P

Comment author: ChristianKl 24 August 2017 04:04:12PM 1 point [-]

Both the chance that the supervolcano erupts within the next 100 years and the chance that it's more likely to erupt with this project in place are known unknowns.

Why do you assume the danger from unknown unknowns is large?

Comment author: scarcegreengrass 29 August 2017 05:32:52PM 0 points [-]

Just because the magnitude of the bad outcome is enormous. Caution seems prudent for such a slow, dangerous process.

In response to Like-Minded Forums
Comment author: scarcegreengrass 29 August 2017 05:28:01PM 0 points [-]

https://www.metaculus.com is an interesting community that makes predictions about future events. The stakes are points, not money.

Comment author: scarcegreengrass 24 August 2017 03:45:18PM 0 points [-]

Here is a variant idea (not sure if feasible): Set up an organization with the mission statement of building this geothermal plant by 2150 CE. It can start very low-staff, invest some money, and invest in relevant research. Then, after spending 100 or so years investigating the risks, it can start digging.

Motivation: We can spare 100 years when it comes to geology. We could approach this with next century's science and technology.

Comment author: scarcegreengrass 24 August 2017 03:46:32PM 0 points [-]

This is assuming the danger from unknown unknowns is large. Speaking as a non-expert, I would guess that it is.
