Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

Raemon

Oh, to be clear, I don't think it was bad for you to post this as-is. Just that I'd like to see more follow-up.

Raemon

This post seems important-if-right. I get a vibe from it of aiming to persuade more than explain, and I'd be interested in multiple people gathering/presenting evidence about this, preferably including at least some who are (currently) actively worried about China.

Raemon

I've recently made a pull request (not quite ready to merge yet) that gives LessWrong Fatebook hoverovers (which are different from embeds). I'm considering also making embeds, although I think the UI takes up a bit too much space by default.

I am into "more Fatebook integration everywhere".

(I think individual Fatebook questions can toggle whether to show/hide predictions before you've made your own.)

Raemon

This seems right to me, but the discussion of "scaling will plateau" usually feels like it comes bundled with "and the default expectation is that this means LLM-centric AI will plateau," which seems to me like the wrong belief to have.

Raemon

Noting: this doesn't really engage with any of the particular other claims in the previous comment's link; it just makes a general assertion.

Raemon

Curated. This was one of the more inspiring things I read this year (in a year that had a moderate number of inspiring things!).

I really like how Sarah lays out the problem and desiderata for neutrality in our public/civic institutional spaces.

LessWrong's strength is being a fairly opinionated "university"[1] about how to do epistemics, which the rest of the world isn't necessarily bought into. Trying to make LW a civic institution would fail. But this post has me more excited to revisit "what would be necessary to build good, civic infrastructure" (where "good" requires both "be 'good' in some kind of deep sense" and "be memetically fit enough to compete with Twitter et al."; one solution might be convincing Musk of specific policies rather than building a competitor).

  1. ^

    I.e., a gated community with epistemic standards, a process for teaching people, and a process for some of those people going on to do more research.

Raemon

You can make a post or shortform discussing it and see what people think. I recommend front-loading the main arguments, evidence, or takeaways so people can easily get a sense of it; people often bounce off long worldview posts from newcomers.

Raemon

Fwiw I didn't find the post hostile. 

Raemon

I'm assuming "natural abstraction" is also a scalar property. Reading this paragraph, I refactored the concept in my mind to "some abstractions tend to be cheaper to abstract than others. Agents will converge to using cheaper abstractions. Many cheapness properties generalize reasonably well across agents/observation-systems/environments, but all of those could in theory come apart."

And the Strong NAH would be "cheap-to-abstract-ness will be very punctuated, or something" (i.e., you might expect less of a smooth gradient of cheapness across abstractions).

Raemon

How would you solve the example legal situation you gave?
