Ruby

LessWrong Team

 

I have signed no contracts or agreements whose existence I cannot mention.

Sequences

LW Team Updates & Announcements
Novum Organum

Comments

Ruby30

Interesting. Doesn't replicate for me. What phone are you using?

Answer by Ruby164

It's a compass rose, thematic with the Map and Territory metaphor for rationality/truthseeking.

The real question is why does NATO have our logo. 

Ruby169

Curated!  I like this post for the object-level interestingness of the cited papers, but also for pulling in some interesting models from elsewhere and generally reminding us that this is something we can do.

In times of yore, LessWrong venerated the neglected virtue of scholarship. And well, sometimes it feels like it's still neglected. It's tough because indeed many domains have a lot of low-quality work, especially outside the hard sciences, but I'd wager on there being a fair amount worth reading, and I appreciate Buck pointing at a domain where that seems to be the case.

Ruby20

Was there the text of the post in the email or just a link to it?

Ruby50

Curated. I was reluctant to curate this post because I found myself bouncing off it some due to length – I guess in pedagogy there's a tradeoff between explaining at length (you convey enough info but lose people) and keeping it brief (people read it but don't get enough). Based on a private convo, Raemon thinks the length is warranted.

I'm curating because I do think this kind of project is valuable. Every day it feels easier to lose our minds entirely to AI, and I think it's important to remember we can think better or worse, and we should be trying to do the former.

I have mixed feelings about Raemon's project overall. Parts of it feel good, something feels missing (I think I'm partial to John Wentworth's claim elsewhere that you need a bunch of technical study in the recipe), but I expect the stuff Raemon is developing to be helpful for anyone who gets better at thinking to have engaged with.

Ruby224

This doesn't seem right. Suppose there are two main candidates for how to get there, I-5 and J-6 (but who knows, maybe we'll be surprised by a K-7), and I don't know which Alice will choose. Suppose I know there's already a Very General Helper and a Kinda Decent Generalizer; then I might say "I assign 65% chance that Alice is going to choose the I-5 and will try to contribute having conditioned on that." This seems like a reasonable thing to do. It might be for naught, but I'd guess in many cases the EV of something definitely helpful if we go down the I-5 is better than the EV of finding something that's helpful no matter the choice.

One should definitely track the major route they're betting on, make updates, and maybe switch, but it seems okay to say your plan is conditioned on some bigger plan.

Ruby*1311

Edit: we are not going to technically curate this post since it's an EA Forum crosspost and for boring technical reasons that breaks the curation email. I will leave this notice up though.

Curated. This piece definitely got me thinking. If we grant that some people are unusually altruistic, empathetic, etc., it stands to reason that there are others on the other end of various distributions. And then we should also expect various selection effects on where they end up.

It was definitely a puzzle piece clicking for me that these traits can coexist with [genuine] moral conviction and that the traits are egodystonic. This rings true but somehow hadn't been an explicit model for me. Combine this with the difficulty of detecting these traits and the resultant behaviors... and yeah, there's stuff here to think about.

I appreciate that the authors were thorough in their research, but I don't especially love the format. This was pretty dense, and I think a post that pulled out the key pieces of info and argued for some conclusions would be a better read, but I much prefer this to no post.

To the extent I should add my own opinions to curation notices, my thought is this makes me update against "benefit of the doubt" when witnessing concerning behaviors. I don't know that everyone beginning to scrutinize everyone else for having big D vibes would be good, but I do think scrutinizing behaviors for being high-integrity, cooperative, transparent, etc. might actually be a good direction – with the understanding that good norms around acceptable behaviors prevent abuses that anyone (however much D) is tempted towards. Something like: we want to build "robust-to-malevolence" orgs and communities that make it impractical or too costly to manipulate, etc.

Ruby50

Welcome! Don't be too worried; you can try posting some stuff and see how it's received. Based on how you wrote this comment, I think you won't have much trouble. The New User Guide and other stuff gets worded a bit sternly because of the people who tend not to put in much effort at all and yet expect to be well received – which doesn't sound like you at all. It's hard to write one document that's stern to those who need it and more welcoming to those who need that, unfortunately.

Ruby139

Curated! It strikes me that asking "how would I update in response to...?" is both a sensible and straightforward thing to be asking, and yet not a form of question I'm seeing. I think we could be asking the same about slow vs. fast takeoff and similar questions.

The value and necessity of this question also isn't just about not waiting for future evidence to come in, but about realizing that "negative results" require interpretation too. I also think there's a nice degree of "preregistration" here as well that seems neat and maybe virtuous. Kudos and thank you.
