Ruby

LessWrong Team

 

I have signed no contracts or agreements whose existence I cannot mention.

Sequences

LW Team Updates & Announcements
Novum Organum

Comments

Ruby20

I'm curious for examples; feel free to DM me if you don't want to draw further attention to them.

Ruby60

Thread for feedback on the New Feed

Questions, complaints, confusions, bug reports, feature requests, and long philosophical screeds – here is the place!

Ruby30

I think that intellectual intimacy should include having similar mental capacities.

Seems right, for both reasons of understanding and trust.

A part of me wants to argue that these are intertwined

I think the default is they're intertwined, but the interesting thing is they can come apart: for example, you develop feelings of connection and intimacy through shared experience, falsely assume you can trust (or that you share values, or whatever), but then it turns out the shared experiences never actually filtered for that.

Ruby141

This matches with the dual: mania. All plans, even terrible ones, seem like they'll succeed, and this has flow-through effects: elevated mood, hyperactivity, etc.

Whether or not this happens in all minds, the fact that people can alternate fairly rapidly between depression and mania with minimal trigger suggests there can be some kind of fragile "chemical balance" or something that's easily upset. It's possible that's specific to mood disorders, and more stable minds are merely vulnerable to the "too many negative updates at once" thing without any greater instability.

Ruby20

To clarify here, I think what Habryka says about LW generally promoting lots of content being normal is overwhelmingly true (e.g. spotlights and curation), and this book is completely typical of what we'd promote to attention, i.e. high-quality writing and reasoning. I might say promotion is equivalent to an upvote, not to an agree-vote.

I still think there are details in the promotion here that make inferring LW agreement and endorsement reasonable:

  1. lack of disclaimers around disagreement (absence is evidence), together with a good prior that the LW team agrees a lot with the Eliezer/Nate view on AI risk
  2. promoting during pre-order (which I do find surprising)
  3. that we promoted this in a new way (I don't think this is as strong evidence as the others; mostly it's that we've only recently started doing this for events and this is the first book to come along, and we might well do it for others). But maybe we wouldn't have done it, or not at as high an effort level, absent agreement.

But responding to the OP: rather than the motivation coming from narrow endorsement of the thesis, I think a bunch of the motivation flows more from a willingness/desire to promote Eliezer[1] content, as (i) such content is reliably very good, and (ii) Eliezer founded LW and his writings make up the core material that defines so much of site culture and norms. We'd likely do the same for another major contributor, e.g. Scott Alexander.

I updated from when I first commented after thinking about what we'd do if Eliezer wrote something we felt less agreement with, and I think we'd do much the same. My current assessment is that the book placement is something like ~80-95% neutral promotion of high-quality content the way we generally do it, not because of endorsement, but maybe there's a 5-20% chance it got extra effort/prioritization because we in fact endorse the message; hard to say for sure.

 

  1. ^

    and Nate

Ruby30

LW2 had to narrow down in scope under the pressure of ever-shorter AI timelines

I wouldn't say the scope was narrowed; in fact, the admin team took a lot of actions to preserve the scope, but a lot of people have shown up for AI or are now heavily interested in it, which simply makes that the dominant topic. But I like to think that people don't think of LW as merely an "AI website".

Ruby22

It really does look dope

Ruby*110

Curated. The idea of using prediction markets as decision markets (Futarchy) was among the earliest ideas I recall learning when I found the LessWrong/Rationality cluster in 2012 (and they continue to feature in dath ilani fiction). It's valuable, then, to have an explainer on fundamental challenges with prediction markets. I suggest looking at the comments and references, as there's some debate here, but overall I'm glad to have this key topic explored critically.

Ruby43

Fwiw, it feels to me like we're endorsing the message of the book with this placement. Changing the theme is much stronger than just a spotlight or curation, not to mention that it's pre-order promotion.

Ruby81

Curated. Simple, straightforward explanations of notable concepts are among my favorite genres of posts. It's just a really great service when a person, confused about something, goes on a quest to figure it out and then shares the result with others. Given how misleading the title of the theorem is, it's valuable here to have it clarified. Something that is surprising, given what this theorem actually says and how limited it is, is that it's the basis of much other work on the strength of what it purportedly states; but perhaps people are assuming that the spirit of it is valid and that it's saved by modifications such as those John Wentworth provides. It'd be neat to see more analysis of that. It'd be sad if a lot of work cites this theorem because people believed the claim of the title without checking that the proof really supports it. All in all, kudos for making progress on all this.

This may be the most misleading title and summary I have ever seen on a math paper. If by “making a model” one means the sort of thing people usually do when model-making - i.e. reconstruct a system’s variables/parameters/structure from some information about them - then Conant & Ashby’s claim is simply false. - John Wentworth
