> Eliezer Yudkowsky and Nate Soares have written a new book. Should we take it seriously? > > I am not the most qualified person to answer this question. If Anyone Builds It, Everyone Dies was not written for me. It’s addressed to the sane and happy majority who haven’t...
Many thanks to Brandon Goldman, David Langer, Samuel Härgestam, Eric Ho, Diogo de Lucena, and Marc Carauleanu for their support and feedback throughout. Most alignment researchers we sampled in our recent survey think we are currently not on track to succeed with alignment, meaning that humanity may well be on track...
Vaniver I'm going to expand on something brought up in this comment. I wrote: > A lot of my thinking over the last few months has shifted from "how do we get some sort of AI pause in place?" to "how do we win the peace?". That is, you could...
This idea is half-baked; it has some nice properties but doesn't seem to me like a solution to the problem I most care about. I'm publishing it because maybe it points someone else towards a full solution, or solves a problem they care about, and out of a general sense...
Announcement, Policy v1.0, evhub's argument in favor on LW. These are my personal thoughts; in the interest of full disclosure, one of my housemates and several of my friends work at Anthropic; my spouse and I hold OpenAI units (but are financially secure without them). This post has three main...
Elizabeth I've been on this big kick talking about truthseeking in effective altruism. I started with vegan advocacy because it was the most legible, but now I need to move on to the deeper problems. Unfortunately those problems are still not that legible, and I end up having to justify a...
I wrote: > I wish that people would generally get into more conflicts over the values and principles that they hold Vaniver wrote: > I think we are unusually fractious and don't value peace treaties and fences as much as we should I wrote: > Fight me :) Start of...