This from June lists a lot of people who have read it, including Stephen Fry, Grimes, professors, etc. Separately, on Twitter, seemingly everyone who was someone in the scene had given their opinion after having read it.
Any thread from the first announcement onward had people saying they'd read it already. From the same thread (and that was early on):
Many people (>100 is my guess), with many different viewpoints, have read the book and offered comments.
Note that IFP (a DC-based think tank) recently had someone deliver 535 copies of their new book, one to every US Congressional office.
More endorsements, and there are also a lot of Twitter personalities who had mentioned reading it, which I won't hunt down. It definitely felt like a lot more than 50. I'm not arguing it's a bad or good strategy, just that it felt a bit off to wait for months on a 'pre-order' when seemingly anyone I might see on Twitter who would've been interested had already read it.
While I mostly did it out of support and to reduce x-risk, pre-ordering “If Anyone Builds It, Everyone Dies” has been one of the more frustrating book-ordering experiences I've had. The main purpose of the pre-order campaign looks to have been successful enough, and the actual reading experience doesn't matter all that much, but still:
I pre-ordered in mid-May, as soon as I heard about it, and since then it's been months of nearly everyone on the Internet having already read it. Then pre-order prices (barely relevant) were lowered, which seems a bit backwards. And now that it's been 'out', I still don't have the book (or even an estimated shipping date - from Amazon, Germany), while everyone who hadn't already posted about it has now been posting reviews, etc.
This is kind of annoying, as I'm not reading any of the commentary now. Reading the book firsthand, when I've already pre-ordered it, would seem to make more sense, but by the time I even get it, most of the initial conversation will long since have happened, so at this point I'm having a worse experience for having pre-ordered it.
Again, that experience is not that important, and I've benefited a lot from Eliezer's other writing before, etc., but it's disappointing enough to vent about in at least one comment before taking the L and moving on.
While I believe the SC2 and Dota agents could be beaten today with sufficient effort, the models didn't quite perform at a superhuman level, and as far as I am aware no community bots do either.
One of the reasons it's plausible that today's or tomorrow's LLMs could produce brief simulations of consciousness or even qualia is that something similar happens with dreams in humans. Dreams are likely some sort of information processing/compression/garbage collection, yet they still produce (badly) simulated experiences as a clear side effect of working with human experience data.
I still want something even closer to GiveWell but for AI safety (though it is easier to find where to donate now than before). Hell, I wouldn't mind if LW itself had recommended charities in a prominent place (though I guess LW now mostly asks for Lightcone donations instead).
Thanks for sharing this. Based on the About page, my 'vote' as an EU citizen working in an ML/AI position could conceivably count for a little more, so it seems worth doing. I'll put it in my backlog and aim to get to it on time (it does seem like a lengthy task).
If you don't know who to believe, then falling back on prediction markets, or at least expert consensus, is not the worst strategy.
Do you truly not believe that for your own life - to use the examples there - solving aging, curing all disease, and solving energy would be even more valuable? To you? Perhaps you don't believe those are possible, but then that's where the whole disagreement lies.
And if you are talking about superintelligent AGI and automation, why even talk about output per person? I thought you at least believed people would be automated out and thus decoupled from output?
Does he not believe in AGI and superintelligence at all? Why not just say that?
AI could cure all diseases and “solve energy”. He mentions “radical abundance” as a possibility as well, but beyond the R&D channel
This is clearly about superintelligence, and the mechanism through which it would happen in that scenario is straightforward and often discussed. If he disagrees, he either doesn't believe in AGI (or at least advanced AGI), or believes that solving energy and curing disease are not that valuable? Or he is purposely talking about a pre-AGI scenario while arguing against post-AGI views?
to lead to an increase in productivity and output *per person*
This quote certainly suggests so. It's just hard to tell whether that's due to bad reasoning or on purpose, to promote his start-up.
A somewhat related thing I do is read/watch stories with clever/intelligent/rational (or whatever I want to be) characters, such as Death Note or HPMOR (I have a bunch of other examples). This both seems to prime me to think a bit like them (or to enter a mode where I feel my narrative is similar to theirs), and it gives me role models I can fall back on in some situations (like in your Naruto example). This has definitely at least partially worked (might be placebo), as I almost always have more motivation, which I act on, to study or do productive things after watching/reading such a story.