Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com.
(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)
I currently think this is putting too much weight on a single paragraph in Will's review. The paragraph is:
"All over the Earth, it must become illegal for AI companies to charge ahead in developing artificial intelligence as they’ve been doing."
The positive proposal is extremely unlikely to happen, could be actively harmful if implemented poorly (e.g. stopping the frontrunners gives more time for laggards to catch up, leading to more players in the race if AI development ends up resuming before alignment is solved), and distracts from the suite of concrete technical and governance agendas that we could be implementing.
I agree that what Will is saying literally here is that "making it illegal for AI companies to charge ahead as they've been doing is extremely unlikely to happen, and probably counterproductive". I think this is indeed a wrong statement that implies a kind of crazy worldview. I also think it's very unlikely to be what Will meant to say.
I think what Will meant to say is something like "the proposal in the book, which I read as trying to ban AGI development, right now, globally, using relatively crude tools like banning anyone from having more than 8 GPUs, is extremely unlikely to happen and the kind of thing that could easily backfire".
I think the latter is a much more reasonable position, and one that I think does not imply most of the things you say Will must believe in this response. My best guess is that Will is in favor of regulation that allows slowing things down, in favor of compute monitoring, and even in favor of conditional future pauses. The book does talk about these, and I find Will's IMO kind of crazily dismissive engagement with those proposals pretty bad, but I do think you are just leaning far too much on a very literal interpretation of what Will said, in a way that I think is unproductive.
(I dislike Will's review for a bunch of other reasons, including his implicit mischaracterization of the policies proposed in the book, but my response would look very different from this post)
We would like now to be called “A Center for Applied Rationality,” not “the Center for Applied Rationality.” Because we’d like to be visibly not trying to be the one canonical locus.
FWIW, I tried this for a bit and failed. Saying "a Center for Applied Rationality" just sounds nonsensical, and every time I have considered using it in conversation I predicted that I would just get weird blank stares.
I am planning to continue calling it "the Center for Applied Rationality" as a result (and am also kind of annoyed about what reads to me as basically non-grammatical language on the website and elsewhere, plus a request to use non-standard language that I think would be reliably embarrassing to actually use in conversation).
My guess is that if you want to change the usage here, you'll have to properly change the name.
I think this stuff just takes a while, and things happened to coincide with the collapse of FTX, which masked much of the already existing growth (and the collapse of FTX also indirectly caused some other funders to pull back their funding).
I will gladly take bets with people that there will be a lot more money interested in the space in 2 years than there is now.
He definitely works mostly on things he considers safety. I don't think he has done much capability benchmark work recently (though maybe I am wrong, but I figured I would register that the above didn't match my current beliefs).
In addition to Lighthaven, for which we have a mortgage, Lightcone owns an adjacent, fully unencumbered property worth around $1.2M. Lighthaven has basically been breaking even, but we still have a funding shortfall of about $1M for our annual interest payment over the last year, during which Lighthaven was ramping up utilization. It would be really great if we could somehow cash out our real estate equity to cover that one-time funding shortfall.
If you want some exposure to Berkeley real estate and/or to Lightcone's credit-worthiness, you might want to give Lightcone a loan secured against our $1.2M property. We would pay normal market interest rates on this (~6% at the moment), and if we ever default, you would get the property.
We have some very mediocre offers from banks for a mortgage like this (interest rates of around 11%, and only cashing out around $600k on the property). Banks really don't like lending to nonprofits, which tend to have pretty unstable income streams. I think there is a quite decent chance it would make more economic sense for someone who has more reason to think we won't be a giant pain to collect from to make this loan instead: from a bank's perspective we are hard to distinguish from other nonprofits, but most readers of this can distinguish us much more easily.
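To make the comparison concrete (with rough, illustrative numbers; the actual loan size and rate would of course be negotiated): a ~$1M loan at ~6% would cost us roughly $1M × 0.06 = $60k/year in interest and cover the full shortfall, whereas the bank offers work out to roughly $600k × 0.11 = $66k/year while only cashing out $600k against the $1.2M property.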
To be clear, by my lights most lenders are probably better served making some AI-related investments, which I expect will have higher risk-adjusted returns, but this could be a good bet as part of a portfolio, or for someone who doesn't want to make AI-related bets for ethical reasons.
If you're interested, or know anyone who might, feel free to DM me, or comment here, or send me an email at habryka@lesswrong.com.
Everything has a cost. Inconvenience, taste, enjoyment, economic impacts. The argument that for some reason in the domain of animal welfare we should stop doing triage and just do everything has been discussed a lot, and responded to a lot.
See also Self-Integrity and the Drowning Child.
Almost no one I know who wasn't working directly with MIRI on the book launch had read it, so it certainly didn't feel that way for me!
Many people (like >100 is my guess)
Around 100 seems vaguely right to me (if you count people working on the launch), though this quote was still an update for me!
I pre-ordered in mid May as soon as I heard about it, and since then it's been months of nearly everyone on the Internet having already read it
Wait what? I don't think almost anyone got to read it before it came out. My model is that maybe a total of like 50 pre-order copies were sent out. Maybe 100? Definitely not anything close to "nearly everyone on the internet".
Yep!
Integrity and accountability are core parts of rationality