As of right now, I expect we have at least a decade, perhaps two, until we get an AI that generalizes at a human level of intelligence (which is what I consider AGI). This is a controversial statement in these social circles, and I don't have the bandwidth or resources to write a concrete and detailed argument, so I'll simply state an overview here.
Scale is the key variable driving progress to AGI. Human ingenuity is irrelevant. Lots of people believe they know the one last piece of the puzzle to get AGI, but I increasingly expect the missing pieces to be too alien for most researchers to stumble upon just by thinking about things without doing compute-intensive experiments.
Scaling will increasingly require more and larger datacenters and a lot of power. Humanity's track record at accomplishing megaprojects is abysmal. If we find ourselves needing to build city-sized datacenters (with all the infrastructure required to maintain and supply them), I expect that humanity will take twice the initially estimated time and resources to build something with 80% of the planned capacity.
So the main questions for me, given my current model, are these: how many more OOMs of scaling do we need before we get AGI, and how hard will each additional OOM be to achieve?
Both questions are very hard to answer with the rigor I'd consider adequate given their importance. If you pressed me to answer, however: my intuition is that we'd need at least three more OOMs, and that the difficulty of each additional OOM grows exponentially, which I approximate as a doubling of the time taken per OOM. Given that Epoch's historical trends imply it takes about two years per OOM, I'd expect that we have roughly 2 + 4 + 8 = 14 more years before the labs stumble upon a proto-Clippy.
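Spelling out the arithmetic (assuming, per the above, that the first additional OOM takes Epoch's historical two years and each subsequent OOM takes twice as long as the one before):

$$T \approx \sum_{k=0}^{2} 2 \cdot 2^{k} \text{ years} = 2 + 4 + 8 = 14 \text{ years}$$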
I don't see how to read your comment as a response to mine.
I'm familiar with how Eliezer uses the term. I was more pointing to the move of saying something like "You are [slipping sideways out of reality], and this is bad! Stop it!" I don't think this usually results in the person (especially an already-confused person) reflecting and trying to become more skilled at epistemology and communication.
In fact, there's a loopy thing here where you expect someone who is 'slipping sideways out of reality' to caveat their communications with an explicit disclaimer admitting that they are doing so. It seems very unlikely to me that we'll see such behavior. Either the person has confusion and uncertainty and is trying to honestly communicate that uncertainty (which is different from 'slipping sideways'), or the person would disagree that they are 'slipping sideways' and claim (implicitly and explicitly) that what they are doing is tractable / matters.
I think James was implicitly tracking the fact that takeoff speeds are a feature of reality and not something people can choose. I agree that he could have made it clearer, but I think he's made it clear enough given the following line:
I suspect that even if we have a bunch of good agent foundations research getting done, the result is that we just blast ahead with methods that are many times easier because they lean on slow takeoff, and if takeoff is slow we’re probably fine if it’s fast we die.
And as for your last sentence:
If you don’t, you’re spraying your [slipping sideways out of reality] on everyone else.
It depends on the intended audience of your communication. James here very likely modeled his audience, implicitly, as people who would comprehend what he was pointing at without his having to explicitly state the caveats you list.
I'd prefer you ask why people think the way they do instead of ranting to them about 'moral obligations' and insinuating that they are 'slipping sideways out of reality'.
It seems like most people believe (implicitly or explicitly) that empirical research is the only feasible path forward to building a somewhat aligned, generally intelligent AI scientist. This is an underspecified claim, and given certain fully specified instances of it, I'd agree.
But this belief leads to the following reasoning: (1) if we don't eat all this free energy in the form of researchers+compute+funding, someone else will; (2) other people are clearly less trustworthy compared to us (Anthropic, in this hypothetical); (3) let's do whatever it takes to maintain our lead and prevent other labs from gaining power, while using whatever resources we have to also do alignment research, preferably in ways that also help us maintain or strengthen our lead in this race.
If you meet Buddha on the road...
I recommend messaging people who seem to have experience doing so, and requesting to get on a call with them. I haven't found any useful online content related to this, and everything I've learned in relation to social skills and working with neurodivergent people, I learned by failing and debugging my failures.
I hope you've at least throttled them or temporarily IP-blocked them for being annoying. It is not that difficult to scrape a website while respecting its bandwidth and CPU limits.
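For reference, here is a minimal sketch of what a "respectful" scraper can look like (the base URL, user-agent string, and delay value are placeholder assumptions, not details from this thread); the two key points are honoring robots.txt and rate-limiting requests:

```python
import time
import urllib.robotparser

import requests

BASE_URL = "https://example.com"   # placeholder target site
USER_AGENT = "polite-scraper/0.1"  # identify yourself so admins can contact or block you
DELAY_SECONDS = 5                  # fixed pause between requests caps the load you impose

# Check robots.txt before fetching anything.
robots = urllib.robotparser.RobotFileParser()
robots.set_url(f"{BASE_URL}/robots.txt")
robots.read()

def fetch(path: str) -> str | None:
    """Fetch one page if robots.txt allows it, then pause to rate-limit."""
    url = f"{BASE_URL}{path}"
    if not robots.can_fetch(USER_AGENT, url):
        return None
    response = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
    time.sleep(DELAY_SECONDS)  # a request every few seconds, not hundreds per second
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    page = fetch("/")
    print(len(page) if page is not None else "disallowed by robots.txt")
```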
I searched for one and found nothing. The Twitter conversation also seems to imply that there is no paper / technical report out yet.
What? Michael Vassar has not (AFAIK, from Zack M. Davis's descriptions) taken drugs or promoted becoming a drug addict or "killing yourself". If you listen to his interview with Spencer, you'll notice that he comes across as very sane and erudite, and clearly does not give off the unhinged 'Nick Land' vibe that you seem to be claiming he has or promotes.
You are directly contributing to the spread of misinformation and FUD here by making such claims without adequate confidence in, or knowledge of, the situation.