LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
I think I agree with that and didn't think of my post as claiming otherwise?
I might retract the exact phrasing of my reply comment.
I think I was originally using "overwhelmingly" basically the way you're using "galaxy brained", and I feel like I have quibbles about the exact semantics of that phrase that feel about as substantial as your concern about "overwhelming". (i.e. there is also a substantive difference between a very powerful brain hosted in a datacenter on Earth, and an AI with a galaxy of resources)
What I mean by "overwhelmingly superintelligent" is "so fucking smart that humanity would have to have qualitatively changed by a similar order-of-magnitude degree", which probably in practice means humans also have to have augmented their own intelligence, or have escalated their AI control schemes pretty far, carefully wielding significantly-[but-not-overwhelming/galaxy-brained]-AI that oversees all of Earth's security and is either aligned, or the humans are really good at threading the needle on control for quite powerful systems.
I think that's consistent with what Buck just said. (I interpreted him to be using superintelligent AI here to mean "near human level", and that those AIs would be able to develop successor galaxy-brain AI if they had enough resources, but, if you have sufficiently controlled them, they hopefully won't)
I think (unconfidently guessing) that Eliezer is more bullish than you on "they can do this with pretty limited resources", and this leads to him caring less about the distinction between "weakly superhuman" and "overwhelmingly superhuman".
I don't love "overwhelmingly superintelligent" because AIs don't necessarily have to be qualitatively smarter than humanity to overwhelm it.
I think this is more feature than bug – the problem is that it's overwhelming. There are multiple ways to be overwhelming; what we want to avoid is a situation where an overwhelming, unfriendly AI exists. One way is to not build AI of a given power level. The other is to increase the robustness of civilization. (I agree the term is fuzzy, but I think realistically the territory is fuzzy.)
I think it's a mistake to just mention that second thing as a parenthetical. There's a huge difference between AIs that are already galaxy-brained superintelligences and AIs that could quickly build galaxy-brained superintelligences or modify themselves into galaxy-brained superintelligences—we should try to prevent the former category of AIs from building galaxy-brained superintelligences in ways we don't approve of.
(did you mean "latter category?")
Were you suggesting something other than "remove the parentheses?" Or did it seem like I was thinking about it in a confused way? Not sure which direction you thought the mistake was in.
(I think "already overwhelmingly strong" and "a short hop away from being overwhelming strong" are both real worrisome. The latter somewhat less worrisome, although I'd really prefer not building either until we are much more confident about alignment/intepretability)
(I think at least part of what's going on is that there is a separate common belief that Superintelligent (compared to the single best humans) is enough to bootstrap to Overwhelming Superintelligence, and some of the MIRI vs Redwood debates are about how necessarily true that is)
I think it's better to say words that mean particular things than to try to fight a treadmill of super/superduper/hyper/etc.
Partly because I don't think a Superintelligence by that definition is actually, intrinsically, that threatening. I think it is totally possible to build That without everyone dying.
The "It" that is not possible to build without everyone dying is an intelligence that is either overwhelmingly smarter than all humanity, or, a moderate non-superintelligence that is situationally aware with the element of surprise such that it can maneuver to become overwhelmingly smarter than humanity.
I think meanwhile there are good reasons for people to want to talk about various flavors of weak superintelligence, and trying to force them to use some other word for that seems doomed.
Yeah I don't super stand by the Neanderthal comment, was just grabbing an illustrative example.
I just did a heavy-thinking GPT-5 search, which said "we don't know for sure; there's some evidence that, on an individual level, they may have been comparably smart to us, but we seem to have had the ability to acquire and share innovations." This might not be a direct intelligence thing, but, "having some infrastructure that makes you collectively smarter as a group" still counts for my purposes.
Ah, well I agree with that as the dominant failure mode, but I think we are on the harder level of "there are opaque, gestalt Be A Good Guy For Actualz qualities that are hard to distill into rules, and you need to successfully suss out 'are they actually a good guy?' without instead asking 'do they vibe like a good guy?'"
(I don't know that I believe that exactly for this context, but, I think that's at least often a situation I find myself in)