Point addressed with an unnecessarily polemical tone:
It is alright to consider it. I find it implausible that a wide range of accomplished researchers lay out arguments, collect data, interpret what has and hasn't been observed, and come to the conclusion that our current trajectory of AI development poses significant existential risk, potentially on short timelines, merely because a majority of them have a childhood trauma that blurs their epistemology on this particular issue but not on others where success criteria could already be observed.
I'm close to developing a postverbal trauma from having to observe all the mental gymnastics around the question of whether building a superintelligence without reliable methods to shape its behavior is actually dangerous. Yes, it is. No, that fact does not depend on whether Hinton, Bengio, Russell, Omohundro, Bostrom, Yudkowsky, et al. were held as babies.
Further context about the "recent advancements in the AI sector have resolved this issue" paragraph:
I assume they can't make a public statement, and that their choice of next occupation will be the clearest signal they can, and will, send to the public.
He has a stance towards risk that is a necessary condition for becoming the CEO of a company like OpenAI, but one that doesn't give you a high probability of building a safe ASI:
If everyone has their own asteroid impact, Earth will not be displaced, because the momentum vectors will cancel each other out on average*. This is important because it preserves Earth's trajectory equilibrium, which we have known for ages from animals jumping up and down all the time around the globe in their games of survival. If only a few central players get asteroid impacts, it's actually less safe! Safety advocates might actually cause the very outcomes that they fear!
*I have a degree in quantum physics and can derive everything from my model of the universe. This includes the moral and political imperatives that physics dictates and that most physicists therefore advocate.
We are decades if not centuries away from developing true asteroid impacts.
Given all the potential benefits, there is no way we are not going to redirect asteroids to Earth. Everybody will have an abundance of rare elements.
xlr8
"it's psychologically appealing to have a hypothesis that means you don't have to do any mundane work"
I don't doubt that something like inverse bike-shedding can be a driving force for some individuals to focus on the field of AI safety. I highly doubt that it explains why the field and its associated risk predictions exist in the first place, or that their validity should be questioned on such grounds, yet this seems to happen in the article, if I'm not entirely misreading it. From my point of view, there is already an overemphasis on psychological factors in the broader debate, and it would be desirable to get back to the object level, be it with theoretical or empirical research; both have their value. On this latter point we seem to partially agree, even though there's more than one path to arriving at it.