Sam Altman is almost certainly aware of the arguments and just doesn't agree with them. The OpenAI emails are helpful background on this, although at least back when OpenAI was founded, Elon Musk seemed to take AI safety relatively seriously.
> Elon Musk to Sam Teller - Apr 27, 2016 12:24 PM
>
> History unequivocally illustrates that a powerful technology is a double-edged sword. It would be foolish to assume that AI, arguably the most powerful of all technologies, only has a single edge.
>
> The recent example of Microsoft's AI chatbot shows how quickly it can turn incredibly negative. The wise course of action is to approach the advent of AI with caution and ensure that its power is widely distributed and not controlled by any one company or person.
>
> That is why we created OpenAI.
They also had a dedicated AI safety team relatively early on, and the emails explicitly state the reasons:
> - Put increasing effort into the safety/control problem, rather than the fig leaf you've noted in other institutions. It doesn't matter who wins if everyone dies. Related to this, we need to communicate a "better red than dead" outlook — we're trying to build safe AGI, and we're not willing to destroy the world in a down-to-the-wire race to do so.
The emails also explicitly reference this Slate Star Codex article, and I think Elon Musk follows Eliezer's Twitter.
I don't know, but I would suspect that Sam Altman and other OpenAI staff hold strong views in favor of what they're doing. Isn't there probably some existing commentary out there on what he thinks? I haven't checked. But it is also a mistake to assume that science is rational. It isn't, and people often hold differing views right up until the moment something becomes incontrovertibly established.
Further, there is the issue of strong conflicts of interest. If someone is being paid millions of dollars per year to pursue their current course of action (buying vacation homes on every continent, living the most luxurious lifestyle imaginable), then it is going to be hard for them to be objective about serious criticisms. This is why I can't just walk up to the Exxon Mobil CEO and say, 'Hey! Stop drilling, it's hurting the environment!' and expect them to do anything about it, even though the statement itself is completely uncontroversial.
The difference is that if the Exxon Mobil CEO internalizes that (s)he is harming the environment, (s)he has to go and get a completely new job, probably building dams or something. But if Sam Altman internalizes that he is increasing our chance of extinction, all he has to do is tell his capabilities researchers to work on alignment instead, and money still comes in; only now, less of it comes from ChatGPT subscriptions and more of it comes from grants from the Long-Term Future Fund. It's a much easier and lighter shift. Additionally, he knows that he can go...
When a decent rationalist walks up to Sam Altman, for example, and presents our arguments for AI doom, how does he respond? What stops us from simply walking up to the people in charge of these training runs, explaining the concept of AI doom to them very slowly and carefully while rebutting all their counterarguments, and helping them all coordinate to halt their AI capabilities development at the same time, giving our poor AI safety freelancers enough time to stumble and philosophize their way to a solution? What is their rejection of our logic?
Or is it simply that they have a standard rejection for everything?