Some quotes & a few personal opinions:
The FT reports:
Musk is also in discussions with a number of investors in SpaceX and Tesla about putting money into his new venture, said a person with direct knowledge of the talks. “A bunch of people are investing in it . . . it’s real and they are excited about it,” the person said.
...
Musk recently changed the name of Twitter to X Corp in company filings, as part of his plans to create an “everything app” under the brand “X”. For the new project, Musk has secured thousands of high-powered GPU processors from Nvidia, said people with knowledge of the move.
...
During a Twitter Spaces interview this week, Musk was asked about a Business Insider report that Twitter had bought as many as 10,000 Nvidia GPUs. "It seems like everyone and their dog is buying GPUs at this point," Musk said. "Twitter and Tesla are certainly buying GPUs." People familiar with Musk's thinking say his new AI venture is separate from his other companies, though it could use Twitter content as data to train its language model and tap Tesla for computing resources.
According to the xAI website, the initial team is composed of
Elon Musk
and they are "advised by Dan Hendrycks, who currently serves as the director of the Center for AI Safety."
According to reports, xAI will seek to create a "maximally curious" AI, and this also seems to be the main new idea for how to solve safety, with Musk explaining: "If it tried to understand the true nature of the universe, that's actually the best thing that I can come up with from an AI safety standpoint," ... "I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity."
My personal comments:
Sorry, but at face value, this just does not seem like a great plan from a safety perspective. It is similar to Elon Musk's previous big bet on how to make us safe by making AI open-source and widely distributed ("giving everyone access to new ideas").
Sorry, but given the Center for AI Safety's moves to position itself as some sort of "Center", a publicly representative institution of AI safety - including the name choice, and organizing the widely reported Statement on AI Risk - publicly associating its brand with xAI seems a strange choice.
Is Musk just way less intelligent than I thought? He still seems to have no clue at all about the actual safety problem. Anyone thinking clearly should figure out that this is a horrible idea within at most 5 minutes of thinking.
Obviously pure curiosity is a horrible objective to give to a superAI. "Curiosity" as currently defined in the RL literature is really something more like "novelty-seeking", and in the limit this will cause the AI to keep rearranging the universe into configurations it hasn't seen before, as fast as it possibly can...
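To make the "novelty-seeking" point concrete, here is a minimal sketch of one common formalization from the RL exploration literature, a tabular count-based intrinsic bonus (the class name and the beta parameter are illustrative, not taken from any xAI material). Note what the signal actually rewards: reaching states with low visit counts. An agent maximizing it has no reason to preserve any particular state, only to keep producing states it has not counted yet.

```python
from collections import defaultdict
import math

class CountBasedNoveltyBonus:
    """Tabular count-based exploration bonus: r_int(s) = beta / sqrt(N(s)).

    The intrinsic reward decays as a state is revisited, so the agent
    is pushed toward states it has seen less often -- i.e. "novelty",
    not anything resembling care for what those states contain.
    """

    def __init__(self, beta: float = 1.0):
        self.beta = beta
        self.counts = defaultdict(int)  # N(s): visit count per state

    def intrinsic_reward(self, state) -> float:
        self.counts[state] += 1
        return self.beta / math.sqrt(self.counts[state])

bonus = CountBasedNoveltyBonus(beta=1.0)
print(bonus.intrinsic_reward("A"))  # first visit: 1.0
print(bonus.intrinsic_reward("A"))  # revisit: reward decays (~0.707)
print(bonus.intrinsic_reward("B"))  # a never-seen state pays full bonus again
```

In the limit, an agent optimizing only this kind of signal keeps churning through unvisited configurations as fast as possible, which is the failure mode described above.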
Novelty is important. Going towards states you have not seen before is important. This will be a part of the new system, that's for sure.
But this team is under no obligation to follow whatever the current consensus might be (if there is a consensus). Whatever the state of the field, it can't claim a monopoly on how the words "curiosity" or "novelty" are interpreted, or on what the good ways to maximize them are... How one constrains going through a subset of all those novel states by aesthetics, by the need to take time and enjoy ("exploit") those new states, and by ...