xAI has ambitions to compete with OpenAI and DeepMind, but I don't feel like it has the same presence in the AI safety discourse. I don't know anything about its attitude to safety, or how serious a competitor it is. Are there good reasons it doesn't get talked about? Should we be paying it more attention?
I've asked similar questions before and heard a few things. I also have a few personal thoughts of my own that I'll share here unprompted. This topic is pretty relevant for me, so I'd be interested in which specific claims in both categories people agree or disagree with.
Things I've heard:
Personal thoughts:
A new Bloomberg article says xAI is building a datacenter in Memphis, planned to become operational by the end of 2025, and mentions a new-to-me detail: the datacenter targets 150 megawatts (more details on DCD). That implies a scale of roughly 100,000 GPUs, or about $4 billion in infrastructure, the bulk of the $6 billion xAI recently secured in its Series B.
This scale supports training runs that cost on the order of $1 billion in compute time (lasting a few months). And Dario Amodei has said that this is the scale of today, for models that are not yet deployed. That puts xAI about 18 months behind, a difficult place to rebound from unless long-horizon-task-capable AI that can do many jobs (a commercially crucial threshold that is not quite AGI) is still many more years away.
It seems the 100K H100s for the Memphis datacenter can plausibly come online around the end of 2024, and the planned release of Grok-3 gives additional indirect evidence that this might be the case. Meanwhile, OpenAI might have started training in May on a cluster that may also have 100K H100s. So I'm updating my previous guess of xAI being 18 months behind: at the 100K-H100 scale (above 4e26 FLOPs), they may be only 7-9 months behind.
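As a rough sanity check on how the 150 MW and 4e26 FLOP figures hang together: the per-GPU power and throughput numbers below are generic, commonly cited assumptions, not reported specs of the Memphis cluster.

```python
gpus = 100_000

# Power: an H100 draws ~700 W; roughly double that for CPUs, networking,
# and cooling overhead (~1.4 kW all-in per GPU -- an assumption).
total_power_mw = gpus * 1.4e3 / 1e6
print(f"power: {total_power_mw:.0f} MW")  # ~140 MW, consistent with the 150 MW target

# Compute: ~1e15 FLOP/s peak per H100 (BF16 tensor cores, a round figure),
# ~40% utilization, over a ~4-month run.
seconds = 120 * 86_400
total_flop = gpus * 1e15 * 0.4 * seconds
print(f"training compute: {total_flop:.1e} FLOP")  # ~4e26 FLOP
```

So a 150 MW, 100K-H100 cluster running for a few months lands right around the 4e26 FLOP threshold mentioned above.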
For some reason, current labs are not yet running $10 billion training runs and didn't build the necessary datacenters immediately. Such a run would take a million H100s and 1.5 gigawatts, so supply issues seem likely. There is also a lot of engineering detail to iron out, so scaling proceeds gradually.
But some of this might be risk aversion, an unwillingness to waste capital where a slower pace makes better use of it. Since a new contender has no other choice, we'll get to see whether it's possible to leapfrog scaling after all. And Musk has an affinity for impossible deadlines (not necessarily for meeting them), so the experiment will at least be attempted.
I wonder if anyone has considered or built prediction markets that can pay out repeatedly. An example could be "people who fill in this feedback form will say that they would recommend the event to others", where each yes response causes shorts to pay longs, and each no response causes longs to pay shorts.
You'd need some mechanism to cap losses. I guess one way to model it is as a series of markets of the form "the Nth response will say yes", and a convenient interface to trade in the first N markets at a single price. That way, after a few payouts your exposure automatically closes. That said, it might make more sense to close out after a specified number of losses, rather than a specified number of resolutions (i.e. no reason to cap the upside) but it's less clear to me whether that structure has any hidden complexity.
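The "first N markets at a single price" structure can be sketched as a toy settlement rule. Everything here (the class name, the flat entry price, the resolution-count cap) is a hypothetical design choice for illustration, not an existing market's API:

```python
class RepeatedPayoutPosition:
    """A long or short position across the first n markets of the form
    'the k-th response will say yes', all entered at one price."""

    def __init__(self, side, n_contracts, price):
        assert side in ("long", "short") and 0.0 < price < 1.0
        self.side, self.remaining, self.price = side, n_contracts, price
        self.pnl = 0.0

    def on_response(self, says_yes):
        """Settle the next open contract; exposure closes after n payouts."""
        if self.remaining == 0:
            return  # position already fully resolved
        self.remaining -= 1
        # Each binary contract pays $1 on a yes; longs paid `price` for it.
        long_pnl = (1.0 - self.price) if says_yes else -self.price
        self.pnl += long_pnl if self.side == "long" else -long_pnl


pos = RepeatedPayoutPosition("long", n_contracts=3, price=0.6)
for ans in [True, False, True, True]:  # the 4th response is past the cap
    pos.on_response(ans)
print(round(pos.pnl, 2))  # 0.4 - 0.6 + 0.4 = 0.2
```

The close-after-k-losses variant would decrement a separate counter only on losing resolutions; the rest of the bookkeeping stays the same, which suggests it doesn't add much hidden complexity at the settlement level (pricing it is another matter).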
The advantages over a single market that resolves to the percentage of yesses are probably pretty marginal? They'd matter most where there isn't going to be an obvious end time, but no examples come to mind immediately.
In general there's a big space of functions from consequences to payouts. Most of them probably don't make good "products", but maybe more do than are currently explored.
In markets like these, "cap losses" is equivalent to "cap wins" - the actual money is zero-sum, right? There certainly exist wagers that scale ($10 per point difference, on a sporting event, for instance), and a LOT of financial investing has this structure (stocks have no theoretical maximum value).
I think your capping mechanism gets most of the value - maybe not "the Nth response is yes", but markets at a few different response-count sizes, with thresholds on the averages: "wins if there are over 10,000 responses and at least 65% say yes, loses if there are over 10,000 responses and fewer than 65% say yes, money returned if there are fewer than 10,000 responses", with wagers allowed at several different size limits.
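That settlement rule is simple enough to write down directly. A minimal sketch, with the example's 10,000 / 65% figures as hypothetical defaults (and treating exactly 10,000 responses as enough, since the quoted rule doesn't pin that case down):

```python
def settle(n_responses, yes_fraction, min_responses=10_000, threshold=0.65):
    """Settlement for the threshold contract described above. Returns the
    long side's outcome: 'win', 'lose', or 'void' (stakes returned)."""
    if n_responses < min_responses:
        return "void"  # too few responses: money returned
    return "win" if yes_fraction >= threshold else "lose"

print(settle(12_000, 0.70))  # win
print(settle(12_000, 0.60))  # lose
print(settle(8_000, 0.90))   # void
```

A family of these contracts at different `min_responses` sizes approximates the repeated-payout structure while keeping each individual contract binary.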
> In markets like these, "cap losses" is equivalent to "cap wins" - the actual money is zero-sum, right?
Overall, yes; per-participant, no. For example, if everyone caps their loss at $1, I can still win $10 by betting against ten different people, though of course at most 1 in 11 market participants can do this.
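Made concrete, with a toy eleven-person market (purely illustrative numbers):

```python
# Eleven participants; one bets $1 against each of the other ten, and all
# ten bets resolve in their favor. Every individual loss is capped at $1,
# yet one participant wins $10 -- while the market as a whole nets to zero.
payouts = [10] + [-1] * 10

assert all(p >= -1 for p in payouts)  # per-participant loss cap holds
print(max(payouts), sum(payouts))     # 10 0
```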
> There certainly exist wagers that scale ($10 per point difference, on a sporting event, for instance), and a LOT of financial investing has this structure (stocks have no theoretical maximum value).
Yeah, although the prototypical prediction market has contracts with two possible valuations, even existing prediction markets also support contracts that settle to a specific value. The thing that felt new to me about the idea I had was that you could have prediction contracts that pay out at times other than the end of their life, though it's unclear to me whether this is actually more expressive than packaged portfolios of binary, payout-then-disappear contracts.
(Portfolios of derivatives that can be traded atomically are nontrivially more useful than only being able to trade one "leg" at a time, and are another thing that exists in traditional finance but mostly doesn't exist in prediction markets. My impression, though, is that these composite derivatives are often just a marketing ploy by banks to sell clients things that are tricky to price accurately, so they can hide a bigger markup; I'm not sure a more co-operative market would bother with them.)