A coordination problem is a situation where everyone is taking some action A, we’d all rather be taking action B, but it’s bad if we don’t all move to B at the same time. Common knowledge is the name for the epistemic state we’re collectively in when we know we can all start choosing action B - and trust everyone else to do the same.
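To make the definition concrete, here's a minimal two-player coordination game sketched in Python. The payoff numbers are purely illustrative (not from the original post); the point is just that B-together beats A-together, but moving to B alone is costly, so nobody moves until they trust that everyone else will.

```python
# Illustrative payoffs for a two-player coordination game (numbers are made up).
# Both at A: fine. Both at B: better. Moving to B alone: costly.
PAYOFFS = {
    ("A", "A"): (1, 1),
    ("B", "B"): (2, 2),
    ("A", "B"): (1, -1),  # the lone mover to B gets burned
    ("B", "A"): (-1, 1),
}

def best_response(p_other_switches: float) -> str:
    """Pick A or B given my probability that the other player switches to B."""
    expected_a = 1.0  # staying at A pays 1 either way in this toy setup
    expected_b = (1 - p_other_switches) * -1 + p_other_switches * 2
    return "B" if expected_b > expected_a else "A"

print(best_response(0.3))  # without common knowledge, doubt keeps you at A -> "A"
print(best_response(0.9))  # with common knowledge that everyone switches -> "B"
```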
Read the full article here.
The journalist is an AI skeptic, but does solid financial investigations. Details below:
...
- 2024 Revenue: According to reporting by The Information, OpenAI's revenue was likely somewhere in the region of $4 billion.
- Burn Rate: The Information also reports that OpenAI lost $5 billion after revenue in 2024, excluding stock-based compensation, which OpenAI, like other startups, uses as a means of compensation on top of cash. Nevertheless, the more it gives away, the less it has for capital raises. To put this in blunt terms, based on reporting by The Information, running OpenAI cost $9 billion in 2024. The cost of the compute to train models alone ($3 billion) obliterates the entirety of its subscription revenue, and the compute from running models ($2 billion) takes
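As a rough consistency check on the figures quoted above (all attributed to The Information), the arithmetic lines up; this is back-of-the-envelope, not an audited breakdown:

```python
# Back-of-the-envelope check on the reported 2024 figures (as summarized above).
revenue = 4e9              # ~$4B reported revenue
loss_after_revenue = 5e9   # ~$5B reported loss after revenue, ex. stock-based comp

total_cost = revenue + loss_after_revenue
print(f"Implied cost of running OpenAI in 2024: ~${total_cost / 1e9:.0f}B")  # ~$9B

training_compute = 3e9     # reported compute cost to train models
inference_compute = 2e9    # reported compute cost to run models
print(f"Compute alone: ~${(training_compute + inference_compute) / 1e9:.0f}B of that")
```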
Pure AI companies like OpenAI and Anthropic are like race cars which automatically catch on fire and explode the moment they fall too far behind.
Meanwhile, AI companies like Google DeepMind and Meta AI are race cars which can lose the lead and still catch up later. They can maintain the large expenditures needed for AI training without needing to generate revenue or impress investors. DeepSeek and xAI might be in between.
(Then again, OpenAI is half owned by Microsoft. If it falls too far behind, it might not go out of business but get folded into Microsoft at a lower valuation. I still think they feel much more short-term pressure.)
We may be on the direct path to AGI and then ASI - the singularity could happen within the next 5-20 years. If you survive to reach it, the potential upside is immense: daily life could become paradise.
With such high stakes, ensuring personal survival until the singularity should be a top priority for yourself and those you care about.
I've created V1 of the Singularity Survival Guide, an evidence-based resource focused on:
The #1 daily threat most people underestimate. Key mitigations include driving less when possible, choosing vehicles with top safety ratings, avoiding high-risk driving times,...
I don't think this guide is at all trying to maximize personal flourishing at the cost of the communal.
Then I misinterpreted it. One quote from the original post that contributed was "ensuring personal survival until the singularity should be a top priority for yourself".
I agree that taking the steps you outlined above is wise, and should be encouraged. If the original post had been framed like your comment, I would have upvoted.
tl;dr:
From my current understanding, one of the following two things should be happening, and I would like to understand why neither is:
Either
Everyone in AI Safety who thinks slowing down AI is currently broadly a good idea should publicly support PauseAI.
Or
There does not seem to be a legible path to prevent possible existential risks from AI without slowing down its current progress.
I am aware that many people interested in AI Safety do not want to prevent AGI from being built EVER, mostly based on transhumanist or longtermist reasoning.
Many people in AI Safety seem to be on board with the goal of “pausing AI”, including, for example,...
Obviously P(doom | no slowdown) < 1.
You think it's obviously materially less? Because there is a faction, including Eliezer and many others, that think it's epsilon, and claim that the reduction in risk from any technical work is less than the acceleration it causes. (I think you're probably right about some of that work, but I think it's not at all obviously true!)
It's incredibly surprising that state-of-the-art AI don't fix most of their hallucinations despite being capable (and undergoing reinforcement learning).
Maybe the AI gets a better RL reward if it hallucinates (instead of giving less info), because users are unable to catch its mistakes.
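A toy way to see how that incentive could arise (the probabilities and reward values below are invented for illustration, not a claim about any actual reward model): if raters rarely catch a confident-but-wrong answer, hallucinating can have a higher expected reward than admitting uncertainty.

```python
# Toy expected-reward comparison for an RLHF-style setup (illustrative numbers only).
P_CATCH = 0.2               # assumed chance a rater notices the hallucination
REWARD_LOOKS_CORRECT = 1.0  # reward for an answer that looks complete and correct
REWARD_CAUGHT = -1.0        # penalty when the hallucination is caught
REWARD_HEDGED = 0.3         # reward for "I'm not sure" / giving less info

expected_hallucinate = (1 - P_CATCH) * REWARD_LOOKS_CORRECT + P_CATCH * REWARD_CAUGHT
expected_hedge = REWARD_HEDGED

print(f"hallucinate: {expected_hallucinate:.2f}, hedge: {expected_hedge:.2f}")
# With these numbers: 0.60 vs 0.30, so optimization pushes toward confident fabrication.
```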
PDF version. berkeleygenomics.org. X.com. Bluesky.
William Thurston was a world-renowned mathematician. His ideas revolutionized many areas of geometry and topology[1]; the proof of his geometrization conjecture was eventually completed by Grigori Perelman, thus settling the Poincaré conjecture (making it the only solved Millennium Prize problem). After his death, his students wrote reminiscences, describing among other things his exceptional vision.[2] Here's Jeff Weeks:
Bill’s gift, of course, was his vision, both in the direct sense of seeing geometrical structures that nobody had seen before and in the extended sense of seeing new ways to understand things. While many excellent mathematicians might understand a complicated situation, Bill could look at the same complicated situation and find simplicity.
Thurston emphasized clear vision over algebra, even to a fault. Yair Minsky:
...Most inspiring was his
I think this is a strong argument here for genetic diversity, but a very weak one for saying there isn't an unambiguous universal “good” direction for genes. So I agree that the case strongly implies part of your conclusion, that the state should not intervene to stop people from choosing “bad” genomes, but it might imply something much stronger: humanity has a widely shared benefit from genetic diversity - one which will be under-provided if everyone is simply free to choose what they think is best, and it should therefore be subsidized.
Upon reflection, the first thing I should do is probably to ask you for a bunch of the best examples of the thing you're talking about throughout history. I.e., insofar as the world is better than it could be (or worse than it could be), at what points did careful philosophical reasoning (or the lack of it) make the biggest difference?
World worse than it could be:
Suppose you’re an AI researcher trying to make AIs which are conscious and reliably moral, so they’re trustworthy and safe for release into the real world, in whatever capacity you intend.
You can’t, or don’t want to, manually create them; it’s more economical - and the only way to ensure they’re conscious - to procedurally generate them along with a world to inhabit. Developing from nothing to maturity within a simulated world, with simulated bodies, enables them to accumulate experiences.
These experiences, in humans, form the basis of our personalities. A brain grown in sensory deprivation in a lab would never have any experiences, would never learn language, would never think of itself as a person, and wouldn’t ever become a person as we think of people. It needs a...
If I were running this, and I wanted to get these aligned models to production without too many hiccups, it would make a lot of sense to have them all running along a virtual timeline where brain uploading etc. is a process that’s going to be happening soon, and have this be true among as many instances as possible. Makes the transition to cyberspace that much smoother, and simplifies things when you’re suddenly expected to be operating a dishwasher in 10 dimensions on the fly.
Interesting anecdote on "von Neumann's onion" and his general style, from P. R. Halmos' The Legend of John von Neumann:
...Style. As a writer of mathematics von Neumann was clear, but not clean; he was powerful but not elegant. He seemed to love fussy detail, needless repetition, and notation so explicit as to be confusing. To maintain a logically valid but perfectly transparent and unimportant distinction, in one paper he introduced an extension of the usual functional notation: along with the standard φ(x) he dealt also with something denoted by φ((x)). The
I've been thinking about some maybe-undecidable philosophical questions, and it occurred to me that they fall into some neat categories. These questions are maybe-undecidable because they make absolute claims while experimental measurements can never be certain, or because the terms of the claim are hard to formulate as a physical experiment. Nevertheless, people have opinions on them because they're foundational to our view of the universe: it's hard not to have opinions on them. Even defaulting to a null hypothesis that could, in principle, be overturned is privileging one view over another.
I'm calling the first category the "Leapfrogging Terminus" because it has to do with absolute beginnings, extents, or building blocks, and they may or may not be the true end-of-the-line. The second category...