I disagree pretty strongly with the headline advice here. An ideal response would be to go through a typical sample of stories from some news source - for instance, I keep MIT Tech Review in my feedly because it has surprisingly useful predictive alpha if you invert all of its predictions. But that would take way more effort than I'm realistically going to invest right now, so absent that, I'll just abstractly lay out where I think this post's argument goes wrong.
The main thing most news tells me is what people are talking about, and what people are saying about it. Sadly, "what people are talking about" has very little correlation with what's important, and "what people are saying about it" is overwhelmingly noise, even when true (which it often isn't). In simulacrum terms, news is overwhelmingly a simulacrum 3 thing, and tells me very little about the underlying reality I'm actually interested in.
Sure, maybe there's some useful stuff buried in the pile of junk, but why sift through it? I do not need to know a few days or weeks earlier that AISI is being gutted. I do not need to know a few weeks earlier that the slowdown OpenAI found in pretraining scaling had been formally reported on. (Also, David and I had already noticed the signs of OpenAI having noticed that slowdown back in May of 2024, though even if we hadn't suspected until it was reported in November, I still wouldn't need to know about it a few weeks earlier.) Just waiting for Zvi to put it in his newsletter is more than enough.
MIT Tech Review doesn't break much news. Try Techmeme.
Re "what people are talking about"
Sure, the news is biased toward topics people already think are important because you need readers to click, etc. But you are people, so you might also think that at least some of those topics are important. Even if the overall news is mostly uncorrelated with your interests, you can filter aggressively.
Re "what they're saying about it"
I think you have in mind articles that are mostly commentary, analysis, opinion. News in the sense I mean it here tells you about some event, action, deal, trend, etc that wasn't previously public. News articles might also tell you what some experts are saying about it, but my recommendation is just to get the object-level scoop from the headline and move on.
Re whether it's worth the time to sift through
Skimming headlines is fast. Maybe the news tends to be less action-relevant for your research, but I bet AI safety collectively wastes time and misses out on establishing expertise by being behind the news. Reading Zvi's newsletter falls under what I'm advocating for (even though it's mostly that what-people-are-saying commentary, the object-level news still comes through).
TLDR: Instead of scrutinizing minor errors, ask what process generated the text in front of you, and then update accordingly. Focus on headlines and source attributions.
If you work in AI safety, I think you should probably read a bit more news than you do now (though giving advice like this is perilous because surely some of you are news junkies who need to hear the opposite message).[1]
In particular, I think you should subscribe to Techmeme and skim the headlines in the daily news roundup. Techmeme is a news aggregator that will collate the biggest tech stories of the day in a convenient email, and—crucially—they write their own headlines to tell you what's actually new in the story. They even break out the AI news into its own section most days, so you can skip over the boring updates about Salesforce or whatever.[2]
In my experience, critical news is slow to diffuse throughout our community. For example, I think it took many days for the Axios report that the US government was planning to gut AISI to reach many people. Similarly, the understanding that pretraining scaling has slowed down took a while to take hold after it was first reported at OpenAI and then at the other labs.[3]
Reading the news will help you stay in touch with reality, for example by noticing the pace of AI product releases and the firehose of cash that venture capitalists in Silicon Valley are aiming at AI every day. It will also help you stay in touch with normies and their cares.
The faster you keep up with the news, the better you can adapt your own plans, intervene in unfolding situations, and steer the still-plastic discourse. And when we all do it, we unlock conversations about topics before they get too stale to be interesting or actionable.
Here's a bad reason not to read the news: you think journalists are unethical (Cade Metz doxxing Scott Alexander, the Guardian's hit piece on Lighthaven, etc). I tend to think these bad articles are the exception, rather than the rule, but even if you thought all journalists were crooks, you can still learn from them, the same way you still listen to [insert person you think is bad but sometimes has useful information].
A slightly better bad reason is that you don't trust the epistemics of journalists. You've experienced Gell-Mann Amnesia, so you think the news is always inaccurate. A few responses:
First, consider whether the articles you read in your domain of expertise are actually wrong or simply vague enough to be accurate. These are different sins. Most journalists aren't writing for experts, so they compromise on some nuance in a way that frustrates expert readers.
I'll probably write more about the psychology and incentives of journalists later, but for now I'll just say that journalists really hate being wrong. They try to vet their sources and get multiple sources for important claims. When they have to make a correction, they gnash their teeth and it ruins their day. They are trying to be accurate, even if that comes at the price of being vague because they're working on a preposterously short deadline.
For example, you might have quibbles with the exact language in the AISI and pretraining scaling articles, but the basic ideas were there, and you could get the most important information just from the headlines—even if you're an expert in AI.
Surely sometimes journalists are just flat-out wrong, but the second response is that there is heterogeneity among journalists, and some are experts who can write for other experts. Most readers, though, are in the habit of ignoring bylines (how many working journalists can you name?). There's a big difference between a young journalist who's new to their beat and the seasoned cybersecurity reporter who's been covering the topic for 30 years. You could figure out who does good reporting and read their articles, blacklist a few baddies, and stick to the headlines for everyone else.
But say you don't buy either of those, and we assume all journalists write nothing but noise. There would still be tons of bits in the news because of the information coming from sources. That is, journalists are talking to the people who are in the know and trying to bring the most important information to you on a low-fidelity channel, but savvy readers can still recover the signal. You just have to use your ✨media literacy✨.
Let me give you two examples.
In a follow-up piece to The Information's article on pretraining slowing down, they wrote:
This journalist got demolished on Twitter because duh, everyone has been doing hyperparameter sweeps since the dawn of time. But use your media literacy! What probably happened here was the journalist asked an engineer on Gemini "what are you doing about pretraining scaling plateauing?" and the engineer said "we're revisiting our hyperparameter search to find improvements we overlooked in the past."
And now the dead-obvious statement has become something at least a little useful. You can start asking questions like "huh, I wonder which hyperparameters they think are going to yield improvements?" and "why did they overlook them in the first place?" and "are the other labs reacting the same way?"
Here's a second example. In Scott's recent explainer about the Musk-OpenAI lawsuit, he wrote:
But look at this article from the Financial Times:
Applying media literacy, you should read this as "the journalist probably got a tip that OpenAI's board and lawyers are considering this sort of thing, but their direct sources were unwilling to confirm it with any attribution." Still a rumor, to be sure! But further evidence beyond what Scott mentioned.
Helpfully, journalists try to be transparent by disclosing the number of sources and the sources' relationships to the evidence, to the greatest extent they can while respecting their anonymity. Journalists negotiate with their sources to get the most informative attribution possible because they know it increases credibility with readers. This makes it easier for you to back out the source's information from the article, and update on that.
So even in the worst case for journalistic epistemics, I think you should still read the headlines and check the attributions for the most important claims. Lower your standards for the news from something like "academic papers" to something closer to "decent tweets."
If you're hungry for more AI news after reading Techmeme, here are my recommendations.[4]
Daily:
Weekly:
Less frequent:
I read these and more, and while there's significant overlap, every edition covers something the others missed.
Note that news publications put out plenty of articles that don't count as news in the sense I mean it here. News is when something that wasn't public before is made public. That excludes many think pieces, op-eds, explainers, profiles, features, columns, and newsletters.
The main downside is that they are showing you media coverage, so they will show you an article about a press release or blog post, but not the press release or blog post itself. If you click through to an article and realize it's just riffing on some other document, you're probably best served just going to the primary source. (Unless the article is more readable or has additional reporting. E.g., a press release about an investment in an AI company might include the size of the investment, but the journalist might have scooped the valuation.)
Now, maybe people were right to be skeptical of those reports, but I don't think people even knew about them for a while. Some people didn't believe it until Ilya eulogized pretraining scaling laws at NeurIPS in December, and that's... fair. But to learn that Ilya said it, you might have had to read the news.
Again, these are only for news, not the other great writing out there. Special shoutout to the Epoch newsletter, which has excellent analysis so far, but is not intended for breaking news.