
TLDR: Instead of scrutinizing minor errors, ask what process generated the text in front of you, and then update accordingly. Focus on headlines and source attributions.

If you work in AI safety, I think you should probably read a bit more news than you do now (though giving advice like this is perilous because surely some of you are news junkies who need to hear the opposite message).[1]

In particular, I think you should subscribe to Techmeme and skim the headlines in the daily news roundup. Techmeme is a news aggregator that will collate the biggest tech stories of the day in a convenient email, and—crucially—they write their own headlines to tell you what's actually new in the story. They even break out the AI news into its own section most days, so you can skip over the boring updates about Salesforce or whatever.[2]

In my experience, critical news is slow to diffuse throughout our community. For example, I think it took many days for the Axios report that the US government was planning to gut AISI to reach many people. Similarly, it took a while for the understanding that pretraining scaling has slowed down to take hold after it was reported, first at OpenAI and then at the other labs.[3]

Reading the news will help you stay in touch with reality, for example by noticing the pace of AI product releases and the firehose of cash that venture capitalists in Silicon Valley are aiming at AI every day. It will also help you stay in touch with normies and their cares. 

The faster you keep up with the news, the better you can adapt your own plans, intervene in unfolding situations, and steer the still-plastic discourse. And when we all do it, we unlock conversations about topics before they get too stale to be interesting or actionable.

Here's a bad reason not to read the news: you think journalists are unethical (Cade Metz doxxing Scott Alexander, the Guardian's hit piece on Lighthaven, etc.). I tend to think these bad articles are the exception rather than the rule, but even if you thought all journalists were crooks, you could still learn from them, the same way you still listen to [insert person you think is bad but sometimes has useful information].

A slightly better bad reason is that you don't trust the epistemics of journalists. You've experienced Gell-Mann Amnesia, so you think the news is always inaccurate. A few responses:

First, consider whether the articles you read in your domain of expertise are actually wrong or simply vague enough to be accurate. These are different sins. Most journalists aren't writing for experts, so they compromise on some nuance in a way that frustrates expert readers.

I'll probably write more about the psychology and incentives of journalists later, but for now I'll just say that journalists really hate being wrong. They try to vet their sources and get multiple sources for important claims. When they have to make a correction, they gnash their teeth and it ruins their day. They are trying to be accurate, even if, on a preposterously short deadline, that accuracy comes at the price of vagueness.

For example, you might have quibbles with the exact language in the AISI and pretraining scaling articles, but the basic ideas were there, and you could get the most important information just from the headlines—even if you're an expert in AI.

Surely journalists are sometimes just flat-out wrong, but the second response is that there is heterogeneity among journalists, and some are experts who can write for other experts. But most readers are in the habit of ignoring bylines (how many working journalists can you name?). There's a big difference between a young journalist who's new to their beat and a seasoned cybersecurity reporter who's been covering the topic for 30 years. You could figure out who does good reporting and read their articles, blacklist a few baddies, and stick to the headlines for everyone else.

But say you don't buy either of those and assume all journalists write nothing but noise. There would still be tons of bits in the news because of the information coming from sources. That is, journalists talk to the people in the know and try to bring the most important information to you over a low-fidelity channel, but savvy readers can still recover the signal. You just have to use your ✨media literacy✨.

Let me give you two examples.

In a follow-up piece to The Information's article on pretraining slowing down, they wrote:

Now that we know OpenAI isn’t the only artificial intelligence developer seeing a slowdown in improvements using traditional “scaling” methods, it’s worth looking at all the ways companies are trying to make up for that.

. . . 

One way that staff at Google have been trying to eke out gains is by focusing more on settings that determine how a model learns from data during pre-training, a technique known as hyperparameter tuning.

This journalist got demolished on Twitter because duh, everyone has been doing hyperparameter sweeps since the dawn of time. But use your media literacy! What probably happened here was the journalist asked an engineer on Gemini "what are you doing about pretraining scaling plateauing?" and the engineer said "we're revisiting our hyperparameter search to find improvements we overlooked in the past."

And now the dead-obvious statement has become something at least a little useful. You can start asking questions like "huh, I wonder which hyperparameters they think are going to yield improvements?" and "why did they overlook them in the first place?" and "are the other labs reacting the same way?"
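If you've never run one, a hyperparameter sweep just means training with many combinations of settings and keeping whichever scores best. Here's a minimal sketch of the idea; the toy objective and the particular settings are made up, standing in for a real (and vastly more expensive) pretraining run:

```python
import itertools

# Toy stand-in for a real training run: returns a pretend loss
# (lower is better), minimized near lr=3e-4 and batch_size=256.
def toy_train(learning_rate, batch_size):
    return abs(learning_rate - 3e-4) * 1e3 + abs(batch_size - 256) / 256

# The settings the sweep searches over; real sweeps might also cover
# warmup schedule, weight decay, optimizer choice, and so on.
search_space = {
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "batch_size": [128, 256, 512],
}

# Grid search: "train" on every combination and keep the best.
configs = [
    dict(zip(search_space.keys(), values))
    for values in itertools.product(*search_space.values())
]
best = min(configs, key=lambda cfg: toy_train(**cfg))
print("best config:", best)  # {'learning_rate': 0.0003, 'batch_size': 256}
```

This kind of search is table stakes, which is exactly why the interesting question isn't whether Google runs sweeps but what the engineers think they missed the first time around.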

Here's a second example. In Scott's recent explainer about the Musk-OpenAI lawsuit, he wrote:

I heard rumors of a new offer where the nonprofit keeps a controlling interest, but I can’t find a credible source.

But look at this article from the Financial Times:

Chief executive Sam Altman and other board members are weighing a range of new governance mechanisms after OpenAI converts into a more conventional for-profit company, according to people with direct knowledge of the discussions.

Giving the non-profit’s board outsized voting power would ensure it retained control of the restructured company and was able to over-rule other investors including existing backers such as Microsoft and SoftBank.

Applying media literacy, you should read this as "the journalist probably got a tip that OpenAI's board and lawyers are considering this sort of thing, but their direct sources were unwilling to confirm it with any attribution." Still a rumor, to be sure! But further evidence beyond what Scott mentioned.

Helpfully, journalists try to be transparent by disclosing the number of sources and the sources' relationships to the evidence, to the greatest extent they can while respecting their anonymity. Journalists negotiate with their sources to get the most informative attribution possible because they know it increases credibility with readers. This makes it easier for you to back out the source's information from the article, and update on that.

So even in the worst case for journalistic epistemics, I think you should still read the headlines and check the attributions for the most important claims. Lower your standards for the news from something like "academic papers" to something closer to "decent tweets."

If you're hungry for more AI news after reading Techmeme, here are my recommendations.[4]

Daily:

  • The Information's briefing (generic tech business news, but skim for AI. The Information's only free newsletter.)
  • Matt Levine (occasionally covers the intersection of AI and finance; skim for AI.)

Weekly:

  • Transformer (news roundups [ideally for people with some AI familiarity], occasional analysis and exclusive news.)
  • Center for AI Safety (covers a few recent developments, normie friendly.)
    • The more technical ML Safety Newsletter has also been great; apparently it's back for real this time.
  • Center for AI Policy (covers a few recent policy-relevant developments, normie friendly.)
  • MIT Tech Review (a true newsletter: it's a more casual vehicle for MIT Tech's reporting on AI.)
  • Artificial Ignorance by Charlie Guo (roundups and occasional essays.)
  • Understanding AI by Timothy B Lee (reporting from a former tech journalist.)

Less frequent:

  • The Obsolete Newsletter by Garrison Lovely (timely analysis and reporting.)
  • Don't Worry About the Vase by Zvi Mowshowitz (for the love of god, skim. Twitter roundups and takes on recent AI developments. Check out the table of contents and triage from there. I especially like the rhetorical innovation segment. Not friendly for people new to AI safety.)
  • Import AI by Jack Clark (roundups of research from the past couple weeks. Useful for catching papers you might have missed and seeing how [Jack Clark thinks] they connect to safety. Not friendly for people new to AI.)

I read these and more, and while there's significant overlap, every edition covers something the others missed.

  1. ^

    Note that news publications put out plenty of articles that don't count as news in the sense I mean it here. News is when something that wasn't public before is made public. That excludes many think pieces, op-eds, explainers, profiles, features, columns, and newsletters.

  2. ^

    The main downside is that they are showing you media coverage, so they will show you an article about a press release or blog post, but not the press release or blog post itself. If you click through to an article and realize it's just riffing on some other document, you're probably best served just going to the primary source. (Unless the article is more readable or has additional reporting. E.g., a press release about an investment in an AI company might include the size of the investment, but the journalist might have scooped the valuation.)

  3. ^

    Now, maybe people were right to be skeptical of those reports, but I don't think people even knew about them for a while. Some people didn't believe it until Ilya eulogized pretraining scaling laws at NeurIPS in December, and that's...fair. But to learn that Ilya said it, you might have had to read the news.

  4. ^

    Again, these are only for news, not the other great writing out there. Special shoutout to the Epoch newsletter, which has excellent analysis so far, but is not intended for breaking news.
