Zach Stein-Perlman

AI strategy & governance. ailabwatch.org. ailabwatch.substack.com

Sequences

Slowing AI

Comments

I'm a master artisan of great foresight, you're taking time to do something right, they're a perfectionist with no ability to prioritize. Source: xkcd.

This is evidence that Tyler Cowen has never been wrong about anything.

Two blogs that regularly have some such content are Transformer and Obsolete.

Pitching my AI safety blog: I write about what AI companies are doing in terms of safety. My best recent post is AI companies' eval reports mostly don't support their claims. See also my websites ailabwatch.org and aisafetyclaims.org collecting and analyzing public information on what companies are doing; my blog will soon be the main way to learn about new content on my sites.

I don't understand the footnote.

In 99.9% of cases, the market resolves N/A and no money changes hands. In 0.1% of cases, the normal thing happens.

What's wrong with this reasoning? Who pays for the 1000x?

Yes, but this decreases traders' alpha by 99.9%, right? At least for traders who are constrained by the number of markets where they have an edge (maybe some traders are more constrained by risk or something).
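A minimal sketch of the arithmetic behind this point (my illustration; the dollar figure and variable names are assumptions, not from the thread): if a market resolves N/A 99.9% of the time and no money changes hands in those cases, a trader's expected profit from an edge is diluted 1000x.

```python
# Hedged sketch, not a claim about any real market's mechanics:
# assume trades are fully reversed on N/A resolution, so only the
# 0.1% of normally-resolving cases pay out.

p_normal = 0.001   # probability the market resolves normally (from the thread)
edge = 100.0       # assumed expected profit ($) conditional on normal resolution

expected_profit = p_normal * edge  # N/A cases contribute nothing
print(expected_profit)             # the $100 edge shrinks to ~$0.10
```

So for a trader limited by how many markets they can find an edge in, per-market expected alpha falls by the same 99.9% as the N/A probability.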

Domain: AI safety from the perspective of what AI companies are doing and should do

Links: AI Lab Watch and AI Safety Claims Analysis

Author: Zach Stein-Perlman

Type: website

Anthropic's model cards . . . are substantially more detailed and informative than the model cards of other AI companies.

My weakly held cached take: I agree on CBRN/bio (and of course alignment), and I think Anthropic is pretty similar to OpenAI/DeepMind on cyber and AI R&D (and scheming capabilities), at least if you consider material outside the model card (evals papers plus open-sourced evals).
