By Peter S. Park, Simon Goldstein, Aidan O’Gara, Michael Chen, and Dan Hendrycks [This post summarizes our new report on AI deception, available here] Abstract: This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false...
A month ago, I predicted that AI systems will be able to access safety plans posted on the Internet and use them for their own purposes. If true, it follows that a likely misaligned-by-default AGI could be able to exploit our safety plans, likely to our detriment. The post was...
TL;DR: A strategy aiming to elicit latent knowledge (or to make any hopefully robust, hopefully generalizable prediction) from interpreting an AGI’s fine-grained internal data may be unlikely to succeed, given that the complex system of an AGI’s agent-environment interaction dynamics will plausibly turn out to be computationally irreducible. In general,...
Cross-posted from the EA Forum. TL;DR: It is plausible that AGI safety research should be assumed compromised once it is posted on the Internet, even in a purportedly private Google Doc. This is because the corporation creating the AGI will likely be training it on as much data as possible....
Produced during the Stanford Existential Risk Initiative (SERI) ML Alignment Theory Scholars (MATS) Program of 2022, under John Wentworth TL;DR: Suppose that a team of researchers somehow aligned an AGI of human-level capabilities within the limited collection of environments that are accessible at that level. To corrigibly aid the researchers,...
Midjourney generating an HD image of "a medium-length sleeve t-shirt". It in fact looks like a t-shirt that has both long sleeves and short sleeves. Produced as part of the SERI MATS Program 2022 under John Wentworth General Idea There are ideas that people can learn more or less easily...
Produced during the Stanford Existential Risk Initiative (SERI) ML Alignment Theory Scholars (MATS) Program of 2022, under John Wentworth “Overconfidence in yourself is a swift way to defeat.” - Sun Tzu TL;DR: Escape into the Internet is probably an instrumental goal for an agentic AGI. An incompletely aligned AGI may...