TL;DR: Last year, Vael Gates interviewed 97 AI researchers about their perceptions of the future of AI, focusing on risks from advanced AI. Among other questions, researchers were asked about the alignment problem, the problem of instrumental incentives, and their interest in AI alignment research. In a follow-up survey 5-6 months later, 51% reported that the interview had a lasting effect on their beliefs. Our new report analyzes these interviews in depth. Below, we describe our primary results and some implications for field-building. Check out the full report (interactive graph version) and a complementary writeup on whether we can predict a researcher's interest in alignment!
[Link to post on the EA Forum]
Overview
This report (interactive graph version) is a...