I feel kind of conflicted about including Daniel Kahneman on this paper. By the standards of scientific publication, you can't really have a co-author who wasn't alive or available to consent to the contents of the paper when the final version was released. I don't know what edits were made to this paper since Kahneman's death, and it was of course not possible to get his consent for any of them.
It's plausible the paper was already mostly written when they got Daniel Kahneman's input, but I do assign some probability to it now saying things that Daniel would not have actually endorsed, and that seems bad.
Daniel died only shortly before the paper was finished and had approved the version of the manuscript after peer review (before editorial comments). I.e., he had approved all the substantial content. Including him seemed like clearly the right thing to me.
Supposing he was a serious contributor to the paper (which seems unlikely IMO), it seems bad to cut his contribution just because he died.
So, I think the right choice here will depend on how much being an author on this paper is about endorsement or about contribution.
(Even if he didn't contribute much of the content, I still think it might be fine to keep him as an author.)
It's unfortunate that authorship can mean these two pretty different things.
Yeah, I agree. I do think it's unlikely he was a major contributor to this paper, so it's more about endorsement. Agree that if someone did serious work on a paper and then dies, they should probably still be included (though IMO they should be included with an explicit footnote saying they died during the writing of the paper and might not endorse everything in the final version).
This is a valid concern, but I'm fairly certain Science (the journal, not the field) handled this well, largely because they're incentivized to do so (otherwise it could have very bad optics for them) and must have dealt with this situation several times before. I also happen to know one of the senior authors, who is significantly above average in conscientiousness.
In a new Science paper, the authors provide concise summaries of AI risks and offer recommendations for governments.
I think the piece is quite well-written. It concisely explains a lot of relevant arguments, including arguments about misalignment and AI takeover. I suspect this is one of the best standalone pieces to help people understand AI risks and some (IMO reasonable) governance interventions.
The piece also has a very respectable cast of authors, including Bengio and Hinton. (Not to say that this fact should affect your assessment of whether its claims are true; I mention it because it will affect how some audiences, e.g. policymakers, interpret the piece.)
Some relevant quotes below:
Explanation of AGI & importance of preparing for AGI risks
Explanation of misalignment & AI takeover risks
Calls for governance despite uncertainty
Government insight
Safety Cases
Mitigation measures