This Sunday at noon PT, Daniel Kokotajlo will be running a meetup focused on the commitment races problem in AI. What is it? How does it relate to broader issues like equilibrium selection, bargaining, and multi-multi alignment? How dire is it? And how should we go about trying to solve it?

From Daniel's post:

Consequentialists can get caught in commitment races, in which they want to make commitments as soon as possible. When consequentialists make commitments too soon, disastrous outcomes can sometimes result. The situation we are in (building AGI and letting it self-modify) may be one of these times unless we think carefully about this problem and how to avoid it.

This will be an online meetup, held in LessWrong's Walled Garden in Bayes Hall.

http://garden.lesswrong.com?code=CBjh&event=daniel-kokotaljo-on-commitment-races-in-ai

February 7th, 12pm PT


From the comments, the Google doc from the session: https://docs.google.com/document/d/1qg3nEMjON2JKQVK8VLh2s-CV_Lm3SMB8ihP2snSSWPM/edit