torekp comments on Rodney Brooks talks about Evil AI and mentions MIRI [LINK] - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (6)
I place most of my probability weighting on far-future AI too, but I would not endorse Brooks's call to relax. There is a lot of work to be done on safety, and the chances of successfully engineering safety go up if work starts early. Granted, much of that work needs to wait until it is clearer which approaches to AGI are promising. But not all.