Has Eliezer publicly and satisfactorily responded to attempted rebuttals of the analogy to evolution?
I refer to these posts:

- https://optimists.ai/2023/11/28/ai-is-easy-to-control/
- https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn
- https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer

My (poor, possibly mistaken) understanding is that the argument runs as follows: SGD optimizes for "predicting the next token," and we select for systems with very low loss by modifying every single parameter in the neural network (which basically defines the network...
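For concreteness, here is a minimal sketch (assuming PyTorch; the toy model and data are hypothetical stand-ins, not anyone's actual setup) of what "modifying every single parameter" means here: each SGD step backpropagates the next-token loss to all of the network's parameters and nudges every one of them, rather than selecting over a sparse genome the way evolution does.

```python
# A minimal sketch of the training loop being described: SGD computes a
# gradient of the next-token loss with respect to EVERY parameter and
# adjusts all of them on every step. Model and data here are toy
# stand-ins for illustration only.
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                      nn.Linear(d_model, vocab_size))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (64,))  # toy token stream
inputs, targets = tokens[:-1], tokens[1:]     # predict the next token

logits = model(inputs)
loss = loss_fn(logits, targets)
loss.backward()  # gradient flows to every parameter
opt.step()       # every parameter is updated, not a sparse subset
```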