Vladimir_Nesov comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
We'd need to do something specific with the world, there's no reason any one person gets to have the privilege, and creating an agent for every human and having them fight it out is probably not the best possible solution.
I don't think that adequately addresses lukeprog's concern. Even granting that no one person should have the privilege of deciding the world's fate, and that creating an AI for every human to fight it out is not the best solution (although personally I don't think a would-be FAI designer should rule these out as possible solutions just yet), that still leaves many other possibilities for how to decide what to do with the world. I think the proper name for this problem is "should_AI_designer", not "should_human", and you need some other argument to justify the position that it makes sense to talk about "should_human".
I think Eliezer's own argument is given here: