timtyler comments on Q&A with Shane Legg on risks from AI - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If so, they keep pretty quiet about it! I expect for them it would be "more convenient" if those superintelligences whose ultimate values did not mention humans would just destroy the world. If many of them would be inclined to keep some humans knocking around, that dilutes the "save the world" funding pitch.
I think it's epistemically dangerous to guess at the motivations of "them" when there are so few people and all of them have diverse views. There are only a handful of Research Fellows, and it's not as though they have blogs where they talk about these things. SingInst is still really small and really diverse.
Right - so, to be specific, we have things like this:
I think I have to agree with the Europan Zugs in disagreeing with that.