Is this proof that only intelligent life favors self preservation?
Joseph Jacks' argument here at 50:08 is:
1) If Humans let Super Intelligences do "whatever they want", they won't try to kill all the Humans (because they're automatically nice?)
2) If Humans make any (even feeble) attempts to protect themselves from Super Intelligences, then the Super Intelligences can and will have reason to try to kill all the Humans.
3) Humans should definitely build Super Intelligences and let them do whatever they want... what could go wrong? yolo!
P. If humans try to restrict the behavior of a superintelligence, then the superintelligence will have a reason to kill all humans.
Ah yes, the second part of Jacks' argument as I presented it was a bit hyperbolic. (Though I feel the point stands: he seems to suggest that any attempt to restrict Super Intelligences would "create the conditions for an antagonistic relationship" and would give them a reason to harm Humans.) I've updated the post with your suggestion. Thanks for the review and clarification.
Point 3) is meant to emphasize the following:
This is, of course, an option that Humans could take. But the question remains: would this course of action be likely to keep risks to Humans and Human society at acceptable levels? Would it favor Humans' self-preservation?