Squark comments on How to Study Unsafe AGI's safely (and why we might have no choice) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Earlier you said: "People already suck at telling whether Vitamin D is good for you, yet some people seem to believe that they can have non-negligible confidence about the power and behavior of artificial general intelligence." Now you're making high-confidence claims about AGI. Also, I remind you that the discussion started from my criticism of the proposed AGI safety protocols. If there is no UFAI risk, then the safety protocols are pointless.
Not in ways that have to do with expected utility calculation.
Risk 4, since it corresponds to the highest expected utility.
My utility function is bounded (I think), so you can only Pascal-mug me so much.
I have no idea whether it is underfunded. I can try to think about it, but it has little to do with the present discussion.