andreas comments on Can we create a function that provably predicts the optimization power of intelligences? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Then you're arguing that, if your notion of "physically plausible environments" includes a certain class of adversarially optimized situations, worst-case analysis won't work because all worst cases are equally bad.
They could all be vaporized by a nearby supernova or something similar before they have a chance to do anything, yup.