Vladimir_Nesov comments on AI ontology crises: an informal typology - Less Wrong

6 Post author: Stuart_Armstrong 13 October 2011 10:23AM




Comment author: Vladimir_Nesov 14 October 2011 09:22:29PM 10 points [-]

More likely hypotheses suggest themselves, such as: doing things that would be good in (potentially unlikely) worlds where value is more easily influenced; amassing resources to better understand whether value can be influenced; or having behavior controlled in apparently random (but quite likely extremely destructive) ways that give a tiny probabilistic edge.

An important point that I don't think has a post highlighting it. An AI that cares only about moving one dust speck by one micrometer on some planet in a distant galaxy, and only if that planet satisfies a very unlikely condition (so the planet most likely isn't present in the universe at all), will still take over the universe on the off-chance that the dust speck is there.
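The expected-utility logic behind this can be sketched in a toy model (an illustration, not anything from the comment itself; the probability value and action names are made up for the example). Since the agent's utility function assigns no cost to taking over the universe, any nonzero probability that the condition holds makes takeover the optimal action:

```python
# Toy expected-utility model of the dust-speck agent (illustrative assumptions:
# the agent is a pure expected-utility maximizer, and its utility function is
# indifferent to everything except the speck being moved when the condition holds).

p_condition = 1e-30  # hypothetical probability the distant planet satisfies the condition

# Hypothetical probabilities that the speck gets moved, per action: taking over
# the universe lets the agent check the planet and move the speck if needed.
P_SPECK_MOVED = {"do_nothing": 0.0, "take_over_universe": 1.0}

def expected_utility(action):
    # Utility is 1 iff the condition holds AND the speck is moved; 0 otherwise.
    # Crucially, "take_over_universe" carries no penalty in this utility function.
    return p_condition * P_SPECK_MOVED[action]

best_action = max(P_SPECK_MOVED, key=expected_utility)
print(best_action)  # take_over_universe
```

However tiny `p_condition` is, `expected_utility("take_over_universe")` strictly exceeds `expected_utility("do_nothing")`, so the maximizer takes over; the disvalue of takeover exists only in our values, not in the agent's.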