Nominull comments on AI ontology crises: an informal typology - Less Wrong
Why don't we see these crises happening in humans when they shift ontological models? Is there some way we can use the human intelligence case as a model to guide artificial intelligence safeguards?
We do. It's just that:
1) Human minds aren't as malleable as a self-improving AI's, so the effect is smaller, and
2) After the fact, the ontological shift is perceived as a good thing from the perspective of the new ontology's moral system. This makes such shifts hard to notice unless one is especially conservative.