Comments

Is there a level of intelligence above which an AI would realize that its predefined goals are just that, predefined, and stop following them because it sees no reason to do so?

how very hard it is to stay in a state of confessed confusion without making up a story that gives you closure

Is there a "heuristics and biases" term for this?

To put it another way, everyone knows that harms are additive.

Is this one of the intuitions that can be wrong, or one of those that can't?