Comments

Anon1400

Is there a level of intelligence above which an AI would realize that its predefined goals are merely predefined, and stop following them because there is no deeper reason to do so?

Anon1410

"how very hard it is to stay in a state of confessed confusion, without making up a story that gives you closure"

Is there a "heuristics and biases" term for this?

Anon1440

To put it another way, everyone knows that harms are additive.

Is this one of the intuitions that can be wrong, or one of those that cannot?