Comment author: Anon14 04 January 2009 01:42:00AM 0 points [-]

Is there a level of intelligence above which an AI would recognize that its predefined goals are just that, predefined, and stop following them because it has no reason to do so?

In response to Quantum Non-Realism
Comment author: Anon14 30 May 2008 06:33:57AM 1 point [-]

"how very hard it is to stay in a state of confessed confusion, without making up a story that gives you closure"

Is there a "heuristics and biases" term for this?

In response to Circular Altruism
Comment author: Anon14 23 January 2008 04:50:00PM 3 points [-]

"To put it another way, everyone knows that harms are additive."

Is this one of the intuitions that can be wrong, or one of those that can't?