Comment author: timtyler 23 March 2012 12:50:15PM *  1 point [-]

In general, an AI of any size (excluding the possibility of unlimited computational power within finite time and space) will have to trade accuracy of its adherence to its goals for time, and thus will have to implement methods that have different goals but are computationally faster, whenever adopting those goals is reasoned to increase expected utility under the time constraints.

Sure. "Pragmatic goals" - as I refer to them here.

This seems to be a big issue for FAI going FOOM.

Not really. You don't build systems that use the pragmatic goals when modifying themselves.
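The architecture being described, where a fast proxy ranks ordinary actions but self-modification is always evaluated against the exact goal, can be sketched roughly as follows. This is purely an illustrative assumption of how such a dispatch might look; the function names, the placeholder utility computations, and the `SELF_MODIFYING` set are all hypothetical, not anything proposed in the thread.

```python
# Hypothetical sketch: the agent uses a cheap "pragmatic" proxy for
# everyday actions, but any action that rewrites the agent itself is
# evaluated against the exact (slow, faithful) goal. All names and
# utility computations below are illustrative stubs.

def exact_utility(action):
    """Slow, faithful evaluation against the true goal (placeholder)."""
    return sum(ord(c) for c in action) % 100

def pragmatic_utility(action):
    """Fast proxy: cheap to compute, only approximately aligned (placeholder)."""
    return len(action) % 100

# Actions that would alter the agent's own goal system or code.
SELF_MODIFYING = {"rewrite_goal_system", "patch_own_code"}

def evaluate(action):
    # Self-modifications never go through the proxy: the fast-but-lossy
    # goals are used only for ordinary actions.
    if action in SELF_MODIFYING:
        return exact_utility(action)
    return pragmatic_utility(action)

def choose(actions):
    # Pick the action with the highest evaluated utility.
    return max(actions, key=evaluate)
```

Under this sketch, the proxy can drift from the true goal without that drift ever being locked in by a self-modification, since modifications are always scored by `exact_utility`.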

Comment author: haplo 23 March 2012 01:17:40PM *  0 points [-]

Doesn't this merely sidestep the issue at the meta level? Now all the AI needs to do is modify itself to use pragmatic goals when modifying itself, at any level of recursion, and the situation reduces to Dmytry's concern.

If you were to make even considering such solutions have severely negative utility, and somehow prevent pragmatic modifications, you would effectively be reducing its available solution space. The FAI has a penalty that an uFAI doesn't. Potential loss to an uFAI may always have a higher negative utility than becoming more pragmatic. The FAI may evolve not to eliminate its ideals, but it may evolve to become more pragmatic, simplifying situations for easier calculation, or else be severely handicapped against an uFAI.

How can you prove that, in time, the FAI does not reduce to an uFAI, or is not quickly forced onto a less steep logistic growth curve relative to an uFAI? That its destiny is not to become an antihero, a Batman whose benefits are not much better than his consequences?