# shminux comments on Computation Hazards - Less Wrong Discussion

13 June 2012 09:49PM



Comment author: 14 June 2012 01:07:58AM 2 points

Any such algorithm for detecting suffering in arbitrary Turing machines would seem to run afoul of Turing/Rice; a heuristic algorithm could probably either reject most suffering Turing machines (but is that acceptable?) or reject all suffering Turing machines (but does that cripple the agent from a practical standpoint?), and such a heuristic might take more processing power than running the Turing machines in question...
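The tradeoff can be illustrated with a minimal sketch, assuming (purely for illustration) that "suffering" is modeled as a program emitting a SUFFER token; all names and machines below are hypothetical. A bounded-step heuristic is decidable, but only by accepting false negatives:

```python
def heuristic_detects_suffering(program, max_steps=1000):
    """Run `program` (a generator yielding tokens) for at most max_steps.

    Returns True if SUFFER is observed within the step budget, False
    otherwise. The False case may be a false negative: the program might
    suffer only after max_steps, or never halt at all.
    """
    gen = program()
    for _, token in zip(range(max_steps), gen):
        if token == "SUFFER":
            return True
    return False

def early_sufferer():
    yield "SUFFER"

def late_sufferer():
    # Suffers only after a million steps -- past the heuristic's budget.
    for _ in range(10**6):
        yield "OK"
    yield "SUFFER"

heuristic_detects_suffering(early_sufferer)   # True
heuristic_detects_suffering(late_sufferer)    # False -- a false negative
```

Raising `max_steps` shrinks the false-negative class but, as the comment notes, the heuristic's own cost then approaches that of simply running the machines in question.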

Comment author: 14 June 2012 02:03:13AM -3 points

A few points:

• Simulated humans are not arbitrary Turing machines.

• To make any progress toward FAI, one has to figure out how to define human suffering, including simulated human suffering. It might not be easy, but I see it as an unavoidable step. (Which also means that if you can prove that human suffering is non-computable, you basically prove that FAI is impossible.)

• Analogous to pain asymbolia, it should be possible to modify the simulated human to report (and possibly block) potential "suffering" without feeling it.

• Real humans don't take a lot of CPU cycles to identify and report suffering, so neither should simulated humans.

• A non-suffering agent might not be as good as one which had loved and lost, but it is certainly much more useful than a blanket prohibition against simulating humans, as proposed in the OP.

Comment author: 14 June 2012 08:50:47PM 1 point

> Simulated humans are not arbitrary Turing machines.

Arbitrary Turing machines are arbitrary simulated humans. If you want to cut the knot with a 'human' predicate, that's just as undecidable.
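The undecidability of a 'human' predicate follows the standard Rice's-theorem shape: a total decider for any non-trivial semantic property would yield a halting decider. A minimal sketch of the construction, where `decide_human`, `known_human_behavior`, and the toy machines are all hypothetical illustrations:

```python
def halting_decider_from(decide_human, P, x):
    """If decide_human were a total decider for the semantic property
    'behaves like known_human_behavior', this function would decide
    whether P(x) halts -- impossible, so no such decider can exist."""
    def candidate(inp):
        P(x)                              # diverges iff P(x) diverges
        return known_human_behavior(inp)  # 'human' behavior iff P(x) halts
    return decide_human(candidate)

def known_human_behavior(inp):
    return "hello"

def toy_decide_human(f):
    # A cheat that only works on programs that halt quickly; it shows the
    # shape of the construction, not an actual (impossible) decider.
    return f(0) == "hello"

def halting_P(x):
    return x

halting_decider_from(toy_decide_human, halting_P, 0)   # True: P(0) halts
```

With a non-halting `P`, `candidate` never reaches the human behavior, so any correct `decide_human` would be reporting exactly whether `P(x)` halts.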

> Which also means that if you can prove that human suffering is non-computable, you basically prove that FAI is impossible.

Then we have other strategies available. For example, 'prevent any current human from suffering or creating another human which might then suffer'.

> Analogous to pain asymbolia, it should be possible to modify the simulated human to report (and possibly block) potential "suffering" without feeling it.

Is there a way to do this perfectly without running into undecidability? Even if you had the method, how would you know when to apply it...

Comment author: 14 June 2012 03:24:45AM 0 points

I can't help but think of TRON: Legacy when considering the ethics of creating simulated humans that are functionally identical to biological humans. For those unfamiliar with the film, a world composed of data turns out to be sufficient to enable the spontaneous generation of human-like entities. The creator of the data world finds these entities too imperfect, and creates a data-world version of himself tasked with making the data world perfect according to an arcane definition of 'perfection' that the creator himself has not fully formed. The data-world version of the creator then begins a genocide of the entities, replacing them with human-like programs that are merely perfect executions of crafted code; if the programs exhibit individuality, they are deleted. The movie asserts this genocide is wrong.

If an AI is powerful enough to mass-generate simulations that are functionally identical to a biological human, such that they are capable of original ideas, compassion, and suffering; if an AI can create simulated humans unique enough that their thoughts and actions over thousands of iterations of the same event are not predictable with 100% accuracy; then would it not be generating Homo sapiens sapiens en masse?

If indeed not, then I fail to see why mass creation and subsequent genocide over many iterations is the sort of behaviour mitigators of computational hazards wish to encourage.

Comment author: 14 June 2012 05:28:34AM 0 points

Off topic, but the TRON sequel has at least two distinct friendly AI failures.

Flynn creates CLU and gives him simple-sounding goals, which ends badly.

Flynn's original creation of the grid gives rise to unexpected and uncontrolled intelligence of at least human level.

Comment author: [deleted] 14 June 2012 05:34:16AM -1 points

> Simulated humans are not arbitrary Turing machines.

We still don't have guaranteed decidability for properties of simulations.

> To make any progress toward FAI, one has to figure out how to define human suffering,

There are so many problems in FAI that have nothing to do with defining human suffering or any other object level moral terms. Metaethics, goal invariant self modification, value learning and extrapolation, avoiding wireheading, self deception, blackmail, self fulfilling prophecies, representing logical uncertainty correctly, finding a satisfactory notion of truth, and many more.

> Which also means that if you can prove that human suffering is non-computable, you basically prove that FAI is impossible

This sounds like an appeal to consequences, but putting that aside: undecidability is a limitation of minds in general, not just FAI, and yet, behold, quite productive, non-oracular AI researchers exist. Did you know that we can compute uncomputable information? Don't declare things impossible so quickly. We know that friendlier-than-torturing-everyone AI is possible. No dream of FAI should fall short of that, even if FAI is "impossible".

> Real humans don't take a lot of CPU cycles to identify and report suffering, so neither should simulated humans.

Even restricting simulated minds to things that look like present-day humans, what makes you think that humans have any general capacity to recognize their own suffering? Most mental activity is not consciously perceived.