gwern comments on Computation Hazards - Less Wrong

Post author: Alex_Altair, 13 June 2012 09:49PM




Comment author: gwern, 14 June 2012 01:15:38AM, 1 point

If your consequentialist ethics cares only about suffering sentient beings, then unless the simulations can affect the simulating agent in some way and render its actions less optimal, creating suffering beings is the only way there can be computation hazards.

If your ethics cares about other things, such as piles made of a prime number of rocks, then computations affecting those are hazards too; and if the simulations can affect the simulator, that obviously opens a whole can of worms.

(For example, there's apparently a twisty problem of 'false proofs' in the advanced decision theories, where simulating a possible proof leads the agent to a suboptimal choice; or the simulator could stumble upon a highly optimized program which takes it over. I'm sure there are other scenarios like that which I haven't thought of.)

Comment author: JGWeissman, 14 June 2012 01:32:37AM, 0 points

> If your consequentialist ethics cares only about suffering sentient beings, then unless the simulations can affect the simulating agent in some way and render its actions less optimal, creating suffering beings is the only way there can be computation hazards.

Agreed. The sentence I quoted seemed to indicate that Alex thought he had a counterexample, but it turns out we were just using different definitions of "computation hazards".

Comment author: Alex_Altair, 20 June 2012 08:06:35PM, 0 points

The only counterexample I can think of is where the computation invents cures or writes symphonies and then, in the course of its run, indifferently disposes of them. That could be considered a large negative consequence of "mere" computation, but yeah, not really.