eurleif

Well, this post heavily hints that a system's moral standing is related to whether it is conscious. Eliezer mentions a need to tackle the hard problem of consciousness in order to figure out whether the simulations performed by our AI cause immoral suffering. Those simulations would be basically isolated: their inputs may be chosen based on our real-world requirements, but they don't necessarily correspond to what's actually going on in the real world; and their outputs would presumably be used in aggregate to make decisions, but not pushed directly into the outside world.

Maybe moral standing requires something else too, like self-awareness, in addition to consciousness. But wouldn't there still be a critical instruction in a self-aware, conscious program, at which the conscious experience of being self-aware is produced? Wouldn't the same argument apply to any criterion given for moral standing in a deterministic program?

eurleif

Here's a reductio ad absurdum against computers being capable of consciousness at all. It's probably wrong, and I'd appreciate feedback on why.

Suppose there is a consciousness-producing computer program which experiences its own isolated, deterministic world. There must be some critical instruction in the program which causes consciousness to occur: an instruction such that, if we halt the program immediately before it is executed, consciousness will not occur, and if we halt immediately after it is executed, consciousness will occur.
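To make the setup concrete, here is a minimal sketch of such a program as a toy deterministic machine, stepped one instruction at a time. The instruction set, the program, and the choice of a "critical" step index are all invented for illustration; nothing about this toy actually produces consciousness.

```python
class ToyVM:
    """A toy deterministic machine whose entire world is its own state."""

    def __init__(self, program, memory):
        self.program = list(program)  # fixed instruction sequence
        self.memory = dict(memory)    # the program's whole "world"
        self.pc = 0                   # index of the next instruction

    def step(self):
        """Execute exactly one instruction."""
        op, *args = self.program[self.pc]
        if op == "set":
            reg, value = args
            self.memory[reg] = value
        elif op == "add":
            dst, src = args
            self.memory[dst] += self.memory[src]
        self.pc += 1

    def run_until(self, k):
        """Halt immediately before instruction k would be executed."""
        while self.pc < k:
            self.step()

program = [("set", "a", 1), ("set", "b", 2), ("add", "a", "b")]
vm = ToyVM(program, {})
vm.run_until(2)  # halted just before the (hypothetical) critical instruction
```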

If we halt the program before executing the critical instruction, but save its state, consciousness should still not occur; and if we load the state back up again and compute the results of executing the critical instruction, consciousness should then occur. It seems obvious enough that the interruption shouldn't prevent the program's consciousness, since the program is still executed in full, just with a delay in the middle.
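In code, this is just ordinary checkpoint/restore of a deterministic interpreter. Continuing the hypothetical ToyVM sketch above:

```python
import pickle

# Save the machine's complete state just before the critical instruction.
snapshot = pickle.dumps((vm.program, vm.memory, vm.pc))

# ...arbitrarily later, possibly after the original process has exited:
program2, memory2, pc2 = pickle.loads(snapshot)
vm2 = ToyVM(program2, memory2)
vm2.pc = pc2
vm2.step()  # compute the result of executing only the critical instruction
```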

What if we subsequently load the state taken immediately prior to executing the critical instruction onto another computer? Will it produce a second conscious experience, identical to the first? The second computer is executing precisely the same code on precisely the same data as the first, so it seems reasonable to conclude that it will have the same effects. If the second computer doesn't produce consciousness, that would seem to imply that the universe has an eternally-persistent memory of every conscious experience which has ever occurred, and uses it to prevent recurrences: a pretty bizarre implication.
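Determinism makes the "second computer" case concrete: restore the same byte-for-byte snapshot twice and step once, and the two runs are indistinguishable. (Again using the hypothetical ToyVM from above.)

```python
def restore(snapshot):
    """Rebuild a ToyVM from a saved snapshot."""
    program, memory, pc = pickle.loads(snapshot)
    vm = ToyVM(program, memory)
    vm.pc = pc
    return vm

first, second = restore(snapshot), restore(snapshot)
first.step()
second.step()
# Nothing distinguishes the two executions of the critical instruction.
assert first.memory == second.memory and first.pc == second.pc
```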

However, if the second computer does produce consciousness, this means that once you've executed a conscious program, causing its conscious experiences to occur a second time requires essentially no computation: you just have to execute one instruction in the simplest instruction set you like.

If that doesn't seem weird to you, consider the practical implication: you could print out the memory dump of a conscious program and produce consciousness by simulating the critical instruction by hand. If the program suffers, you could produce real, morally-relevant suffering by performing a single operation on a sheet of paper – and then erase your pencil marks and do it again, producing more suffering. Can consciousness really be so easy to create?