Consider the teleporter as a machine that does two things: deconstructs an input i and constructs an output o. 
If you divide the machine logically into these two functions, d and c, responsible for deconstruction and construction respectively, there are four ways the machine can work or fail (sketched in code after the list):

If neither d nor c works, the machine doesn't do anything.

If d works but c doesn't, the machine unambiguously kills the input person: they are destroyed and nothing is created.

If d doesn't work but c does, the machine makes a copy of the person. If a being walked into the machine and found that this had happened, the input being would, in my opinion, be justified in saying that they oppose being deconstructed.

If both d and c work, we have a functioning teleporter. This is the same as the previous situation, except that being i is also destroyed. I find it hard to believe that this is somehow preferable from the perspective of the input being.
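
For concreteness, here is a minimal Python sketch of the four cases as a truth table over the two stages; the function and outcome labels are purely illustrative, not part of any actual teleporter proposal:

```python
from itertools import product

def teleporter_outcome(d_works: bool, c_works: bool) -> str:
    """Classify what the machine does, given whether its
    deconstruction (d) and construction (c) stages work."""
    if not d_works and not c_works:
        return "nothing happens"
    if d_works and not c_works:
        return "input person destroyed, no output"
    if not d_works and c_works:
        return "input person intact, copy created"
    return "input person destroyed, copy created (teleportation)"

# Enumerate all four combinations of the two stages.
for d_works, c_works in product([False, True], repeat=2):
    print(f"d={d_works}, c={c_works}: {teleporter_outcome(d_works, c_works)}")
```

The point of the table is that the last two rows differ only in whether the input person is destroyed, which is what makes the fourth case hard to prefer.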

I think there may be a good argument that we should accept that this leads to some sort of nihilism about the value or coherence of our existence as discrete individuals. Personally, though, I maintain too much uncertainty to be okay with stepping into a "teleporter"-type system that is more novel than going to sleep (which does, after all, destroy the being that goes to sleep and create a being that wakes up).

This seems to be a feature of KataGo specifically, not of all Go AIs. If the attack also works against implementations of AlphaGo, then you can start to claim that there are no superhuman Go AIs.