Oh, true for the "uploaded prisoner" scenario, I was just thinking of someone who'd deliberately uploaded themselves and wasn't restricted - clearly suicide would be possible for them.
But even for the "uploaded prisoner", given sufficient time it would be possible - there's no absolute impermeability to information anywhere, is there? And where there's information flow, control is surely ultimately possible? (The image that just popped into my head was something like training mice, via flashing lights, to gnaw the wires :) )
But that reminds me of the problem of trying to isolate an AI once built.
I was just thinking of someone who'd deliberately uploaded themselves and wasn't restricted - clearly suicide would be possible for them.
That is not self-evident to me at all. If you don't control the hardware (and the backups), how exactly would that work? As a parallel, imagine yourself as a sole mind, without a body. How would your sole mind kill itself?
And where there's information flow, control is surely ultimately possible?
Huh? Of course not. Information is information and control is control. Don't forget that as you accumulate information, so do your jailers.
http://www.theatlantic.com/technology/archive/2015/05/immortal-but-damned-to-hell-on-earth/394160/
With such long periods of time in play (if we succeed), even improbable hellish scenarios that might befall us become increasingly probable.
With the probability of death never quite reaching 0, despite advanced science, death might yet be inevitable.
But the same applies also to a hellish life in the meanwhile. And the longer the life, the more likely the survivors will envy the dead. Is there any safety in this universe? What's the best we can do?