All of sh4dow's Comments + Replies

sh4dow

If you press a thermometer against the flywheel, it will heat up fairly quickly... The flywheel just behaves like a "white body" in terms of radiation.

sh4dow

But isn't it possible that a simulation which lost its consciousness would still retain memories of consciousness, and that those memories, even without access to real consciousness, would be sufficient to generate potentially even 'novel' content about it?

nshepperd
That's possible, although then the consciousness-related utterances would be of the form "oh my, I seem to have suddenly stopped being conscious" or the like (if you believe that consciousness plays a causal role in human utterances such as "yep, I introspected on my consciousness and it's still there"). This implies that such a simulation would not have been a faithful synaptic-level WBE, since its macro-level behaviour would clearly differ.
sh4dow

I would play the lottery: if I win more than $10M, I take the black box and leave. Otherwise I look in the black box: if it is full, I also take the small one; if not, I leave with just the empty black box. Since this strategy should be inconsistent, a time-traveling Omega would either have to not choose me for his experiment or let me win the lottery for sure (assuming time works in similar ways as in HPMOR). If I get nothing, it would prove Omega wrong (and tell me quite a bit about how Omega, and time, work). If his prediction is correct, though, I win $11,000,000, which is way better than either 'standard' variant.
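A minimal sketch of the payoffs this strategy produces, assuming the standard Newcomb setup (the black box holds $1,000,000 iff Omega predicted one-boxing; the small box always holds $1,000); the function names and the exact jackpot are illustrative, not part of the problem:

```python
# Sketch of the conditional strategy above, under assumed standard Newcomb
# payoffs: the black box holds $1,000,000 iff Omega predicted one-boxing,
# and the small transparent box always holds $1,000.

BLACK_BOX = 1_000_000  # contents iff Omega predicted one-boxing
SMALL_BOX = 1_000      # always present

def strategy(lottery_winnings: int, black_box_full: bool) -> str:
    """The commenter's conditional policy."""
    if lottery_winnings > 10_000_000:
        return "one-box"  # take only the black box and leave
    # otherwise peek: take both boxes iff the black box is full
    return "two-box" if black_box_full else "one-box"

def payoff(lottery_winnings: int, predicted_one_boxing: bool) -> int:
    """Total winnings given Omega's prediction and the lottery outcome."""
    black_box_full = predicted_one_boxing
    choice = strategy(lottery_winnings, black_box_full)
    total = lottery_winnings + (BLACK_BOX if black_box_full else 0)
    if choice == "two-box":
        total += SMALL_BOX
    return total

# Without a lottery win, neither prediction matches the actual choice:
for predicted in (True, False):
    choice = strategy(0, black_box_full=predicted)
    consistent = (choice == "one-box") == predicted
    print(predicted, choice, "consistent" if consistent else "inconsistent")

# With a >$10M jackpot, a one-boxing prediction is self-consistent:
print(payoff(10_000_001, predicted_one_boxing=True))  # 11000001, i.e. ~$11M
```

The only branch in which Omega's prediction can come out correct is the one where the lottery pays out, which is how the strategy tries to force either a guaranteed win or a falsified prediction.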

hairyfigment
While that sounds clever at first glance:
* We're not actually assuming a time-traveling Omega.
* Even if we were, he would just not choose you for the game. You'd get $0, which is worse than causal decision theory.