I agree that it's not outright impossible under the laws of thermodynamics, but I personally consider it rather unlikely to work on the scales we're talking about. This all seems somewhat tangential, though: what effect would it have on the point of the post if "rewinding events" in a macroscopic volume of space were theoretically possible, and easily within the reach of a good recursively self-improving AGI?
luzr: The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not. Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings.
The problems of morality seem to be quite tough, particularly when tradeoffs are involved. But I think in your scenario, Lightwave, I agree with you.
nazgulnarsil: I disagree about the "unlimited power", at least as far as practical consequences are concerned. We're not really talking about unlimited power here, only incredible, humanly unattainable power at most. So rewinding isn't necessarily an option. (Actually, it sounds pretty unlikely to me, given the laws of thermodynamics as I understand them.) Lives that are never lived should count morally much the way opportunity cost counts in economics. This means that, with sufficient optimization power, far better and far worse outcomes are probably possible than any of the ones we ordinarily consider in our day-to-day actions, but the utilitarian calculation still works out.
roko: It's true that the discussion must be limited by our current ignorance. But since we have a notion of morality/goodness that describes (however imperfectly) what we want, and that has so far not been shown to be necessarily incoherent, we should decide what to do based on our current understanding of it. It's true that there are many ways in which our moral/empathic instincts seem irrational or badly calibrated, but so far (as far as I know) each such inconsistency can be understood as a difference between our CEV and our native mental equipment, so we should still operate under the assumption that there is a notion of morality that is perfectly correct in the sense that it is invariant under further introspection. That is the morality we should strive to live by. Now, as far as I can tell, most (if not all) of morality is about the well-being of humans and of things (like brain emulations, or possibly some animals, or ...) that are like us in certain ways. Thus it makes sense to talk about morally significant or insignificant things, unless you have some reason why this abstraction seems unsuitable. The notion of "morally significant" seems to coincide with sentience.
But what if there is no morality that is invariant under introspection?
Tim:
Eliezer was using "sentient" practically as a synonym for "morally significant". Everything he said about the hazards of creating sentient beings was about that. It's true that in our current state, our feelings of morality come from empathic instincts, which may not stretch (without introspection) so far as to feel concern for a program that implements the algorithms of consciousness and cognition, perhaps even if it's a human brain simulation. However, upon further consideration and reflection, we (or at least most of us, I think) find that a human brain simulation is morally significant, even though much about the consequences remains unclear. The same should be true of a consciousness that isn't in fact a simulation of a human, but of course determining what is and what is not conscious is the hard part.
It would be a mistake to create a new species that deserves our moral consideration, even if at present we would not give it the moral consideration it deserves.
Lord:
I don't think there are scientists who, in their capacity as scientists, debate what constitutes natural and artificial.
Tim:
That's beside the point, which was that if you could somehow find BB(n) for n equal to the size of a Turing machine (modified to run on an empty string), then the halting problem is solved for that machine.
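To spell out the reduction (a minimal sketch of my own, not something from the original discussion), assume two hypothetical helpers: bb(n), an oracle returning the Busy Beaver value BB(n), and simulate(machine, k), which runs a machine on an empty input for at most k steps and reports whether it halted. Then halting for that one machine is decided by a bounded simulation:

    # A minimal sketch, assuming hypothetical helpers:
    #   bb(n)                - oracle for BB(n), the maximum number of steps
    #                          any halting n-state machine takes on empty input
    #   simulate(machine, k) - runs `machine` on empty input for at most k
    #                          steps; returns True iff it halted within k steps
    def halts_on_empty_input(machine, n, bb, simulate):
        # If the machine is still running after BB(n) steps, it can never halt,
        # since BB(n) bounds the running time of every halting n-state machine
        # on an empty input.
        return simulate(machine, bb(n))

Of course, this only relocates the difficulty: BB(n) is itself uncomputable, which is exactly why the observation doesn't give a general halting-problem solver.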
Silly typo: I'm sure you meant 4:1, not 8:1.
luzr: You're currently using a program which can access the internet. Why do you think an AI would be unable to do the same? Also, computer hardware exists for manipulating objects and acquiring sensory data. Furthermore, by hypothesis, the AI can improve itself better than we can, because, as EY pointed out, we're not exactly cut out for programming. Also, improving an algorithm does not necessarily increase its complexity. And you don't have to simulate reality perfectly to understand it, so there is no showstopper there. Total simulation is what we do when we don't have anything better.
luzr: The strength of an optimizing process (i.e. an intelligence) does not necessarily dictate, or even affect too deeply, its goals. This has been one of Eliezer's themes. And so a superintelligence might indeed consider incredibly valuable something that you wouldn't be interested in at all, such as cheesecake, or smiling faces, or paperclips, or busy beaver numbers. And this is another theme: rationalism does not demand that we reject values merely because they are consequences of our long history. Instead, we can reject values, or broaden them, or otherwise change our moralities, when sufficient introspection forces us to do so. For instance, consider how our morality has changed to reject outright slavery; after sufficient introspection, it does not seem consistent with our other values.