The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Yes, if it has compressible preferences, which in reality is the case for e.g. humans and many plausible AIs.
In reality, where this really bites is problems where you discover that your preferences are stated in terms of an incorrect ontology, e.g. souls or anticipated future experience.
I think that depends upon the structure of reality. Maybe there will be a series of philosophical shocks as severe as the physicality of mental states, Big Worlds, quantum MWI, etc. Suspicion should definitely be directed at what horrors will be unleashed upon a human or AI that discovers a correct theory of quantum gravity.
Just as Big World cosmology can erode aggregative consequentialism, maybe the ultimate nature of quantum gravity will entirely erode any rational decision-making; perhaps some kind of ultimate ensemble theory already has.
On the other hand, the idea of a one-time shock is also plausible.
I believe even personal identity falls under this category. A lot of moral intuitions work with the-me-in-the-future object, as marked in the map. To follow these intuitions, it is very important for us to have a good idea of where the-me-in-the-future is, i.e. to have a good map of this thing. When you get to weird thought experiments involving copying, this epistemic step br... (read more)