Like I said, that part is tricky to formalize. But, ultimately, it's an individual choice on the part of the model (and, indirectly, the agent being modeled). I can't formalize what counts as a valid continuation today, let alone in all future societies. So, leave it up to the agents in question.
As for the racism thing: yeah, so? Would you rather we encode our own morality into the machine, so that it ignores aspects of people's personalities we don't like? I suppose you could insist that the models behave as though they had access to the entire factual database of the AI (so, at least, they couldn't be racist simply out of factual inaccuracy), but that might be tricky to implement.
> As for the racism thing: yeah, so?
Which scenario are you affirming? I'm trying to understand your intention here. Would a racist get to veto a nonracist future version of themself?
I've been reading through this to get a sense of the state of the art at the moment:
http://lukeprog.com/SaveTheWorld.html
Near the bottom, in the section on safe utility functions, the discussion seems to center on analyzing human values and extracting from them some sort of clean, mathematical utility function that is universal across humans. That seems like an enormously difficult (potentially impossible) way to solve the problem, given all the issues raised there.
Why shouldn't we just try to design an average bounded utility maximizer? You'd build models of all your agents (if you can't model arbitrary ordered information systems, you haven't got an AI), run them through your model of the future resulting from a choice, take the summation of their utility over time, and take the average across all people at all times. To measure the utility (or at least approximate it), you could just ask the models. The number this spits out is the output of your utility function. It'd probably also be wise to add a reflexive consistency criterion, such that the original state of your model must consider all future states to be 'the same person' -- and I acknowledge that that last one is going to be a bitch to formalize. When you've got this utility function, you just... maximize it.
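To make the proposal concrete, here's a minimal sketch of that calculation in Python. Everything in it is assumed for illustration: the agent-model interface (evolve, same_person, reported_utility), the simulate_future function, the utility bounds, and the time horizon are hypothetical stand-ins, not anything a real system would expose this simply.

```python
# Sketch of the averaged, bounded utility function described above.
# All interfaces here (AgentModel methods, simulate_future) are hypothetical.

U_MIN, U_MAX = -1.0, 1.0  # fixed bounds on any single reported utility


def bounded(u):
    """Clamp a reported utility into the fixed range."""
    return max(U_MIN, min(U_MAX, u))


def choice_utility(choice, agent_models, simulate_future, horizon):
    """Average bounded utility of `choice` over all modeled agents and times.

    agent_models    -- models of the agents whose welfare we count
    simulate_future -- hypothetical simulator: (choice, t) -> world state at t
    horizon         -- number of future time steps to evaluate
    """
    total, samples = 0.0, 0
    for agent in agent_models:
        future_self = agent
        for t in range(horizon):
            state = simulate_future(choice, t)
            future_self = future_self.evolve(state)  # the agent as modeled at time t
            # Reflexive consistency: the original model must regard this
            # future state as 'the same person'; otherwise the branch is
            # not counted (formalizing this test is the hard part).
            if not agent.same_person(future_self):
                continue
            # "Just ask the model": the model reports its own utility.
            total += bounded(future_self.reported_utility(state))
            samples += 1
    return total / samples if samples else 0.0


def best_choice(choices, agent_models, simulate_future, horizon=100):
    """Maximize the averaged utility: pick the choice with the highest score."""
    return max(choices,
               key=lambda c: choice_utility(c, agent_models,
                                            simulate_future, horizon))
```

Clamping each reported utility before averaging is what provides the protection against utility monsters mentioned below: no single agent's report can dominate the average.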
Something like this approach seems much more robust. Even if human values are inconsistent, we still end up in a universe where most (possibly all) people are happy with their lives, and nobody gets wireheaded. Because it's bounded, you're even protected against utility monsters. Has something like this been considered? Is there an obvious reason it won't work, or would produce undesirable results?
Thanks,
Dolores