Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.
But aren't you just setting up a system that values states of the world based on the feelings they contain? How does that make any more sense?
You're arguing as though neurological reward maximization is the obvious goal to fall back to if other goals aren't specified coherently. But people have filled in that blank with all sorts of things. "Nothing matters, so let's do X" goes in all sorts of zany directions.
I would. I'd want to do some shorter test runs first though, to get used to the idea, and I'd want to be sure I was in a good mood for the main reset point.
It would probably be good to find a candidate who was enlightened in the Buddhist sense, not only because they'd be generally calmer and more stable, but specifically because enlightenment involves confronting the incoherent naïve concept of self and understanding the nature of impermanence. From the enlightened perspective, the peculiar topology of the resetting subjective experience would not be a source of anxiety.
Q: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?
Stan Franklin: Proofs occur only in mathematics.
This seems like a good point, and something that's been kind of bugging me for a while. It seems like "proving" an AI design will be friendly is like proving a system of government won't lead to the economy going bad. I don't understand how it's supposed to be possible.
I can understand how you can prove a hello world program will print ...
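To make the contrast concrete, the easy end of that spectrum looks roughly like the toy Python sketch below (my own illustration, with a hypothetical `hello` function, not anything from the thread): the entire specification is one fact about a closed system, so checking it never requires modelling an environment.

```python
def hello() -> str:
    # The program's entire observable behaviour is the value it returns.
    return "hello world"

# The "spec" is a single equation about a closed system: no environment,
# no other agents, no feedback loops. Checking it is straightforward.
assert hello() == "hello world"
```

A friendliness claim, by contrast, is a claim about how the system interacts with a world you can't enumerate, which is why "proof" feels like the wrong word here.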
There are no known structures in Conway's Game of Life that are robust. Even eaters, which are used to soak up excess gliders, only work when struck from specific directions.
If you had a Life board that was extremely sparsely populated, it's possible that a clever agent could send out salvos of gliders and other spaceships in all directions, in configurations that would stop incoming projectiles, inform it about the location of debris, and gradually remove that debris so that it would be safe to expand.
At a 50% density, the agent would need to start with ...
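For anyone who wants to poke at the eater and glider claims above, here's a minimal Python sketch (my own illustration, not from the thread): it implements the standard Life rule on a set of live cells, checks that eater 1 (the "fishhook") is a still life, and checks that a glider reappears one cell diagonally displaced every four generations. It doesn't model a glider actually striking an eater, just the two pieces mentioned.

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation on a set of (row, col) live cells."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def parse(rows):
    """Turn strings ('O' = alive, '.' = dead) into a set of (row, col) cells."""
    return {(r, c) for r, line in enumerate(rows)
            for c, ch in enumerate(line) if ch == 'O'}

# Eater 1, the still life used to absorb excess gliders.
eater = parse(["OO..",
               "O.O.",
               "..O.",
               "..OO"])
assert step(eater) == eater  # unchanged from generation to generation

# A glider: after 4 generations it is the same shape, shifted one cell diagonally.
glider = parse([".O.",
                "..O",
                "OOO"])
g = glider
for _ in range(4):
    g = step(g)
assert g == {(r + 1, c + 1) for r, c in glider}
print("eater 1 is a still life; the glider advances one cell every 4 generations")
```

The fragility the comment points at shows up as soon as you perturb things: send the glider in from a direction the eater isn't designed for, and the collision generally wrecks the eater rather than being absorbed cleanly.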