Be happy that people have died and sad that they remain alive (same qualifiers as before: the person is not suffering so much that even nothingness is preferable, etc.), and the reverse for people they dislike
Hmmm.
What is known is that people who go to the afterlife don't generally come back (or, at least, don't generally come back with their memories intact). Historical evidence strongly suggests that anyone who remains alive will eventually die... so remaining alive means you have more time to enjoy what is nice here before moving on.
So, I don't imagine this would be the case unless the afterlife is strongly known to be significantly better than here.
Want to kill people to benefit them (certainly, we could relieve a lot of third-world suffering by nuking places, if the inhabitants have a bad life but a good afterlife. Note that the objection "their culture would die out" would not hold if there is an afterlife.)
Is it possible for people in the afterlife to have children? It may be that their culture will quickly run out of new members if they are all killed off. Again, though, this is only true if the afterlife is certain to be better than here.
In the case of people who oppose abortion because fetuses are people (a belief I expect overlaps heavily with belief in life after death), be in favor of abortion if the fetus gets a good afterlife
Be less willing to kill their enemies the worse the enemy is
Both true if and only if the afterlife is known to be better.
Do extensive scientific research trying to figure out what life after death is like.
People have tried various experiments, like asking people who have undergone near-death experiences. However, there is very little data to work with and I know of no experiment that will actually give any sort of unambiguous result.
Genuinely think that having their child die is no worse than having their child move away to a place where the child cannot contact them
And where their child cannot contact anyone else who is still alive, either. The child is thrown into a strange and unfamiliar place, among people the parent knows nothing about. I can see that making parents nervous...
Drastically reduce how bad they think death is when making public policy decisions; there would still be some effect, because death is separation and things that cause death also cause suffering, but we act as though causing death makes a policy uniquely bad and preventing it uniquely good
Exile is also generally considered uniquely bad; and since the dead have never been known to return, death is at the very least a form of exile that can never be revoked.
Not oppose suicide
...depends. Many people who believe in life after death also believe that suicide makes things very difficult for the victim there.
Support the death penalty as more humane than life imprisonment.
Again, this depends; if there is a Hell, then the death penalty kills a person without allowing him much of a chance to try to repent, and could therefore be seen as less humane than life imprisonment.
The worse the afterlife is, the more similar people's reactions will be to those in a world with no afterlife. In the limit, where the afterlife is as bad as or worse than nonexistence, people would be as death-averse as they are now. But this is contrary to how people claim to think of the afterlife when they assert belief in it. The afterlife can't be good enough to be comforting and yet bad enough not to lead to any of the conclusions I described - and that includes being bad in ways such as resembling exile, being irreversible, etc.
And I already said that if there is a Hell (a selectively bad afterlife), many of these won't apply, but the existence of Hell has its own problems.
A putative new idea for AI control; index here.
After working for some time on the Friendly AI problem, it's occurred to me that a lot of the issues seem related. Specifically, all the following seem to have commonalities:
Speaking very broadly, there are two features all of them share:
What do I mean by that? Well, imagine you're trying to reach reflective equilibrium in your morality. You do this by using good meta-ethical rules, zooming up and down the various moral levels, making decisions on how to resolve inconsistencies, etc... But how do you know when to stop? You stop when your morality is perfectly self-consistent - when you no longer have any urge to change your moral or meta-moral setup. In other words, the stopping point (and the convergence to the stopping point) is entirely self-referentially defined: the morality judges itself, without reference to any other moral considerations. You input your initial moral intuitions and values, and you hope this will cause the end result to be "nice", but the definition of the end result does not include your initial moral intuitions. (Note that some moral realists could see this process dependence as a positive - except that these processes have many convergent states, not just one or a small grouping.)
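The self-referential stopping rule can be sketched in a toy model. Everything here is my own invented illustration, not anything from the post: the "morality" is a single number, and the update rule is an arbitrary dynamics with two stable fixed points. The point it shows is just the structural one - the process halts when it no longer "wants" to change itself, and which equilibrium it halts at depends entirely on the starting intuition.

```python
# Toy model of a self-referentially defined stopping point.
# The state x updates itself until the update it generates is
# negligible; the stopping condition mentions only the state's own
# dynamics, never the starting values. The chosen dynamics
# (x <- x - step * V'(x), with V(x) = (x^2 - 1)^2) has two stable
# equilibria, at x = -1 and x = +1.

def equilibrate(x, step=0.1, tol=1e-9, max_iters=10_000):
    """Iterate until the self-update is below tol (the 'morality'
    judges itself consistent) and return the equilibrium reached."""
    for _ in range(max_iters):
        update = -step * 4 * x * (x**2 - 1)   # -step * V'(x)
        if abs(update) < tol:                  # purely self-referential
            return x                           # stopping rule
        x += update
    return x

# Two slightly different initial intuitions, two different equilibria:
print(round(equilibrate(0.3), 3))    # -> 1.0
print(round(equilibrate(-0.3), 3))   # -> -1.0
```

The initial value feeds the process but appears nowhere in the stopping condition - which is the feature being described: you hope the input makes the endpoint nice, but "nice" is not part of the definition of the endpoint.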
So when the process goes nasty, you're pretty sure to have achieved something self-referentially stable, but not nice. Similarly, a nasty CEV will be coherent and have no desire to further extrapolate... but that's all we know about it.
The second feature is that any process has errors - computing errors, conceptual errors, errors due to the weakness of human brains, etc. If you visualise these as noise, you can see that noise in a convergent process is more likely to cause premature convergence: if the process ever reaches a stable self-referential state, it will stay there (and if the process is a long one, early noise will cause great divergence at the end). For instance, imagine you have to reconcile your belief in preserving human cultures with your belief in individual human freedom - a complex balancing act. But if, at any point along the way, you simply jettison one of the two values completely, things become much easier - and once jettisoned, the missing value is unlikely ever to come back.
Or, more simply, the system could get hacked. When exploring a potential future world, you could become so enamoured of it that you overwrite any objections you had. It seems very easy for humans to fall into these traps - and again, once you lose something of value in your system, you don't tend to get it back.
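The "jettisoned values don't come back" dynamic can be simulated. This is entirely my own toy construction (the weights, noise level, and drift rule are invented for illustration): two value-weights are nudged toward compromise with random error at each step, and a weight that ever hits zero is jettisoned - an absorbing state the process never leaves. Longer extrapolation processes then lose a value more often, simply because noise has more chances to push a weight into the absorbing state.

```python
import random

# Toy simulation: balancing two values under noisy updates, where
# losing a value (weight hits 0) is permanent. Survival rates fall
# as the process gets longer.

def reconcile(steps, noise=0.15, rng=None):
    """Run one noisy reconciliation; return True iff both values
    survive (neither weight was ever driven to zero)."""
    rng = rng or random.Random()
    a, b = 1.0, 1.0                      # weights on the two values
    for _ in range(steps):
        if a == 0.0 or b == 0.0:         # jettisoned: absorbing state
            break
        # drift toward a compromise between the weights, plus noise
        a = max(0.0, a + 0.1 * (b - a) + rng.gauss(0, noise))
        b = max(0.0, b + 0.1 * (a - b) + rng.gauss(0, noise))
    return a > 0 and b > 0

def survival_rate(steps, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(reconcile(steps, rng=rng) for _ in range(trials)) / trials

# More steps means more opportunities for noise to destroy a value:
print(survival_rate(10))     # high
print(survival_rate(200))    # noticeably lower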
Solutions
And again, very broadly speaking, there are several classes of solutions to deal with these problems: