I think creating sentience is a much easier project than FAI, especially proven FAI. We've got plenty of examples of sentience.
This is why I was hesitant to fully agree with your prediction that any sentient programs created by humans have intrinsic moral weight. A sentient UFAI could have neutral or even negative moral weight (though this is a subjective judgment).
The main reason this outcome may be unlikely is that most of the failure modes that would prevent a created AGI from being an FAI would also obliterate its potential for sentience.
I hadn't thought about the case of a sentient UFAI-- I should think that self-defense would apply, though self-defense becomes a complicated issue if you're a hard-core utilitarian.
In this video, at around 48:00, Eliezer talks about uploading and about how it wouldn't be murder if his meat body were anesthetized before the upload and killed without regaining consciousness.
It's arguable that it wouldn't be murder, but I'm not clear on why Eliezer would want to do it that way. I have some guesses about why one might not want to let the meat body wake up (the legal and practical complications of a doubled but diverging identity, or the meat version feeling hopelessly envious), but I'm not sure whether either of them applies.
On the other hand, I can think of a couple of reasons for *not* eliminating the meat version. One is that two Eliezers would presumably be better than one, though I don't have a strong intuition about the optimum number of Eliezers. The other, which I consider more salient, is that the meat version serves as a backup in case the upload isn't as good as hoped.
More generally, what would folks here consider to be good enough evidence that uploading was worth doing?