Thanks for the advice @GeneSmith!
Regarding the 'probability assertions' I made, the following (probably) sums it up best:
$P(\text{solving aging} \cap \text{doom}^c \mid \text{AGI}) + P(\text{doom} \mid \text{AGI}) \approx 1.$
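Put differently (a restatement, assuming the two listed outcomes are meant to be exhaustive conditional on AGI): the remaining outcome, where AGI arrives, doom is averted, and aging still isn't solved, gets negligible probability:

$P(\neg\text{solving aging} \cap \text{doom}^c \mid \text{AGI}) \approx 0.$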
I understand the ethical qualms. The point I was trying to make was more along the lines of 'if I can affect the system in a positive direction, could this maximise my/humanity's mean utility function?'. Acknowledging this is a weird way to put it (as I assume a utility function for myself/humanity), I'd hoped it would provide insight into my thought process.
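To make that framing slightly more concrete (my own rough formalisation, with $U_{\text{longevity}}$ and $U_{\text{doom}}$ as hypothetical placeholder utilities for the two outcomes above):

$\mathbb{E}[U \mid \text{AGI}] \approx P(\text{solving aging} \cap \text{doom}^c \mid \text{AGI}) \cdot U_{\text{longevity}} + P(\text{doom} \mid \text{AGI}) \cdot U_{\text{doom}},$

so 'affecting the system in a positive direction' amounts to shifting probability mass from the second term to the first.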
Note: in the post I didn't specify the $\cap\,\text{doom}^c$ part. I'd hoped it was implicit, as I don't care much for the scenario where aging is solved and AI enacts doom right afterwards. I'm aware this is still an incomplete model (and is quite non-rigorous).
Again, I appreciate the response and the advice ;)