It would seem rational to accept any argument that is not fallacious; but this leads to consideration of problems such as Pascal's mugging and other exploits.
I've had a realization of a subconscious triviality: for me to accept an argument as true, it is not enough that I find no error in it. The argument must also be structured so that I would expect to have found an error if it were invalid (or I must first construct such a structured version myself). That's how mathematical proofs work: they are structured so that finding an error requires little computational power, only knowledge of the rules and their reliable application; in the extreme case an entirely unintelligent machine can check a proof.
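The point about an unintelligent machine can be made concrete. Here is a toy sketch (all names and the encoding are my own invention, not a standard library) of a checker that verifies a derivation using only modus ponens, with no insight into what the formulas mean:

```python
# Toy proof checker: each step must be a premise or follow by modus ponens
# from already-derived statements. Formulas are plain strings; an implication
# is encoded as a pair (antecedent, consequent). The checker applies only
# mechanical rule-matching; it has no understanding of the content.

def check_proof(premises, steps):
    """Return True iff every step is a premise or follows by modus ponens."""
    derived = set(premises)
    for step in steps:
        if step in derived:
            continue
        # Modus ponens: from P and (P -> step), conclude step.
        if any(isinstance(p, tuple) and len(p) == 2
               and p[1] == step and p[0] in derived
               for p in derived):
            derived.add(step)
        else:
            return False  # step not justified; proof rejected
    return True

premises = {"it rains", ("it rains", "ground is wet")}
print(check_proof(premises, ["it rains", "ground is wet"]))  # True
print(check_proof(premises, ["ground is dry"]))              # False
```

An argument structured this way shifts the burden: the reader needs no intelligence to verify it, only patience, which is exactly what makes a hidden flaw unlikely to survive inspection.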
In light of this, I propose that those who want to make a persuasive argument should try to structure it so that it would be easy to find flaws in it. The same goes for thought experiments and hypothetical situations. Those seem, rather often, to be constructed with the entirely opposite goal in mind: to obstruct the verification process, or to prevent the reader from even trying to find flaws.
Something else, tangentially related to arguments: faulty models are the prime cause of decision errors, yet faulty models are the staple of the thought experiment; nobody raises an eyebrow, since all models are ultimately imperfect.
However, to accept an argument based on an imperfect model, one must be able to correctly propagate the error and estimate the error in the final conclusion, because a faulty model may be constructed so that, while it differs only insubstantially from reality, the difference diverges massively along the chain of reasoning. My example of this is the trolley problems. The faults of the original model are nothing out of the ordinary: simplified assumptions about the real world, perfect information, and so on. Normally you can have those faults in a model and still arrive at a reasonably close outcome. Here, the end result is throwing fat people onto tracks, cutting up travellers for organs, and similar behaviours which we intuitively know we could live a fair lot better without. How does that happen? In the real world, strongly asymmetrical relations of the form 'the death of 1 person saves 10 people' are very rare (an emergent property of the complexity of the real world that is lacking in the imaginary worlds of trolley problems), while decision errors are not nearly so rare, so most of the people killed to save others would end up killed in vain.
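The "killed in vain" claim can be put in back-of-the-envelope form. The numbers below are invented purely for illustration; the point is only that when genuine trade-offs are much rarer than misjudged ones, the error rate among actual sacrifices becomes enormous:

```python
# Back-of-the-envelope sketch (all numbers invented for illustration):
# genuine 'kill 1 to save 10' situations are rare, but situations that
# merely *look* like one are not, so most sacrifices happen in error.

base_rate = 0.001   # assumed fraction of apparent dilemmas that are genuine
error_rate = 0.10   # assumed chance of misjudging a non-genuine case as genuine

n = 100_000                                # apparent dilemmas considered
genuine = n * base_rate                    # 100 real trade-offs
mistaken = (n - genuine) * error_rate      # 9,990 misjudged ones

killed_in_vain = mistaken / (mistaken + genuine)
print(f"{killed_in_vain:.1%} of sacrifices were in vain")  # 99.0%
```

Even a modest 10% misjudgement rate swamps the rare genuine cases, which is exactly the divergence the trolley model hides by stipulating perfect information.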
I don't know how models can be structured so as to facilitate propagation of the model's error, but it seems necessary for arguments based on models to be convincing.
To me singularity and cryonics are two different beasts.
The singularity is too far into the fog of the future to be taken seriously today. Maybe when there are some significant advances in the relevant fields, whenever that might happen. So far, not even a single neuron has been fully simulated. Progress might go so many different ways; I find the singularity to be just one of many possible directions, and I know my imagination is not up to par to even consider most of them.
On the other hand, cryonics is basically a bet that a frozen brain can potentially be fully restored. We already know that single cells can be (sperm banks do it on an industrial scale), and there has been some success with organs and even whole organisms. So we might be only a few steps away from being able to repair the damage done by freezing.
I think this may have been downvoted because to some the Singularity seems settled and rational. I take it to be the idea that eventually, inevitably, machines will become smart enough to improve themselves; that once they become just a bit smarter than human programmers and gain the capacity for self-modification, it will be a runaway phenomenon.
And that's why FAI is such a big deal: precisely because it's "far into the fog of the future" in the sense that we don't have any control over the outcome (unless we align its values very precisely...