It would seem rational to accept any argument that is not fallacious, but this leads to problems such as Pascal's mugging and other exploits.
I've had a realization of something subconsciously trivial: for me to accept an argument as true, it is not enough that I find no error in it. The argument must also be structured so that I would expect to have found an error if it were invalid (or I must first restructure it that way myself). That is how mathematical proofs work: they are structured so that finding an error requires little computational power (only knowledge of the rules, applied reliably); in the extreme case an entirely unintelligent machine can check a proof.
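As an illustration of that extreme case, here is a minimal sketch of a machine-checkable proof in Lean (the theorem name is mine, chosen purely for illustration): the kernel verifies each step mechanically, with no intelligence required.

```lean
-- A proof structured so that an unintelligent machine (the Lean kernel)
-- can verify it: every step is checked mechanically against the rules.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```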
In light of this, I propose that those who want to make a persuasive argument should try to structure it so that it would be easy to find flaws in it. The same goes for thought experiments and hypothetical situations; those rather often seem to be constructed with the opposite goal in mind: to obstruct the verification process, or to keep the reader from trying to find flaws at all.
Something else is tangentially related to the arguments: faulty models are the prime cause of decision errors, yet faulty models are a staple of thought experiments, and nobody raises an eyebrow, since all models are ultimately imperfect.
However, to accept an argument based on an imperfect model, one must be able to correctly propagate the error and estimate the error in the final conclusion, because a faulty model may be constructed so that it differs only insubstantially from reality, yet in a way that makes the difference diverge massively along the chain of reasoning.

My example of this is the trolley problems. The faults of the original model are nothing out of the ordinary: simplifying assumptions about the real world, perfect information, and so on. Normally a model can have such faults and still arrive at a reasonably close outcome. Here the end result is pushing fat men onto tracks, cutting up travellers for their organs, and similar behaviours which we intuitively know we could live a fair lot better without. How does that happen? In the real world, strongly asymmetrical relations of the form 'the death of 1 person saves 10 people' are very rare (an emergent property of the complexity of the real world, which is lacking in the imaginary worlds of trolley problems), while decision errors are not nearly so rare, so most of the people killed to save others would end up killed in vain.
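A rough back-of-the-envelope sketch of that base-rate point (the numbers below are purely illustrative assumptions, not data):

```python
# Illustrative only: if genuine "kill 1 to save 10" situations are far
# rarer than situations we merely misjudge as such, then most sacrifices
# actually made would be made in vain.
p_genuine = 1e-6    # assumed rate of dilemmas that really are 1-saves-10
p_misjudged = 1e-3  # assumed rate of dilemmas wrongly judged to be 1-saves-10

# Of all sacrifices made, the fraction based on a misjudgement:
p_in_vain = p_misjudged / (p_misjudged + p_genuine)
print(f"Fraction of sacrifices made in vain: {p_in_vain:.3f}")  # ~0.999
```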
I don't know how models can be structured so as to facilitate the propagation of the model's error, but it seems necessary for arguments based on models to be convincing.
The issues brought up here touch on a concern of my own regarding claims that many Less Wrongers accept.
I don't know if I can let myself invest belief in things such as the Singularity (once I figure out what I mean by it) or cryonics without working some calculations out for myself.
The problem is that I don't know how to do that; and moreover, even if I did, I worry that the attempt would involve so many possible points of error (and things to overlook) that it would invariably leave me overconfident in whatever I was already leaning towards, with an anchoring bias towards acceptance given the posts on LW. Does anyone have any thoughts on this?
To me, the singularity and cryonics are two different beasts.
The singularity is too far into the fog of the future to be taken seriously today; maybe when there are some significant advances in the relevant fields, whenever that might happen. So far, not even a single neuron has been fully simulated. Progress might go in so many different ways that I see the singularity as just one of many possible directions, and I know my imagination is not up to even considering most of them.
On the other hand, cryonics is basically a bet that a frozen brain can pote...