There's a similar guideline in the software world:
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
Your first point seems interesting. How specifically should we go about structuring arguments to make flaws easy to find?
To me the solution to this problem is to not rely too much on raw consequentialism for dealing with real-life situations, because I know my model of the world is imperfect, because I lack the computing power to track all the consequences of an action and evaluate their utility, and because I don't even know my own utility function precisely.
So I'm trying to devise ethical rules that come partly from consequentialism, but that also take into consideration lessons learned from history, both my own personal experience and humanity's history. Those rules say, for example, that I should not kill someone even if I think it'll save 10 lives, because usually when you do that, either you kill the person and fail to save the 10 others, or you failed to think of a way to save the 10 without killing anyone, or you create far-reaching consequences that in the end cost more than the 10 saved lives (for example, breaking the "don't kill" taboo and leading people to follow your example even in cases where they'll fail to save the 10 persons). That's less optimal than using consequentialism wisely - but also much less error-prone, at least for me, than trying to wield a dangerous tool that I'm not sm...
Faulty models are the prime cause of decision errors; yet faulty models are the staple of thought experiments...
The purpose of a thought experiment is to analyse our theories in extreme situations. The understanding this gives can then be useful in non-extreme situations. An analogy to mathematics: when graphing y = x/(x^2+1) it is useful to consider the value of y as x goes to infinity, even if we only need a sketch for -2 < x < 2. Trolley Problems allow us to focus on the conflict between intuition and utilitarianism. The understanding thought ex...
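To make the analogy concrete, the limiting behaviour is easy to work out:

\[ \lim_{x \to \infty} \frac{x}{x^2+1} = \lim_{x \to \infty} \frac{1/x}{1 + 1/x^2} = 0, \]

so even a sketch restricted to -2 < x < 2 should show the curve turning back towards the axis after its peak at x = 1, something you would likely miss by plotting only a few points near the origin.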
similar behaviours which we intuitively know we could live a fair lot better without
The situations presented are indeed ones we could live better without, but the whole point of thought experiments is to construct the worst possible world, and find a way to decide that works even under those circumstances. By your logic, we could easily end up saying "it's useless to argue about how one or two electrons behave, real-world objects have many more of them... and anyway, tunneling effects are so weird and unintuitive that we surely have a wr...
The issues brought up here impinge on a concern of my own regarding ideas that many Less Wrongers accept.
I don't know if I can let myself invest belief in things such as the Singularity (once I figure out what I mean by it) or cryonics without working some calculations out for myself.
The problem is that I don't know how to do that; and moreover, even if I did, I worry that the attempt would have so many possible points of error (and things to overlook) that it would invariably leave me overconfident in whatever I was already leaning towards, with an anchoring bias towards acceptance given the posts on LW. Does anyone have any thoughts on this?
I like the first argument more than the second.
"How do we know that not donating to this one poor child with a rare cancer will actually save a bunch of Africans?" would be justified, even though if you're going for lives saved you really should just go with the Africans. Now, it helps that we have metrics, but you still have to decide to do something ever.
This is an excellent point, but wouldn't making a model as transparent as possible require attention to one's audience? If so, wouldn't that preclude any general method or rule for how to make a model transparent?
Is the following a loosely accurate summary: "For certain types of arguments, the probability that the argument has no obvious flaws even when it is wrong is high enough that one shouldn't count such arguments as significant evidence for their conclusions."
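One way to make this precise (the numbers here are purely illustrative, not from the comment above): finding no flaw updates you by the likelihood ratio

\[ \frac{P(\text{no flaw found} \mid \text{conclusion right})}{P(\text{no flaw found} \mid \text{conclusion wrong})}. \]

For a long verbal argument that would look flawless to me, say, 90% of the time even when wrong and 99% of the time when right, the ratio is about 0.99/0.90 ≈ 1.1, hardly any evidence at all. For a mechanically checkable proof, where the chance of a wrong argument surviving inspection is tiny, the same observation is close to conclusive.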
One approach is to apply the scientific method: every model must have testable predictions. For the case of Pascal's mugging, you can ask the mugger to demonstrate their "magic powers from outside the Matrix" in a benign but convincing way ("Please show me a Turing machine that simulates an amoeba"). If they refuse or are unable to, you move on.
It would seem rational to accept any argument that is not fallacious; but this leads to consideration of problems such as Pascal's mugging and other exploits.
I've had a realization of a subconscious triviality: for me to accept an argument as true, it is not enough that I find no error in it. The argument must also be structured in such a way that I would expect to have found an error if it were invalid (or I must first construct such a structured version myself). That's how mathematical proofs work - they are so structured that finding an error requires little computational power (only knowledge of the rules and reliability in applying them); in the extreme case an entirely unintelligent machine can check a proof.
In light of this I propose that those who want to make a persuasive argument should try to structure the argument so that it would be easy to find flaws in it. This also goes for thought experiments and hypothetical situations. Those seem rather often to be constructed with the entirely opposite goal in mind - to obstruct the verification process or to prevent the reader from even trying to find flaws.
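To illustrate the "unintelligent machine can check a proof" point, here is a minimal toy sketch of my own (an analogy only, not a claim about how any real proof assistant works): a checker that accepts a proof only if every line is either a listed premise or follows from two earlier lines by modus ponens, using nothing but pattern matching.

```python
# Toy proof checker: no intelligence, just mechanical rule matching.
# A proof is a list of (claim, justification) steps. A justification is
# either the string "premise" or ("modus ponens", i, j), meaning that
# earlier step i asserted ("implies", <claim of step j>, <this claim>).

def check_proof(premises, steps):
    derived = []
    for claim, justification in steps:
        if justification == "premise":
            ok = claim in premises
        else:
            _, i, j = justification
            ok = (i < len(derived) and j < len(derived)
                  and derived[i] == ("implies", derived[j], claim))
        if not ok:
            return False, claim          # the first line that fails to check
        derived.append(claim)
    return True, None

premises = [("implies", "rains", "wet"), "rains"]
steps = [
    (("implies", "rains", "wet"), "premise"),   # step 0
    ("rains", "premise"),                       # step 1
    ("wet", ("modus ponens", 0, 1)),            # step 2: from steps 0 and 1
]
print(check_proof(premises, steps))             # -> (True, None)
```

Nothing in the checker understands rain or wetness; the entire burden of being checkable sits on the structure of the proof, which is the property I wish persuasive arguments had.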
Something else, tangentially related to the arguments: faulty models are the prime cause of decision errors; yet faulty models are the staple of thought experiments, and nobody raises an eyebrow, since all models are ultimately imperfect.
However, to accept an argument based on an imperfect model one must be capable of correctly propagating the error and estimating the error in the final conclusion, because a faulty model may itself differ only insubstantially from reality while the difference diverges massively along the chain of reasoning. My example of this is the Trolley Problems. The faults of the original model are nothing out of the ordinary: simplified assumptions about the real world, perfect information, and so on. Normally you can have those faults in a model and still arrive at a reasonably close outcome. Here the end result is throwing fat people onto tracks, cutting up travellers for organs, and similar behaviours which we intuitively know we could live a fair lot better without. How does that happen? In the real world, strongly asymmetrical relations of the form 'the death of 1 person saves 10 people' are very rare (an emergent property of the complexity of the real world that is lacking in the imaginary worlds of trolley problems), while decision errors are not nearly so rare, so most of the people killed to save others would end up killed in vain.
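To put rough numbers on that last claim (the figures are purely illustrative, and I'm assuming a mistaken kill saves no one): let q be the fraction of apparent 'kill 1 to save 10' situations that are genuine. The expected net lives per decision to kill is

\[ q\,(10 - 1) - (1 - q) = 10q - 1, \]

which is negative whenever q < 1/10. At q = 0.05 the policy costs half a life per use, and 95% of the people killed die in vain - which is the sense in which a small modelling error gets amplified along the chain of reasoning.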
I don't know how models can be structured so as to facilitate propagation of the model's error, but it seems to be necessary for arguments based on models to be convincing.