All of InhalingExhaler's Comments + Replies

Well, it sounds right. But what rationality mistake was made in the situation described, and how could it be avoided? My first idea was that there are things we shouldn't doubt... But that is kind of dogmatic and feels wrong. So maybe it should be something like "Before doubting X, think about what you will become if you succeed, and take that into consideration before actually trying to doubt X". But this still implies "There are cases when you shouldn't doubt", which is still suspicious and doesn't sound "rational". I mean, it doesn't sound like making the map reflect the territory.

Richard_Kennaway
It's like repairing the foundations of a building. You can't uproot all of them, but you can uproot any of them, as long as you take care that the building doesn't fall down during renovations.

Hello.

I found LessWrong after reading HPMoR. I think I woke up as a rationalist when I realised that in my everyday reasoning I always judged from the bottom line without considering any third alternatives, and started thinking about what to do about that. I am currently trying to stop my mind from aimlessly and uselessly wandering from one topic to another. I registered on LessWrong after I started to question why I believe rationality works, ran into a problem, and thought I could get some help here. The problem is expressed in the follow...

Richard_Kennaway
Welcome to Less Wrong! My short answer to the conundrum is that if the first thing your tool does is destroy itself, the tool is defective. That doesn't make "rationality" defective, any more than crashing your first attempt at building a car implies that "The Car" is defective.

Designing foundations for human intelligence is rather like designing foundations for artificial (general) intelligence in this respect. (I don't know if you've looked at The Sequences yet, but they contain a lot of material on the common fallacies the latter enterprise has often fallen into, fallacies that apply to everyday thinking as well.)

That people, on the whole, do not go crazy — at least, not as crazy as the tool that blows itself up as soon as you turn it on — is a proof by example that not going crazy is possible. If your hypothetical system of thought immediately goes crazy, the design is wrong. The idea is to do better at thinking than the general run of what we can see around us. Again, we have a proof by example that this is possible: some people do think better than the general run.
CCC
As soon as the Dark Matrix Lords can (and do) directly edit your perceptions, you've lost (unless they're complete idiots about it). They'll simply ensure that you cannot perceive any inconsistencies in the world, and then there's no way to tell whether or not your perceptions are, in fact, being edited. The best thing you could do is find a different proof and hope that the Dark Lords' perception-altering abilities only ever affected a single proof.

At this point, John has to ask himself - why? Why does it matter what is true and what is not? Is there a simple and straightforward test for truth? As it turns out, there is: a true theory, in the absence of an antagonist who deliberately messes with things, will allow you to make accurate predictions about the world. I assume that John cares about making accurate predictions, because making accurate predictions is a prerequisite to being able to put any sort of plan in motion.

Therefore, what I think John should do is come up with a number of alternative ideas on how to predict probabilities - as many as he wants - and test them against Bayesian reasoning. Whichever allows him to make the most accurate predictions will be the most correct method. (John should also take care not to bias his trials in favour of situations - like tossing a coin 100 times - in which Bayesian reasoning might be particularly good compared to other methods.)
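As a minimal sketch of the kind of test CCC describes (not anything from the original comment), the Python below pits a simple Bayesian predictor - Laplace's rule of succession with a uniform prior - against a naive always-50% rule on simulated flips of a biased coin, scoring both with the logarithmic scoring rule. The coin's bias, the choice of prior, and the scoring rule are all illustrative assumptions.

```python
import random
import math

random.seed(0)

TRUE_P = 0.6    # hidden bias of the coin (unknown to both predictors)
N_FLIPS = 100

def log_score(p_heads, outcome):
    """Log score of a probabilistic prediction (higher is better)."""
    p = p_heads if outcome else 1.0 - p_heads
    return math.log(p)

heads = 0
bayes_total = naive_total = 0.0

for n in range(N_FLIPS):
    # Bayesian predictor: after h heads in n flips, with a uniform
    # Beta(1, 1) prior, predict P(heads) = (h + 1) / (n + 2).
    bayes_pred = (heads + 1) / (n + 2)
    # Naive predictor: always claim the coin is fair.
    naive_pred = 0.5

    outcome = random.random() < TRUE_P   # flip the biased coin

    bayes_total += log_score(bayes_pred, outcome)
    naive_total += log_score(naive_pred, outcome)
    heads += outcome

print(f"Bayesian log score: {bayes_total:.2f}")
print(f"Naive log score:    {naive_total:.2f}")
```

The method with the higher (less negative) total made the more accurate predictions; with a genuinely biased coin, the Bayesian rule should pull ahead as evidence accumulates, which is exactly the comparison CCC suggests John run across many different prediction problems, not just coin tosses.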