stoat comments on Open thread, Jul. 25 - Jul. 31, 2016 - Less Wrong

3 Post author: MrMind 25 July 2016 07:07AM


Comments (133)


Comment author: WhySpace 27 July 2016 11:32:13PM 3 points

What are rationalist presumptions?

Others have given very practical answers, but it sounds to me like you are trying to ground your philosophy in something more concrete than practical advice, and so you might want a more ivory-tower sort of answer.

In theory, it's best not to assign anything 100% certainty, because it's impossible to update such a belief if it turns out not to be true. As a consequence, we don't really have a set of absolutely stable axioms from which to derive everything else. Even "I think therefore I am" makes certain assumptions.
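The arithmetic behind this is worth seeing once. Below is a minimal sketch (with made-up likelihood numbers) of a single Bayesian update, showing that a belief held with probability exactly 1 is frozen: no evidence, however strong, can move it.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) via Bayes' theorem."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Strong evidence against H: the observation is 99x more likely if H is false.
print(bayes_update(0.90, 0.01, 0.99))  # belief drops sharply, to about 0.083
print(bayes_update(1.00, 0.01, 0.99))  # stays exactly 1.0: unreachable by evidence
```

With a prior of 1, the `(1 - prior)` term zeroes out the alternative entirely, so the posterior is 1 no matter what the likelihoods say.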

Worse, it's mathematically provable (via Löb's Theorem) that no system of logic can prove its own validity. It's not just that we haven't found the right axioms yet; no set of axioms can ever prove that it is valid. We can't just use induction to prove that induction is valid.

I'm not aware of this being discussed on LW before, but how can anyone function without induction? We couldn't conclude that anything would happen again, just because it had worked a million times before. Why should I listen to my impulse to breathe, just because it seems like it's been a good idea the past thousand times? If induction isn't valid, then I have no reason to believe that the next breath won't kill me instead. Why should I favor certain patterns of twitching my muscles over others, without inductive reasoning? How would I even conclude that persistent patterns in the universe like "muscles" or concepts like "twitching" existed? Without induction, we'd literally have zero knowledge of anything.
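One classical way to make the "worked a million times before" reasoning precise (not mentioned above, but a standard formalization) is Laplace's rule of succession: after observing s successes in n trials, starting from a uniform prior over the unknown success rate, the probability that the next trial succeeds is (s+1)/(n+2).

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule: probability the next trial succeeds, given
    `successes` out of `trials` so far and a uniform prior on the rate."""
    return Fraction(successes + 1, trials + 2)

# A million breaths survived: the next one is very probably safe...
print(float(rule_of_succession(10**6, 10**6)))  # just under 1
# ...yet never certain. And with no data at all, induction gives only 1/2.
print(rule_of_succession(0, 0))  # 1/2
```

Note that the rule never outputs probability 1: induction yields high confidence, not certainty, which is exactly why it can keep updating.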

So, if you are looking for a fundamental rationalist presumption from which to build everything else, it's induction. Once we decide to live with that, induction lets us accept fundamental mathematical truths like 1+1=2, and build up a full metaphysics and epistemology from there. This takes a lot of bootstrapping, by improving on imperfect mathematical tools, but appears possible.

(How, you ask? By listing a bunch of theorems without explaining them, like this: We can observe that simpler theories tend to be true more often, and use induction to conclude Occam's Razor. We can then mathematically formalize this as Kolmogorov complexity. If we compute the Kolmogorov complexity of all possible hypotheses, we get Solomonoff induction, which should be the theoretically optimal set of Bayesian priors. Cruder forms of induction also give us evidence that statistics is useful, and in particular that Bayes' theorem is the optimal way of updating existing beliefs. With sufficient computing power, we could theoretically perform Bayesian updates on these universal priors, for all existing evidence, and arrive at a perfectly rational set of beliefs. Developing a practical way of approximating this is left as an exercise for the reader.)
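As a toy illustration of that chain (the hypotheses, description lengths, and likelihoods below are entirely made up, and this is nothing like a real Solomonoff inductor): give each hypothesis a prior of 2^-complexity, then do one Bayesian update on how well it predicts the data.

```python
# (hypothesis name, description length in bits, P(data | hypothesis))
hypotheses = [
    ("constant",  3, 0.10),
    ("linear",    8, 0.60),
    ("cubic",    20, 0.90),   # fits best, but is far more complex
]

# Occam prior: simpler descriptions get exponentially more weight.
priors = {name: 2.0 ** -bits for name, bits, _ in hypotheses}

# One Bayesian update on the data, then normalize.
unnormalized = {name: priors[name] * lik for name, bits, lik in hypotheses}
total = sum(unnormalized.values())
posteriors = {name: p / total for name, p in unnormalized.items()}

for name, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{name:9s} {p:.3f}")
```

The "cubic" hypothesis fits the data best but is penalized so heavily by the complexity prior that the simplest hypothesis still dominates, which is the Occam behavior described above.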

No one is really happy about having to take induction as a leap of faith, but it appears to be the smallest possible assumption that allows for the development of a coherent and broadly practical philosophy. If it turns out there was a mistake in all the proofs of Löb's theorem, and there is a system of logic that can prove its own validity after all, I'm sure everyone would jump on it. But induction is the best we have.

Comment author: Arielgenesis 28 July 2016 04:02:51AM 0 points

This, and your links on Löb's theorem, is one of the most fear-inducing pieces of writing that I have ever read. Now I want to know whether I have understood it properly. I find the best way to do that is to first explain my understanding to myself, and then to other people. My explanation is below:

I had supposed that rationalists would take some simple, intuitive, and obvious presumptions as a foundation (e.g. most of the time, my sensory organs reflect the world accurately). But apparently, rationality rests its foundation on a very specific kind of statement, the most powerful, wild, and dangerous of them all: the self-referential statement:

* Rationalists presume Occam's razor because it proves itself.
* Rationalists presume induction because it proves itself.
* etc.

And a collection of these self-referential statements (if you collect the right elements) would reinforce one another. Upon this collection, the whole field of rationality is built.

To the best of my understanding, this train of thought is nearly identical to the presuppositionalist school of Reformed Christian apologetics.

The Reformed / Presbyterian understanding of the Judeo-Christian God (from here on simply referred to as God) is that God is a self-referential entity, owing to their interpretation of the famous Tetragrammaton. They believe that God is true for many reasons, but chief among them is that it attests itself to be the truth.

Now I am not making any statement about rationality or presuppositionalism, but it seems to me that there is a logical veil that we cannot get to the bottom of, and it is called self-reference.

The best that we can do is to find a non-contradicting collection of self-referential statements that covers epistemology and axiology, and by that point, everyone is rational.

Comment author: stoat 28 July 2016 06:55:41PM 1 point

Eliezer ruminates on foundations and wrestles with the difficulties quite a bit in the Metaethics sequence, for example:

Comment author: Arielgenesis 29 September 2016 04:19:51AM 0 points

Thank you. This reply actually answers the first part of my question.

The 'working' presuppositions include:

* Induction
* Occam's razor

I will quote the most important part from Fundamental Doubts:

So, in the end, I think we must allow the use of brains to think about thinking; and the use of evolved brains to think about evolution; and the use of inductive brains to think about induction; and the use of brains with an Occam prior to think about whether the universe appears to be simple; for these things we really cannot unwind entirely, even when we have reason to distrust them. Strange loops through the meta level, I think, are not the same as circular logic.

And this has a lot of similarities with my previous conclusion (with significant differences regarding circular logic and meta-level loops):

a non-contradicting collection of self-referential statements that covers epistemology and axiology