Riothamus comments on Open thread, Jul. 25 - Jul. 31, 2016 - Less Wrong

3 Post author: MrMind 25 July 2016 07:07AM




Comment author: Arielgenesis 27 July 2016 04:14:00AM 2 points [-]

What are rationalist presumptions?

I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/).

Epistemic rationality

I suppose we do presume things, like that we are not dreaming, not under a global and permanent illusion by a demon, not a brain in a vat, not in a Truman show, not in a matrix. And that, sufficiently often, you mean what I think you mean. I am wondering if there is a list of things that rationalists presume and take for granted without further proof. Is there anything that is self-evident?

Instrumental rationality

Sometimes a value can derive from another value (e.g., I do not value monarchy because I hold the value that all men are created equal). But then either we have circular values, or we take some value to be self-evident ("We hold these truths to be self-evident, that all men are created equal"). I think circular values make no sense. So my question is: what are the values that most rationalists agree to be intrinsically valuable, self-evident, or presumed valuable in and of themselves?

Comment author: Riothamus 27 July 2016 02:40:58PM 0 points [-]

Effectiveness is desirable; effectiveness is measured by results; consistency and verifiability are how we measure what is real.

As a corollary, things that have no evidence do not merit belief. We needn't presume that we are not in a simulation; we can evaluate the evidence for it.

The central perspective shift is recognizing that beliefs are not assertions about reality, but assertions about our knowledge of reality. This is what is meant by the map and the territory.

Comment author: TheAncientGeek 02 August 2016 03:05:40PM 0 points [-]

As a corollary, things that have no evidence do not merit belief.

Does evidence have to be direct evidence? Or can something like inference to the best explanation be included?

We needn't presume that we are not in a simulation, we can evaluate the evidence for it.

That is exactly the sort of situation where direct evidence is useless.

Comment author: Arielgenesis 28 July 2016 06:21:01AM 0 points [-]

We needn't presume that we are not in a simulation, we can evaluate the evidence for it.

How do we not fall into the rabbit hole of finding evidence that we are not in a simulation?

Comment author: Riothamus 28 July 2016 05:24:57PM 1 point [-]

There is a LessWrong wiki entry for just this problem: https://wiki.lesswrong.com/wiki/Simulation_argument

The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.

Understanding that beliefs are our knowledge of reality rather than reality itself has some very interesting effects. The first is that our beliefs do not have to take the form of singular conclusions, such as we are or are not in a simulation; instead our belief can take the form of a system of conclusions, with confidence distributed among them. The second is the notion of paying rent, which is super handy for setting priorities. In summary, if it does not yield a new expectation, it probably does not merit consideration.
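The idea of holding a system of conclusions with confidence distributed among them, rather than a single verdict, can be sketched as a Bayesian update. This is my own illustration, not from the thread; the hypothesis names and likelihood values are hypothetical.

```python
# A minimal sketch of distributing confidence over several conclusions
# and updating it with evidence via Bayes' rule.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities given priors and P(evidence | hypothesis)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Confidence spread over two conclusions rather than a single verdict.
priors = {"simulation": 0.1, "base_reality": 0.9}

# Suppose some observation is equally likely under both hypotheses:
likelihoods = {"simulation": 0.5, "base_reality": 0.5}

posterior = bayes_update(priors, likelihoods)
# Evidence that fails to discriminate between hypotheses leaves the
# distribution essentially unchanged -- it "pays no rent" in the sense
# above, yielding no new expectation.
```

Only observations with different likelihoods under the competing hypotheses shift the distribution, which is one way to read the "paying rent" criterion.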

If this does not seem sufficiently coherent, consider that you are allowed to be inconsistent, and also that you are engaging with rationality early in its development.

Comment author: TheAncientGeek 02 August 2016 03:08:31PM *  0 points [-]

The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.

If inference to the best explanation is included, we can't do that. We can know when we have exhausted all the prima facie evidence, but we can't know when we have exhausted every possible explanation for it. What you haven't thought of yet, you haven't thought of. Compare with the problem of knowingly arriving at the final and perfect theory of physics.

Comment author: Riothamus 09 August 2016 06:47:54PM 0 points [-]

This is a useful bit of clarification, and timely.

Would that change if there was a mechanism for describing the criteria for the best explanation?

For example, could we show from a body of evidence that an explanation achieves minimum entropy, so that even if there are other explanations, they are at best equivalent?

Comment author: TheAncientGeek 16 August 2016 11:52:01AM *  0 points [-]

Equivalent in what sense? The fact that you can have equivalently predictive theories with different ontological implications is a large part of the problem.

Another part is that you don't have exhaustive knowledge of all possible theories. Being able to algorithmically check how good a theory is would be a tall order, but even if you had such a check, it would not be able to tell you that you had hit the best possible theory, only the best out of the N fed into it.
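The best-of-N limitation can be made concrete with a toy scorer. This sketch is my own illustration (the data and candidate theories are hypothetical): a selection procedure ranks only the candidates it is given, so it can never certify a global optimum.

```python
# A toy illustration of selecting the best theory from a fixed candidate
# pool: the result is only the best of those N, not the best possible.

def best_theory(theories, data):
    """Pick the candidate with the fewest prediction errors on the data."""
    def errors(theory):
        return sum(1 for x, y in data if theory(x) != y)
    return min(theories, key=errors)

data = [(1, 2), (2, 4), (3, 6)]   # observations (x, y)
candidates = [
    lambda x: x + 1,              # theory A: fits one observation
    lambda x: 2 * x,              # theory B: fits all three
]

winner = best_theory(candidates, data)
# 'winner' is merely the best of the candidates considered; a superior
# theory absent from the list remains invisible to the procedure.
```

Equivalently predictive theories would tie under this score, which echoes the point above that predictive equivalence does not settle ontological questions.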

Comment author: Riothamus 18 August 2016 01:55:22PM 0 points [-]

Let me try to restate, to be sure I have understood correctly:

We cannot stop once we have exhausted the evidence because explanations of equal predictive power have different ontological implications, and these implications must be accounted for in determining the best explanation. Further, we don't have a way to exclude other ontological implications we have not considered.

Question: why don't the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?

Comment author: TheAncientGeek 25 August 2016 09:53:20AM *  1 point [-]

Question: why don't the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?

Maybe they can[*], but it is not exactly a good thing...if you stick to one method of analysis, you will be in an echo chamber.

[*] An example might be the way reality looks mathematical to physics, which some people are willing to take fairly literally.

Comment author: Riothamus 25 August 2016 02:37:03PM 0 points [-]

Echo chamber implies getting the same information back.

It would be more accurate to say we will inevitably reach a local maximum. Awareness of the ontological implications should be a useful tool in helping us recognize when we are there and which way to go next.

Without pursuing the analysis to its maximal conclusions, how can we distinguish the merits of different ontologies?