Arielgenesis comments on Open thread, Jul. 25 - Jul. 31, 2016 - Less Wrong

Post author: MrMind 25 July 2016 07:07AM


Comment author: Arielgenesis 28 July 2016 06:21:01AM 0 points [-]

We needn't presume that we are not in a simulation; we can evaluate the evidence for it.

How do we not fall into the rabbit hole of finding evidence that we are not in a simulation?

Comment author: Riothamus 28 July 2016 05:24:57PM 1 point [-]

There is a LessWrong wiki entry for just this problem: https://wiki.lesswrong.com/wiki/Simulation_argument

The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.

Understanding that beliefs are our knowledge of reality rather than reality itself has some very interesting effects. The first is that our beliefs do not have to take the form of singular conclusions, such as we are or are not in a simulation; instead our belief can take the form of a system of conclusions, with confidence distributed among them. The second is the notion of paying rent, which is super handy for setting priorities. In summary, if it does not yield a new expectation, it probably does not merit consideration.
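Riothamus's two points — confidence distributed over a system of conclusions, and "paying rent" in anticipated experience — can be made concrete with a small Bayesian sketch. The hypotheses and numbers below are purely illustrative, not anything from the thread:

```python
# Belief as a probability distribution over hypotheses rather than a
# single conclusion. All numbers are made up for illustration.
priors = {"simulation": 0.1, "base_reality": 0.9}

# Likelihood of some observation E under each hypothesis (assumed equal
# here: E is evidence that "pays no rent", since both hypotheses
# predict it equally well).
likelihood = {"simulation": 0.5, "base_reality": 0.5}

# Bayes' rule: posterior is proportional to prior * likelihood.
unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

print(posterior)
```

Because the observation is equally likely under both hypotheses, the posterior equals the prior: evidence that yields no differing expectation leaves beliefs exactly where they were, which is the "paying rent" criterion in miniature.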

If this does not seem sufficiently coherent, consider that you are allowed to be inconsistent, and also that you are engaging with rationality early in its development.

Comment author: TheAncientGeek 02 August 2016 03:08:31PM *  0 points [-]

The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.

If inference to the best explanation is included, we can't do that. We can know when we have exhausted all the prima facie evidence, but we can't know when we have exhausted every possible explanation for it. What you haven't thought of yet, you haven't thought of. Compare with the problem of knowingly arriving at the final and perfect theory of physics.

Comment author: Riothamus 09 August 2016 06:47:54PM 0 points [-]

This is a useful bit of clarification, and timely.

Would that change if there were a mechanism for specifying the criteria for the best explanation?

For example, could we show that an explanation achieves the minimum entropy for a body of evidence, and therefore that even if there are other explanations, they are at best equivalent?

Comment author: TheAncientGeek 16 August 2016 11:52:01AM *  0 points [-]

Equivalent in what sense? The fact that you can have equivalently predictive theories with different ontological implications is a large part of the problem.

Another part is that you don't have exhaustive knowledge of all possible theories. A way of algorithmically checking how good a theory is would be a tall order, but even if you had one, it would not be able to tell you that you had hit the best possible theory, only the best out of the N theories fed into it.
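The "best of N, not best possible" point is easy to state in code. The scoring function below is a hypothetical stand-in for any goodness-of-theory measure (here, a crude simplicity preference), and the candidate theories are invented for illustration:

```python
# A ranking procedure over candidate theories certifies only "best of
# these N", never "best possible". The score function is a toy
# stand-in for any measure of how good a theory is.

def score(theory: str) -> float:
    # Hypothetical measure: shorter descriptions score higher.
    return -len(theory)

candidates = ["theory with many epicycles", "simple theory"]
best = max(candidates, key=score)
# "simple theory" wins here, but a better theory absent from
# `candidates` (one nobody has thought of yet) is never examined,
# so the argmax says nothing about it.
```

However good the scoring rule, the search ranges only over the theories supplied to it, which is exactly the gap TheAncientGeek is pointing at.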

Comment author: Riothamus 18 August 2016 01:55:22PM 0 points [-]

Let me try to restate, to be sure I have understood correctly:

We cannot stop once we have exhausted the evidence because explanations of equal predictive power have different ontological implications, and these implications must be accounted for in determining the best explanation. Further, we don't have a way to exclude other ontological implications we have not considered.

Question: why don't the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?

Comment author: TheAncientGeek 25 August 2016 09:53:20AM *  1 point [-]

Question: why don't the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?

Maybe they can[*], but it is not exactly a good thing...if you stick to one method of analysis, you will be in an echo chamber.

[*] An example might be the way reality looks mathematical to physics, which some people are willing to take fairly literally.

Comment author: Riothamus 25 August 2016 02:37:03PM 0 points [-]

Echo chamber implies getting the same information back.

It would be more accurate to say we will inevitably reach a local maximum. Awareness of the ontological implications should be a useful tool in helping us recognize when we are there and which way to go next.

Without pursuing the analysis to its maximal conclusions, how can we distinguish the merits of different ontologies?

Comment author: TheAncientGeek 28 August 2016 12:08:42PM *  0 points [-]

Without having a way of ranging across ontologyspace, how can we distinguish the merits of different ontologies? But we don't have such a way. In its absence, we can pursue an ontology to the point of breakdown, whereupon we have no clear path onwards. It can also be a slow process: it took centuries for scholastic philosophers to reach that point with the Aristotelian framework.

Alternatively, if an ontology works, that is no proof that it is the best possible ontology, or the final answer — again because of the impossibility of crawling across ontologyspace.

Comment author: Riothamus 29 August 2016 02:29:02PM 0 points [-]

This sounds strongly like we have no grounds for considering ontology at all when determining what the best possible explanation is.

  1. We cannot qualitatively distinguish between ontologies, except through the other qualities we were already examining.
  2. We don't have a way of searching for new ontologies.

So it looks like all we have done is go from "best possible explanation" to "best available explanation", where some superior explanation occupies an almost-zero share of our probability distribution.
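That almost-zero share can be represented explicitly by reserving a small "catch-all" bucket for explanations not yet thought of — a standard Bayesian device, sketched here with invented numbers:

```python
# Sketch: credence distributed over available explanations, plus an
# explicit near-zero (but nonzero) bucket for explanations we have
# not thought of. All values are illustrative.
beliefs = {
    "best_available_explanation": 0.90,
    "known_rival": 0.0999,
    "something_not_yet_thought_of": 0.0001,  # almost zero, never zero
}

# A well-formed credence distribution sums to 1.
assert abs(sum(beliefs.values()) - 1.0) < 1e-9
```

Keeping that bucket strictly positive is what leaves the "best available" explanation revisable: a genuinely new explanation can still claim probability mass when someone finally thinks of it.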