So am I correct in inferring that this program looks for any mathematical correlations in the data, and returns the simplest and most consistent ones?
The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.
If inference to the best explanation is included, we can't do that. We can know when we have exhausted all the prima facie evidence, but we can't know when we have exhausted every possible explanation for it. What you haven't thought of yet, you haven't thought of. Compare with the problem of knowingly arriving at the final and perfect theory of physics.
This is a useful bit of clarification, and timely.
Would that change if there was a mechanism for describing the criteria for the best explanation?
For example, could we show from a body of evidence the minimum entropy, and therefore even if there are other explanations they are at best equivalent?
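One way to make "minimum entropy" concrete is minimum description length (MDL), where the best explanation is the one with the shortest total code: bits to state the model plus bits to encode the evidence under it. The sketch below is a toy illustration of that reading, not a claim about what the original program does; all numbers and the two-model setup are hypothetical.

```python
# Toy minimum-description-length comparison of two candidate explanations.
# Assumption: "minimum entropy" is read as "shortest total code length",
# a common MDL formalization of "simplest and most consistent".
import math

def description_length(model_bits, likelihoods):
    """Total code length: bits to state the model plus bits to encode
    the data under the model (-log2 of each observation's probability)."""
    data_bits = -sum(math.log2(p) for p in likelihoods)
    return model_bits + data_bits

# Hypothetical example: model A is simpler but fits the evidence slightly worse.
obs_probs_A = [0.4, 0.5, 0.45]   # probability model A assigns each observation
obs_probs_B = [0.5, 0.6, 0.55]   # model B fits better...
dl_A = description_length(model_bits=10, likelihoods=obs_probs_A)
dl_B = description_length(model_bits=25, likelihoods=obs_probs_B)  # ...but costs more to state

# The shorter total description wins by this criterion; on the same body of
# evidence, any rival explanation can at best tie that lower bound.
best = "A" if dl_A < dl_B else "B"
```

Note this only ranks the explanations you actually feed in, which is exactly the limitation raised later in the thread.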
We needn't presume that we are not in a simulation, we can evaluate the evidence for it.
How do we not fall into the rabbit hole of finding evidence that we are not in a simulation?
There is a LessWrong wiki entry for just this problem: https://wiki.lesswrong.com/wiki/Simulation_argument
The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.
Understanding that beliefs are our knowledge of reality rather than reality itself has some very interesting effects. The first is that our beliefs do not have to take the form of singular conclusions, such as we are or are not in a simulation; instead our belief can take the form of a system of conclusions, with confidence distributed among them. The second is the notion of paying rent, which is super handy for setting priorities. In summary, if it does not yield a new expectation, it probably does not merit consideration.
If this does not seem sufficiently coherent, consider that you are allowed to be inconsistent, and also that you are engaging with rationality early in its development.
What are rationalist presumptions?
I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/).
Epistemic rationality
I suppose we do presume things: that we are not dreaming, not under a global and permanent illusion by a demon, not a brain in a vat, not in a Truman show, not in a matrix. And, sufficiently frequently, that you mean what I think you meant. I am wondering if there is a list of things that rationalists presume and take for granted without further proof. Is there anything that is self-evident?
Instrumental rationality
Sometimes a value can derive from another value (e.g. I do not value monarchy because I hold the value that all men are created equal). But then either we have circular values or we take some value to be evident ("We hold these truths to be self-evident, that all men are created equal"). I think circular values make no sense. So my question is: what are the values that most rationalists agree to be intrinsically valuable, self-evident, or valuable in and of themselves?
Effectiveness is desirable; effectiveness is measured by results; consistency and verifiability are how we measure what is real.
As a corollary, things that have no evidence do not merit belief. We needn't presume that we are not in a simulation, we can evaluate the evidence for it.
The central perspective shift is recognizing that beliefs are not assertions about reality, but assertions about our knowledge of reality. This is what is meant by the map and the territory.
Is there a procedure in Bayesian inference to determine how much new information in the future invalidates your model?
Say I have some kind of time-series data, and I make an inference from it up to the current time. If the data is costly to get in the future, would I have a way of determining when cost of increasing error exceeds the cost of getting the new data and updating my inference?
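The question describes a value-of-information calculation: keep forecasting from the stale model while the expected cost of error stays below the cost of new data, and buy data once it crosses over. A minimal sketch of that stopping rule, assuming for illustration that forecast variance grows linearly with time since the last update (a random-walk-style assumption; all numbers are made up):

```python
# Sketch of the decision rule described above: update the model only once
# the expected cost of model error exceeds the cost of acquiring new data.
# Assumption: error cost is proportional to forecast variance, which grows
# linearly with steps since the last update. Numbers are hypothetical.

def should_update(steps_since_update, error_cost_per_unit_var,
                  var_growth_per_step, data_cost):
    """True when expected error cost since the last update exceeds data cost."""
    expected_error_cost = (error_cost_per_unit_var
                           * var_growth_per_step
                           * steps_since_update)
    return expected_error_cost > data_cost

# With these made-up costs, the crossover happens at step 6:
horizon = next(t for t in range(1, 100)
               if should_update(t, error_cost_per_unit_var=2.0,
                                var_growth_per_step=1.0, data_cost=10.0))
```

In a full Bayesian treatment the error-growth term would come from the posterior predictive variance of your actual model rather than a fixed linear rate, but the break-even comparison is the same.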
The examples you give are strategies employed by organizations trying to deny all knowledge outside of the initiated.
I think most of the organisations I'm talking about don't have a binary initiate/non-initiate criterion whereby the initiated get access to all knowledge. As people learn more, they get access to more knowledge. Most Scientologists haven't heard of Xenu. At least that was the case 10 years ago.
If the knowledge is being transmitted outside of the workshops, how do we persuade the suppliants to self-initiate?
LW dojos are one way for knowledge to be transmitted outside of workshops. I also think that alumni are generally encouraged to explain the knowledge to other people. Peer-to-peer instruction has a natural filter that reduces completely passive consumption.
That doesn't mean it's inherently impossible to transmit knowledge via writing, but it is hard.
Agreed. The more I consider the problem, the higher my confidence that investing enough energy in the process is a bad investment for them.
Another romantic solution waiting for the appropriate problem. I should look into detaching from the idea.
You referred to historical techniques that were used. Generally, historical groups actually have defenses against lay people accessing knowledge, even if those lay people think they are experts and should be able to access it.
Whether it's sworn secrecy, hiding knowledge in plain sight or simple lies to mislead uninitiated readers, there's a huge toolbox.
I assume transmission is inevitable; given that, segregating the information into lower-error chunks seems like a viable strategy.
Presumably CFAR thinks that their workshop is a low error chunk of consuming their material.
I should amend my assumption to uncontrolled transmission is inevitable. The strategy so far has been to use the workshops, and otherwise decline to distribute the knowledge.
The historical example should be considered in light of what the goals are. The examples you give are strategies employed by organizations trying to deny all knowledge outside of the initiated. Enforcing secrecy and spreading bad information are viable for that goal. CFAR is not trying to deny the knowledge, only to maximize its fidelity. What is the strategy they can use to maximize fidelity in cases where they did not choose to transmit it (like this one)?
Suppose we model everyone who practices state-of-the-art rationality as an initiate, and everyone who wants to read about CFAR's teachings as a suppliant. If the knowledge is being transmitted outside of the workshops, how do we persuade the suppliants to self-initiate? Imposing some sort of barrier, so that it requires effort to access the knowledge - I suggest by dividing the knowledge up, thus modelling the mysteries. We would want the divided content to be such that people who won't practice it disengage rather than consume it all passively.
If CFAR were to provide the content, even in this format, I expect the incentive of people to produce posts like the above would be reduced, likewise for the incentive of people to read such collections.
In retrospect, I should have made it explicit I was assuming everyone involved was a (potential) insider at the beginning.
Do you know what the historical techniques happen to be?
Let's take Maimonides, whose behavior is well described by Leo Strauss. There's a law in the Torah against teaching the secrets of the Torah outside of one-to-one teaching. If Leo Strauss is to be believed, Maimonides purposefully wrote wrong things to mislead naive readers and keep advanced knowledge from them.
If CFAR were to write purposefully misleading things in its public material to keep advanced knowledge from naive readers, that would produce problems.
the easily misunderstood term should never be used in official communication of any sort.
In the time of the internet, don't use words publicly that you wouldn't use in official communication.
You have just described the same thing Duncan cited as a concern, only substituted a different motive; I am having trouble coming to grips with the purpose of the example as a result.
I propose that the method of organizing knowledge be considered. The goal is not to minimize the information, but to minimize the errors in its transmission. I assume transmission is inevitable; given that, segregating the information into lower-error chunks seems like a viable strategy.
basic practices must come before advanced practices
We aren't at a point yet where we distinguish "basic" from "advanced" practices. Most of what CFAR teaches fits into a 4-day workshop. CFAR doesn't try to teach anything that takes a year to understand.
The idea that basics are somehow easy to understand also misunderstands a lot about what learning deep knowledge is like. Basics are hard because they are fundamentals and affect everything built on them.
In salsa dancing there was a saying: "At congresses, beginners take the intermediate classes, intermediates take the advanced classes, and the advanced people take the beginners' classes."
Today I was at my meditation/movement class, and the teacher (with ~15 years in the method and likely well over 10,000 hours of meditation) was saying that she still fails to have a good grasp on the basics of rhythm, and that it eludes her.
We aren't at a point yet where we distinguish "basic" from "advanced" practices.
This is a good point; I have assumed that there would eventually be a hierarchy of sorts established. I was allowing for instruction being developed (whether by CFAR or someone else) even down below the levels that are usually assumed in-community. When Duncan says,
Picture throwing out a complete text version of our current best practices, exposing it to the forces of memetic selection and evolution.
I interpret this to mean even by people who have no experience of thinking-about-thinking at all. As you aptly point out, the fundamentals are very hard - there may be demand for just such materials from future advanced rationalists for exactly that reason. So what I suggest is that the components of the instruction be segregated while retaining clear structure, and in this way minimize the skimming and corruption problems.
That being said, I fully endorse the priority choices CFAR has made thus far, and I do not share the (apparent) intensity of Duncan's concern. I therefore understand if even evaluating whether this is a problem is a low priority.
Equivalent in what sense? The fact that you can have equivalently predictive theories with different ontological implications is a large part of the problem.
Another part is that you don't have exhaustive knowledge of all possible theories. Being able to algorithmically check how good a theory is would be a tall order, but even if you had such an algorithm, it would not be able to tell you that you had hit the best possible theory, only the best out of the N fed into it.
Let me try to restate, to be sure I have understood correctly:
We cannot stop once we have exhausted the evidence because explanations of equal predictive power have different ontological implications, and these implications must be accounted for in determining the best explanation. Further, we don't have a way to exclude other ontological implications we have not considered.
Question: why don't the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?