Comment author: buybuydandavis 27 July 2016 12:29:03PM 0 points [-]

The search was so thorough that there could never be any new evidence about what Adam had done before he was taken into custody.

Belief in absolute, dogmatic claims on the lack of evidentiary value of possible future observations leads to unfalsifiable conclusions.

Eve is irrational to conclude that, because she cannot conceive of a possible future observation that would change her mind, it is impossible for such an observation to occur.

As an aside, I believe you can make a more sciency argument with recent cosmological theories. There is something about a future state of the universe in which all our current evidence for the Big Bang would cease to be observable, and all we could observe would be our own galaxy.

Comment author: Arielgenesis 28 July 2016 04:45:23AM 0 points [-]

you can make a more sciency argument with recent cosmological theories

Yes, I could. I chose not to. It is a balance between suspension of disbelief and narrative simplicity. Moreover, I am not sure how much credence I should put in recent cosmological theories, or whether they will be updated in the future, making my narrative setup obsolete. I also do not want to burden my readers with having to be familiar with cosmological theories.

Comment author: WhySpace 27 July 2016 11:32:13PM *  3 points [-]

What are rationalist presumptions?

Others have given very practical answers, but it sounds to me like you are trying to ground your philosophy in something more concrete than practical advice, and so you might want a more ivory-tower sort of answer.

In theory, it's best not to assign anything 100% certainty, because it's impossible to update such a belief if it turns out not to be true. As a consequence, we don't really have a set of absolutely stable axioms from which to derive everything else. Even "I think therefore I am" makes certain assumptions.

Worse, it's mathematically provable (via Löb's Theorem) that no system of logic can prove its own validity. It's not just that we haven't found the right axioms yet; it's that it is impossible in principle for any set of axioms to prove its own validity. We can't just use induction to prove that induction is valid.

I'm not aware of this being discussed on LW before, but how can anyone function without induction? We couldn't conclude that anything would happen again, just because it had worked a million times before. Why should I listen to my impulse to breathe, just because it seems like it's been a good idea the past thousand times? If induction isn't valid, then I have no reason to believe that the next breath won't kill me instead. Why should I favor certain patterns of twitching my muscles over others, without inductive reasoning? How would I even conclude that persistent patterns in the universe like "muscles" or concepts like "twitching" existed? Without induction, we'd literally have zero knowledge of anything.

So, if you are looking for a fundamental rationalist presumption from which to build everything else, it's induction. Once we decide to live with that, induction lets us accept fundamental mathematical truths like 1+1=2, and build up a full metaphysics and epistemology from there. This takes a lot of bootstrapping, by improving on imperfect mathematical tools, but appears possible.

(How, you ask? By listing a bunch of theorems without explaining them, like this: We can observe that simpler theories tend to be true more often, and use induction to conclude Occam's Razor. We can then mathematically formalize this into Kolmogorov complexity. If we compute the Kolmogorov complexity of all possible hypotheses, we get Solomonoff induction, which should be the theoretically optimal set of Bayesian priors. Cruder forms of induction also give us evidence that statistics is useful, and in particular that Bayes' theorem is the optimal way of updating existing beliefs. With sufficient computing power, we could theoretically perform Bayesian updates on these universal priors, for all existing evidence, and arrive at a perfectly rational set of beliefs. Developing a practical way of approximating this is left as an exercise for the reader.)
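For illustration only: the chain above can be caricatured in a few lines of code. Real Solomonoff induction is uncomputable; this toy just assigns each hypothesis a made-up description length in bits, takes a prior proportional to 2^-bits (the Occam penalty), and performs a single Bayesian update on some observed data:

```python
# Toy 'Occam prior': hypotheses with hypothetical description lengths (in bits).
# The names and bit counts are illustrative, not drawn from any real formalism.
hypotheses = {
    "always_heads": 5,       # short program
    "fair_coin": 6,
    "complex_pattern": 40,   # long, contrived program
}

# Likelihood of observing ten heads in a row under each hypothesis.
likelihood = {
    "always_heads": 1.0,
    "fair_coin": 0.5 ** 10,
    "complex_pattern": 1.0,  # also predicts ten heads, but is far more complex
}

# Prior falls off exponentially with description length, then one Bayes update.
prior = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
unnorm = {h: prior[h] * likelihood[h] for h in hypotheses}
total = sum(unnorm.values())
posterior = {h: w / total for h, w in unnorm.items()}

# The short program that fits the data dominates the long one that fits equally well.
best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))
```

Note the key behavior: "complex_pattern" predicts the data just as well as "always_heads", but its 2^-40 prior means it ends up with negligible posterior weight, which is Occam's Razor made quantitative.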

No one is really very happy about having to take induction as a leap of faith, but it appears to be the smallest possible assumption that allows for the development of a coherent and broadly practical philosophy. We're making a baseless assumption, but it's the smallest possible one, and if it turned out there was a mistake in all the proofs of Löb's theorem and there were a system of logic that could prove its own validity, I'm sure everyone would jump on that. But induction is the best we have.

Comment author: Arielgenesis 28 July 2016 04:02:51AM 0 points [-]

This, and your links to Löb's theorem, is one of the most fear-inducing pieces of writing that I have ever read. Now I want to know whether I have understood it properly. I find that the best way to do that is to first explain what I understand to myself, and then to other people. My explanation is below:

I supposed that rationalists would have some simple, intuitive and obvious presumptions as a foundation (e.g. most of the time, my sensory organs reflect the world accurately). But apparently, rationality puts its foundation on a very specific kind of statement, the most powerful, wild and dangerous of them all: the self-referential statement:

* Rationalists presume Occam's razor because it proves itself
* Rationalists presume induction because it proves itself
* etc.

And a collection of these self-referential statements (if you collect the right elements) would reinforce one another. Upon this collection, the whole field of rationality is built.

To the best of my understanding, this train of thought is nearly identical to the Presuppositionalism school of Reformed Christian Apologetics.

The Reformed / Presbyterian understanding of the Judeo-Christian God (from here on simply referred to as God) is that God is a self-referential entity, owing to their interpretation of the famous Tetragrammaton. They believe that God is true for many reasons, but chief among them is that He attests Himself to be the truth.

Now, I am not making any statement about rationality or presuppositionalism, but it seems to me that there is a logical veil that we cannot get to the bottom of, and it is called self-reference.

The best that we can do is to get a non-contradictory collection of self-referential statements that covers epistemology and axiology, and at that point, everyone is rational.

Comment author: Arielgenesis 27 July 2016 04:14:00AM 2 points [-]

What are rationalist presumptions?

I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/)

Epistemic rationality

I suppose we do presume things, like that we are not dreaming / under a global and permanent illusion by a demon / a brain in a vat / in a Truman Show / in a Matrix. And that, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalists presume and take for granted without further proof. Is there anything that is self-evident?

Instrumental rationality

Sometimes a value can derive from another value (e.g. I do not value monarchy because I hold the value that all men are created equal). But either we have circular values or we take some value to be evident ("We hold these truths to be self-evident, that all men are created equal"). I think circular values make no sense. So my question is: what are the values that most rationalists agree to be intrinsically valuable, or self-evident, or that could be presumed to be valuable in and of themselves?

Comment author: Pimgd 26 July 2016 09:36:07AM *  1 point [-]

because producing new evidence is not possible anymore.

Okay...

So, say it turns out that, well, Eve is irrational. Somehow.

Now what? Do we go "neener-neener" at her? What's the point? What's the use that you could get out of labeling this behavior irrational?

Suppose Adam dies and is cryo-frozen. During Eve's life, there will be no resuscitation of Adam. Sometime afterward, however, Omega will arrive, deem the problem interesting and simulate Adam via really really really advanced technology.

Turns out he didn't do it.

Is she now rational because, well, turns out she was right after all? Well, no, because getting the right answer for the wrong reasons is not the rational way to go about things (in general, it might help in specific cases if you need to get the answer right but don't care how).

....

Actually, let me just skip over a few paragraphs I was going to write and skip to the end.

You cannot have 100% confidence. Because then your belief is set in stone and it cannot change. You can have a googolplex of nines if you want, but not 100% confidence.

Fallacy of argument from probability (if it can happen then it must happen) aside: how is it rational to discard a belief you are holding on shaky evidence just because you think, with near-absolute certainty, that no more evidence will arrive, ever? What will you do when there is more evidence? (Hint: Meeting Adam's mother at the funeral and hearing childhood stories about what a nice kid he was is more evidence for his character, albeit very weak evidence - and so are studies showing that certain demographics of the time period Adam lived in had certain characteristics.) You gotta update! (I don't think the fallacy I mentioned applies; if it does, we can fix it with big numbers; if you are to hold this belief everywhere, then... the probabilities go up as it turns from "in this situation" to "in at least one of all these situations".)

So to toss a belief aside because you think there will be no more evidence is the wrong action to me. You can park a belief. That is to take no action. Maintain status quo. No change in input is no change in output. But you do NOT clear the belief.

Let me put up a strawman - I'll leave it up to others to see if there's something harder underneath - if you hold this action - "I think there will be no more evidence, and I am not very confident either way, so I will discard the output" to be the rightful action to take, how do you prevent yourself from getting boiled like a frog in a pan (yes, that's a false story - still, I intend the metaphorical meaning: how do you stop yourself from discarding every bit of evidence that comes your way, because you "know" there to be no more evidence?)

In my opinion, to do as you say weakens or even destroys the gradual "update" mechanism. This leads to less effective beliefs, and thus is irrational.


Were we to now look at the 3 questions, I'd answer..

Again, Eve is irrational because she says it cannot be falsified. If we let Eve say "I still think he didn't do it because of his character, and I will keep believing this until I see evidence to the contrary - and if such evidence doesn't exist, I will keep believing this forever" - then yes, Eve is rational.

The second question, yes via this specific example. Here it can, thus it can.

Yes, it can be extended to belief in God. Provided we restrict "God" to a REALLY TINY thing. As in, gee, a couple thousand years ago, something truly fantastic happened - it was God! I saw it with my own eyes! You can keep believing there was, at that point in time, an entity causing this fantastic thing. Until you get other evidence, which may never happen. What you CANNOT do is say, "hey, maybe this 'God' that caused this one fantastic thing is also responsible for creating the universe and making my neighbor win the lottery and my aunt get cancer and ..." That's unloading a huge complexity on an earlier belief without paying appropriate penalties.

You don't only need evidence that the fantastical events were caused, you also need evidence they were caused by the same thing if you wish to attribute them to that same thing.

Comment author: Arielgenesis 27 July 2016 03:24:57AM 0 points [-]

Thank you for the reply.

My personal answer to the 3 questions is yes to all three. But I am not confident of my own reasoning; that's why I'm here, looking for confirmation. So, thank you for the confirmation.

If we let Eve say "I still think he didn't do it because of his character, and I will keep believing this until I see evidence to the contrary - and if such evidence doesn't exist, I will keep believing this forever" - then yes, Eve is rational

That is exactly what I meant her to say. I just thought I could simplify it, but apparently I lost important points along the way.

Yes, it can be extended to belief in God. Provided we restrict "God" to a REALLY TINY thing.

I am a theist, but I am appalled by the lack of rational apologetics, the abundance of poor ones, and the lack of interest in developing good ones. So here I am, making baby steps.

Comment author: MrMind 26 July 2016 07:28:24AM *  3 points [-]

In a Bayesian framework, the one and only way to make a belief unfalsifiable is to put its probability at 1.
Indeed, Bayesian updating is at root about logic, not about physics: even if you have no technological means whatsoever to recover a piece of evidence, and never will, if it is logically possible to falsify a theory, then it is falsifiable.
On the other hand, once a belief acquires a probability of 1, it is set to true in the model and no amount of later evidence can change this status.
Unfortunately for your example, this means that unfalsifiability and lack of evidence, even an extreme one, are orthogonal concerns.
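That fixed point at probability 1 can be made concrete with a two-line Bayes update (a minimal sketch; the numbers are illustrative):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from prior P(H) and the two likelihoods, by Bayes' theorem."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# Strong evidence against H (100x more likely if H is false)
# moves an ordinary prior substantially:
print(bayes_update(0.9, 0.01, 1.0))   # drops to roughly 0.08

# ...but a prior of exactly 1 is a fixed point: no evidence can ever move it.
print(bayes_update(1.0, 0.01, 1.0))   # stays exactly 1.0
```

The second call shows the unfalsifiability: with prior = 1, the (1 - prior) term vanishes, so the likelihoods cancel out of the formula entirely.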

Comment author: Arielgenesis 27 July 2016 03:10:34AM *  0 points [-]

unfalsifiability and lack of evidence, even an extreme one, are orthogonal concerns.

That is a very novel concept for me. I understand what you are trying to say, but I am struggling to see whether it is true.

Can you give me a few examples where something is "physically unfalsifiable" but "logically falsifiable" and the distinction is of great import?

Comment author: Dagon 25 July 2016 09:20:11PM 1 point [-]

I'm still deeply troubled by the focus on labels "rational" and now "Bayesian", rather than "winning", "predicting", or "correct".

For epistemic rationality, focus on truth rather than rationality: do these beliefs map to actual contingent states of the universe? Especially for human-granularity beliefs, Bayesian reasoning is really difficult, because it's unlikely for you to know your priors in any precise way.

For instrumental rationality, focus on decisions: are the actions I'm taking based on these beliefs likely to improve my future experiences?

Comment author: Arielgenesis 27 July 2016 03:07:05AM *  0 points [-]

human-granularity

I don't understand what it means, even after a Google search, so please enlighten me.

For epistemic rationality

I think so. I think she has exhausted all the possible avenues to reach the truth. So she is epistemically rational. Do you agree?

For instrumental rationality

Now this is confusing to me as well. Let us forget about the extension for the moment and focus solely on the narrative as presented in the OP. I am not familiar with how value and rationality go together, but I think there is nothing wrong if her value is "Adam's innocence" and it is inherently valuable, an end in itself. Am I making any mistake in my train of thought?

Comment author: Lumifer 25 July 2016 07:29:49PM 2 points [-]

I am trying to make a situation where a belief is (1) unfalsified, (2) unfalsifiable, and (3) has a lack of evidence.

Would Russell's teapot qualify? If you want to make it unfalsifiable, you can move it to another galaxy and specify that the statement is true in a narrow time frame, say, for the next five minutes.

Comment author: Arielgenesis 27 July 2016 02:36:04AM 0 points [-]

Would Russell's teapot qualify

Yes, exactly! The issue with that is its irrelevance. It is of no great import to anyone (except the teapot church, which I think is a bad satire of religion; the amount of suspension of disbelief the narrative requires is beyond me). On the other hand, Adam's innocence is relevant, meaningful and important to Eve (I hope this is obvious from the narrative).

Moreover, since people are presumed innocent until proven guilty in the eyes of many legal systems, the burden-of-proof argument from Russell's teapot is not applicable here.

In this twist on Russell's teapot, I think it is rational for Eve to maintain her belief, that her belief is relevant, and that the burden of proof is not upon her. And by extension, this argument could be used by theists. But I know that my reasoning is not impeccable, so here I am at Less Wrong.

Comment author: g_pepper 25 July 2016 06:54:14PM 1 point [-]

The idea of the story is that there is no evidence.

But in the OP, you said:

she has known Adam very well and the Adam that she knew, wouldn't commit murder. She uses Adam's character and her personal relationship with him as evidence.

It seems to me that Adam's character as observed by Eve is evidence. Not irrefutable evidence, but evidence all the same. It seems to me that, barring evidence of Adam's guilt or evidence that Adam's character had recently changed, Eve is rational for believing Adam to be innocent on the basis of that evidence.

Cain provided no such evidence, so Eve is rational in her belief.

Comment author: Arielgenesis 27 July 2016 02:21:52AM 1 point [-]

is evidence. Not irrefutable evidence

Yes, that's exactly what I had in mind.

The idea of the story is that there is no evidence.

What I meant was that there is no possibility of new evidence.

I also think that Eve is rational. But I'm not sure if I am correct. Thank you for the confirmation.

Comment author: MrMind 25 July 2016 07:22:08AM 9 points [-]

The claim that Adam committed a crime is not unfalsifiable, it's simply unfalsified. There's just not enough probability weight for her to change her mind; she even admitted that with strong enough evidence she would change her mind.
Eve is being rational in retaining her current prior in the absence of evidence: it's not that she is assigning probability 0 to Adam being the killer, it's just that in the face of uncertainty there's no reason to update.
On the other hand I don't see how you could do this to uphold the belief in God: absence of evidence is evidence of absence.
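That last line is a theorem of probability, not just a slogan: if observing a piece of evidence E would raise P(H), then failing to observe E must lower it. A quick numerical check, with illustrative numbers:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h, observed):
    """P(H | E) if the evidence was observed, else P(H | not-E), via Bayes' theorem."""
    if observed:
        num = p_e_given_h * prior
        den = num + p_e_given_not_h * (1 - prior)
    else:
        num = (1 - p_e_given_h) * prior
        den = num + (1 - p_e_given_not_h) * (1 - prior)
    return num / den

prior = 0.5
# If H were true we'd expect to see the evidence 80% of the time; 30% otherwise.
up = posterior(prior, 0.8, 0.3, observed=True)
down = posterior(prior, 0.8, 0.3, observed=False)
assert up > prior > down  # seeing E raises P(H); absence of E lowers it
```

The asymmetry in strength is also visible here: absence of evidence is usually weaker evidence of absence than presence is of presence, but it is never zero unless P(E|H) equals P(E|not-H).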

Comment author: Arielgenesis 25 July 2016 05:41:39PM 0 points [-]

not unfalsifiable, it's simply unfalsfied

I am trying to make a situation where a belief is (1) unfalsified, (2) unfalsifiable, and (3) has a lack of evidence. How should I change the story so that all 3 conditions are fulfilled? And in that case, would Eve then be irrational?

Comment author: Pimgd 25 July 2016 10:26:39AM 1 point [-]

Eve is irrational. But that's because she has suddenly forgotten her earlier statements. "show me the video recording, then I would believe". If there is evidence, then the belief could be falsified. That's what Eve should have said.

Comment author: Arielgenesis 25 July 2016 05:29:30PM 0 points [-]

The idea of the story is that there is no evidence. Because I think, in real life, sometimes, there are important and relevant things with no evidence. In this case, Adam's innocence is important and relevant to Eve (for emotional and social reasons, I presume), but there is no, and there will never be, evidence. Given that, saying "If there is evidence, then the belief could be falsified" is a kind of cheating, because producing new evidence is not possible anymore.
