All of Arielgenesis's Comments + Replies

A second language might still be necessary for the cognitive development effect.

Given the current status quo, it is impossible. However, I can imagine the political world developing into an atmosphere where Esperanto might be made the lingua franca. Imagine that American and British power continues to decline, and Russia, China, Germany, and maybe India become more influential, leading to a new status quo, a stalemate. Given a sufficiently long stalemate, lasting decades, Esperanto might once again become a politically viable option.

1gjm
Well, anything's possible. But I'm struggling to imagine a halfway-plausible scenario in which this actually happens. In the situation you describe, what's the actual mechanism by which Esperanto becomes widely used? I mean, let's say we have a bunch of roughly equal Great Powers (perhaps they're the Trump States, the Islamic Caliphate, the United States of Europe, China and Russia, with favoured languages The Best English, Arabic, German, Mandarin and Russian). Within each power's sphere of influence its favoured language (or languages) will be dominant. So now imagine someone in, say, the Trump States. Obviously they need to know The Best English. They might want to learn Spanish in case their military service is at the Wall; or Russian, of course. But what's going to make Esperanto more useful to them than those? Are you thinking that Esperanto might be imposed as a lingua franca? That there'd be some sort of international treaty where all these mutually-mistrustful Powers agree that they will use Esperanto as a second language, or for negotiations, or something? Why would any of them do that?

Are people here interested in having a universal language, and do they have strong opinions on Esperanto?

2Viliam
I speak Esperanto fluently, and I really wish it could replace English as a standard communication language. But I see it as a coordination problem that is almost impossible to solve. Learning English as an international language seems like an insane waste of resources. Why not use a language you could learn 10x faster? But the trick is that the costs are not the same for everyone. Specifically, for native English speakers, Esperanto would be more costly than simply using the language they already speak fluently. And because the international language is chosen by the people who have the most economic power, of course their preferences are going to have a greater impact. (And the same thing would happen if e.g. 20 years later English were replaced by Chinese. Then again, everyone except the Chinese would have a reason to prefer Esperanto, but the Chinese wouldn't care, so the rest of the world would have to learn Chinese.) Even a hypothetical situation where e.g. the four languages with the most economic power were perfectly balanced wouldn't necessarily mean that people would adopt Esperanto (or any other neutral language). Most speakers of these four languages would have little to gain by learning another language, so they wouldn't bother. And for the speakers of smaller languages it would be more profitable to learn one of the four languages (the specific choice depending on their geographical and political situation). Essentially, most people don't even want to communicate internationally. They mostly learn a foreign language if they believe it will help their careers. Which usually means they learn the language of an economically more powerful group. But that means that the other side doesn't have an incentive to learn a foreign language. The few hobbyists don't have enough purchasing power to matter on the large scale. It would have to be a completely fragmented world, where almost every city would speak a different language, that would create a strong need for a neutral l
6gjm
I think it might be good to have a universal language, but I think it's vanishingly unlikely that Esperanto or any other deliberately manufactured language will become one. The way languages get (anything like) universal is by being widely used, and the way languages get widely used is by being widely useful. I don't see any plausible way for something like Esperanto to achieve that. English might become a universal language. Maybe, depending on how the world goes over the next few decades, Chinese or Russian or something. But it won't be Esperanto. Pretty much everyone whose knowledge of Esperanto would make learning Esperanto valuable already speaks English.

I just thought of this 'cute' question and I am not sure how to answer it.

The sample space of an empirical statement is {True, False}. Given an empirical statement, one would then assign a certain prior probability 0 < p < 1 to TRUE and one minus that to FALSE. One would not assign p = 1 or p = 0 because that wouldn't allow belief updating.

For example: Santa Claus is real.

I suppose most people on LW will assign a very small p to that statement, but not zero. Now my question is, what is the prior probability value for the following statement:

Prior probability cannot be set to 1.
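(As a minimal sketch of the updating mechanics described above; the `update` function and all likelihood numbers are invented for illustration, not taken from the original comments:)

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(True | evidence) by Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# "Santa Claus is real": a very small but deliberately nonzero prior.
p_santa = 1e-6
# Some surprising pro-Santa evidence arrives (made-up likelihoods).
p_santa = update(p_santa, p_evidence_if_true=0.9, p_evidence_if_false=0.1)
print(p_santa)  # ~9e-6: a prior strictly between 0 and 1 can still move
```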

3ChristianKl
"Prior probability cannot be set to 1" is itself not an empirical statement. It's a question about modelling.
3Gram_Stone
Actual numbers are never easy to come up with in situations like these, but some of the uncertainty is in whether or not priors of zero or one are bad, and some of it's in the logical consequences of Bayes' Theorem with priors of zero or one. The first component doesn't seem especially different from other kinds of moral uncertainty, and the second component doesn't seem especially different from other kinds of uncertainty about intuitively obvious mathematical facts, like that described in How to Convince Me That 2 + 2 = 3.

Thank you. This reply actually answers the first part of my question.

The 'working' presuppositions include:

  • Induction
  • Occam's razor

I will quote the most important part from Fundamental Doubts:

So, in the end, I think we must allow the use of brains to think about thinking; and the use of evolved brains to think about evolution; and the use of inductive brains to think about induction; and the use of brains with an Occam prior to think about whether the universe appears to be simple; for these things we really cannot unwind entirely, even when we have reason

... (read more)

I will have to copy-paste my answer to your other comment:

Yes I could. I chose not to. It is a balance between suspension of disbelief and narrative simplicity. Moreover, I am not sure how much credence I should put in recent cosmological theories, that they will not be updated in the future, making my narrative setup obsolete. I also do not want to burden my reader with needing familiarity with cosmological theories.

Am I not allowed to use such a narrative technique to simplify my story and deliver my point? Yes, I know it is out of touch with the human condition, but I was hoping it would not strain my audience's suspension of disbelief.

2buybuydandavis
The problem is that the unrealistic simplification acts precisely on the factor you're trying to analyze - falsifiability. If you relax the unrealistic assumption, the point you're trying to make about falsifiability no longer holds.

genuine marital relationship

"If Adam is guilty, then the relationship was not genuine." Am I on the right track? or did I misunderstood your question?

0Dagon
That just moves it up a level. If she is rational, she'll say "if our relationship was genuine, I want to believe it was genuine. If our relationship was not genuine, I want to believe it was not genuine". The OP and most of the discussion have missed the fundamental premise of rationality: truth-seeking. The question is not "is Eve rational", but "is Eve's belief (including acknowledgement of uncertainty) correct"?

Why are you a theist?

This is very poorly formulated. But there are 2 foundations in my logic. The first is that I am leaning towards presuppositionalism (https://en.wikipedia.org/wiki/Presuppositional_apologetics). The only way to build a 'map', first of all, is to take a list of presuppositions for granted. I am also interested in that (see my post on http://lesswrong.com/lw/nsm/open_thread_jul_25_jul_31_2016/). The idea is that a school could have a non-contradicting collection of self-referential statements that covers the epistemology and axiology and a... (read more)

We needn't presume that we are not in a simulation, we can evaluate the evidence for it.

How do we not fall into the rabbit hole of finding evidence that we are not in a simulation?

1Riothamus
There is a LessWrong wiki entry for just this problem: https://wiki.lesswrong.com/wiki/Simulation_argument The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop. Understanding that beliefs are our knowledge of reality rather than reality itself has some very interesting effects. The first is that our beliefs do not have to take the form of singular conclusions, such as we are or are not in a simulation; instead our belief can take the form of a system of conclusions, with confidence distributed among them. The second is the notion of paying rent, which is super handy for setting priorities. In summary, if it does not yield a new expectation, it probably does not merit consideration. If this does not seem sufficiently coherent, consider that you are allowed to be inconsistent, and also that you are engaging with rationality early in its development.

why does she want to be correct (beyond "I like being right")?

I think that's it. "I like knowing that the person I love is innocent." Which implies that Adam is not lying to her, and "I like being in a healthy, fulfilling and genuine marital relationship".

0Dagon
That's a reason to want him to be innocent, not a reason to want to know the truth. What's her motivation for the necessary second part of the litany: "if Adam is guilty, I want to believe that Adam is guilty"?

I see... I have been using unfalsifiability and lack of evidence as synonyms. The title should have read: a rational belief without evidence

Thank You.

1MrMind
That's a difficult one to achieve. Rationality is about how to process evidence to change one's prior; it has very little to say about what belief you start with, besides the fact that it must be expressible in classical logic. To complicate the matter, Bayesian evidence works in such a way that if you classify something as evidence, then its absence will lower the probability of the assertion it is supporting. To have a belief that is both rational and unsupported, you must start with a model that is at the same time compatible with background information, whose support is difficult to obtain, and which is a better fit than competing models, which might even have easier-to-obtain evidence. A tough challenge!
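(For reference, the point that absence of evidence must lower the probability follows from the law of total probability; this is a standard identity, not part of the original comment:)

```latex
% Conservation of expected evidence: the prior equals the expectation
% of the posterior over the possible observations.
P(A) = P(A \mid E)\,P(E) + P(A \mid \neg E)\,P(\neg E)
% So if P(A | E) > P(A) (observing E supports A), it must be that
% P(A | not E) < P(A): the absence of the evidence lowers P(A).
```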

God is a messy concept. As a theist, I am leaning more towards Calvinistic Christianity. Defining God is very problematic because, by definition, it is something which, in its fullness, is beyond human comprehension.

Could you clarify?

Since ancient times, there have been many arguments for and against God (and the many versions of it). Lately, the arguments against God have developed to a very sophisticated extent, and the theists are lagging very far behind; there doesn't seem to be any interest in catching up.

1[anonymous]
It is a very interesting quest you have taken on. As an atheist, I am always interested in hearing good arguments in favour of God. Why don't you start by answering: Why are you a theist? You have looked at all the evidence available to you, and arrived at a posterior where P(God exists) >> P(God does not exist). Explain your reasoning to us. If your reasoning is good enough for you, why would it not be good enough for me?
0Pimgd
Which is why I use labels such as "an entity" which may or may not be "omniscient" or "omnipotent". You can describe God in terms of labels: if I had a car, and had to describe it, I could say parts of it were made from leather, parts of it were made from metals, parts of it were made from rubber; looking at it gives a grey sensation, but there is also red and white and black... If God really can do anything and everything, then everything is evidence for and evidence against God, and you have 0 reason to update on any of the beliefs surrounding God. Which is, once again, why you don't tie 100% probability to things. That includes statements of the nature "God caused this".

Well... That's part of the story. I'm sure there is a term for it, but I don't know what it is. Something that the story gives and you accept as fact.

1buybuydandavis
That kind of knowledge is not part of the human condition. By making it a presupposition of your story, you render your hypothetical inapplicable to actual human life.

you can make a more sciency argument with recent cosmological theories

Yes I could. I chose not to. It is a balance between suspension of disbelief and narrative simplicity. Moreover, I am not sure how much credence I should put in recent cosmological theories, that they will not be updated in the future, making my narrative setup obsolete. I also do not want to burden my reader with needing familiarity with cosmological theories.

This, and your links to Löb's theorem, is one of the most fear-inducing pieces of writing that I have ever read. Now I want to know if I have understood this properly. I find that the best way to do that is to first explain what I understand to myself, and then to other people. My explanation is below:

I supposed that rationalists would have some simple, intuitive and obvious presumptions as a foundation (e.g. most of the time, my sensory organs reflect the world accurately). But apparently, it puts its foundation on a very specific set of statements, the most power... (read more)

1ChristianKl
That assumes that a rational person is one who holds beliefs because of a chain of logic. Empirically, superforecasters don't simply try to follow a chain of logic to get their beliefs. A rational person in the LW sense thus is not one who holds beliefs because of a chain of logic. Tetlock's book gives a good account of how to form beliefs about the likelihood that beliefs are true.
4WhySpace_duplicate0.9261692129075527
Very close, but not quite. (Or, at least not quite my understanding. I haven't dug too deep.)

A reply to Presuppositionalism

I wouldn't say that we should presume anything because it proves itself. Emotionally, we may have a general impulse to accept things because of evidence, and so it is natural to accept induction using inductive reasoning. So, that's likely why the vast majority of people actually accept some form of induction. However, this is not self-consistent, according to Löb's theorem. We must either accept induction without being able to make a principled argument for doing so, or we must reject it, also without a principled reason. So, Presuppositionalism appears to be logically false, according to Löb's theorem. I could leave it at that, but it's bad form to fight a straw man, and not the strongest possible form of an argument. The steel man of Presuppositionalism might instead take certain propositions as a matter of faith, and make no attempt to prove them. One might then build much more complex philosophies on top of those assumptions.

Brief detour

Before I reply to that, let me back up for a moment. I Agree Denotationally But Object Connotationally with most of the rest of what you said above. (It seems to me to be technically true, but phrased in such a way that it would be natural to draw false inferences from it.) If I had merely posited that induction was valid, I suspect it wouldn't have been disconcerting, even if I didn't offer any explanation as to why we should start there and not at "I am not dreaming" or any of the examples you listed. You were happy to accept some starting place, so long as it felt reasonable. All I did was add a little rigor to the concept of a starting point. However, by additionally pointing out the problems with asserting anything from scratch, I've weakened my own case, albeit for the larger goal of epistemic rationality. But since all useful philosophies must be based in something, they also can't prove t
1stoat
Eliezer ruminates on foundations and wrestles with the difficulties quite a bit in the Metaethics sequence, for example:

  • Where Recursive Justification Hits Bottom
  • Fundamental Doubts

What are rationalist presumptions?

I am new to rationality and Bayesian ways of thinking. I am reading the Sequences, but I have a few questions along the way. These questions are from the first article (http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/).

Epistemic rationality

I suppose we do presume things, like that we are not dreaming/under a global and permanent illusion by a demon/a brain in a vat/in a Truman Show/in a matrix. And, sufficiently frequently, you mean what I think you meant. I am wondering if there is a list of things that rationalis... (read more)

0ChristianKl
No, if you look at our yearly census you'll find that it includes a question about the probability that we are living in a simulation. If my memory is right, most people don't presume that this probability is zero, but enter numbers different from zero.
2Bound_up
Okay, I don't know why everyone is making this so complicated. In theory, nothing is presupposed. We aren't certain of anything and never will be. In practice, if induction works for you (it will) then use it! Once it's just a question of practicality, try anything you like, and use what works. It won't let you be certain, but it'll let you move with power within the world. As for values, morals, your question suggests you might be interested in A Thousand Shards of Desire in the sequences. We value what we do, with lots of similarities to each other, because evolution designed our psychology that way. Evolution is messy and uncoordinated. We ended up with a lump of half random values not at all coherent. So, we don't look for, or recommend looking for, any One Great Guiding Principle of morality; there probably isn't one. We just care about life and fairness and happiness and fun and freedom and stuff like anyone else. Lots of lw people get a lot of mileage out of consequentialism, utilitarianism, and particularly preference utilitarianism. But these are not presumed. Morality is, more or less, just a pile of things that humans value. You don't HAVE to prove it to get people to try to be happy or to like freedom (all else equal). If I've erred here, I would much like to know. I puzzled over these questions myself and thought I understood them.

Rationalists often presume that it is possible to do much better than average by applying a small amount of optimization power. This is true in many domains, but can get you in trouble in certain places (see: the valley of bad rationality).

Rationalists often fail to compartmentalize, even when it would be highly useful.

Rationalists are often overconfident (see: SSC calibration questions) but believe they are well calibrated (bias blind spot, also just knowing about a bias is not enough to unbias you)

Rationalists don't even lift bro.

Rationalists often fail ... (read more)

4WhySpace_duplicate0.9261692129075527
Others have given very practical answers, but it sounds to me like you are trying to ground your philosophy in something more concrete than practical advice, and so you might want a more ivory-tower sort of answer. In theory, it's best not to assign anything 100% certainty, because it's impossible to update such a belief if it turns out not to be true. As a consequence, we don't really have a set of absolutely stable axioms from which to derive everything else. Even "I think therefore I am" makes certain assumptions. Worse, it's mathematically provable (via Löb's Theorem) that no system of logic can prove its own validity. It's not just that we haven't found the right axioms yet; it's that it is physically impossible for any axioms to be able to prove that they are valid. We can't just use induction to prove that induction is valid. I'm not aware of this being discussed on LW before, but how can anyone function without induction? We couldn't conclude that anything would happen again, just because it had worked a million times before. Why should I listen to my impulse to breathe, just because it seems like it's been a good idea the past thousand times? If induction isn't valid, then I have no reason to believe that the next breath won't kill me instead. Why should I favor certain patterns of twitching my muscles over others, without inductive reasoning? How would I even conclude that persistent patterns in the universe like "muscles" or concepts like "twitching" existed? Without induction, we'd literally have zero knowledge of anything. So, if you are looking for a fundamental rationalist presumption from which to build everything else, it's induction. Once we decide to live with that, induction lets us accept fundamental mathematical truths like 1+1=2, and build up a full metaphysics and epistemology from there. This takes a lot of bootstrapping, by improving on imperfect mathematical tools, but appears possible. (How, you ask? By listing a bunch of theorems w
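(For reference, the standard modal-logic statement of the Löb's Theorem mentioned above, where \Box P reads "P is provable"; this formulation is standard, not quoted from the comment:)

```latex
% Löb's Theorem: if a theory proves that provability of P implies P,
% then it already proves P outright.
\Box(\Box P \rightarrow P) \rightarrow \Box P
% Consequence: a consistent system cannot prove its own soundness
% schema (\Box P \rightarrow P for every P) without proving every P.
```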
1Gyrodiot
Hi Arielgenesis, and welcome! From a rationalist perspective, taking things for granted is both dangerous and extremely useful. We want to preserve our ability to change our minds about things in the right direction (closer to truth) whenever the opportunity arises. That being said, we cannot afford to doubt everything, as updating our beliefs takes time and resources. So there are things we take for granted. Most mathematics, physics, the basic laws and phenomena of Science in general. Those are ideally backed by the scientific method, whose axioms are grounded in building a useful model of the world (see Making Beliefs Pay Rent (in Anticipated Experiences)). From my rationalist perspective, then, there are no self-evident things, but there are obvious things, considered evident by the overwhelming weight of available... evidence. Regarding values... it's a tough problem. I personally find that all preconceptions I had about universally shared values are shattered one by one the more I study them. For more information on this, I shall redirect you to complexity of value.
0Riothamus
Effectiveness is desirable; effectiveness is measured by results; consistency and verifiability are how we measure what is real. As a corollary, things that have no evidence do not merit belief. We needn't presume that we are not in a simulation; we can evaluate the evidence for it. The central perspective shift is recognizing that beliefs are not assertions about reality, but assertions about our knowledge of reality. This is what is meant by the map and the territory.

Thank you for the reply.

My personal answer to the 3 questions is yes to all three. But I am not confident of my own reasoning; that's why I'm here, looking for confirmation. So, thank you for the confirmation.

If we let Eve say "I still think he didn't do it because of his character, and I will keep believing this until I see evidence to the contrary - and if such evidence doesn't exist, I will keep believing this forever" - then yes, Eve is rational

That is exactly what I meant her to say. I just thought I could simplify it, but apparently I lose importa... (read more)

1Pimgd
The point is that these days... and I think in the days before that, AND the days before that... ... Okay, so basically since forever, "God" has been such a loaded concept... If you ask people where God is, some of them will tell you that "God is in everything and anything" (or something to that tune). Now, these people don't have to be right (or wrong!) but that's ... a rather broad definition to me. One can imagine God as an entity. Like, I dunno, a space alien from an alternative universe (don't ask how that universe was created; I don't know, this is a story and not an explanation). With super advanced technology. So if we then ask "did God create the world" and we (somehow...?) went back in time and saw that, hey, this space alien was somewhere else at the time and, no, the planet formed via other means, then you'd have a definitive answer to that question. But there are other definitions. God is the mechanics of the universe. So, what you'd call the laws of physics, no, that's just God. That's how God keeps everything going. Why, then, yes, God did create the world! But only because current scientific understanding says "we think physics did it" and then you say "Physics is God". Anyway, if you want a sane, useful, rational answer to your third question then you must define God. I personally treated God as 1 entity in my earlier answer, which leads to the problem of having to connect events to the same entity (which, when you know very little about that entity, is pretty hard). (If you didn't connect events to that same entity then something else must have caused it, in which case you have multiple probable causes for fantastic events, and you might as well call them Gods individually?)

----------------------------------------

I don't quite grasp what you mean by the last bit... Could you clarify?

unfalsifiability and lack of evidence, even an extreme one, are orthogonal concerns.

That is a very novel concept for me. I understand what you are trying to say, but I am struggling to see if it is true.

Can you give me a few examples where something is "physically unfalsifiable" but "logically falsifiable" and the distinction is of great import?

2MrMind
It's a straightforward corollary of Bayes' theorem: if P(A) = 1 (or P(A) = 0), no amount of later updating can change this value, no matter what strong contrary evidence is presented. This is indeed a simple model of a hardcore theist: he has already set P(god(s)) to 1, so he is willing to dig himself a hole of unlimited depth to account for the evidence that opposes the existence of a divinity. As for some examples, Russell's teapot is a good choice: a teapot orbiting a distant sun in another galaxy. Is it falsifiable? With our current and future technology, probably not. Is it logically falsifiable? Yes! Even if you assign a very low probability to its existence, an alien species could just transport us there and show us that there's such a teapot. On the other hand, as I mentioned earlier, if we had put P(teapot) = 0, then we will never accept the teapot's existence, even in the face of space-travelling aliens that show us that the thing is actually there.
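(A toy demonstration of this corollary, with invented likelihoods: repeated strong contrary evidence drives an extreme-but-nonzero prior toward 0, while a prior of exactly 1 never moves:)

```python
def update(prior, p_e_if_teapot, p_e_if_no_teapot):
    """One Bayesian update on a piece of anti-teapot evidence."""
    numerator = p_e_if_teapot * prior
    return numerator / (numerator + p_e_if_no_teapot * (1 - prior))

hardcore, fallibilist = 1.0, 0.999  # P(teapot) for two believers
for _ in range(10):  # aliens repeatedly show us there is no teapot
    hardcore = update(hardcore, 0.01, 0.99)
    fallibilist = update(fallibilist, 0.01, 0.99)

print(hardcore)     # 1.0 forever: unfalsifiable by construction
print(fallibilist)  # ~1e-17: extreme but nonzero priors still update
```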

human-granularity

I don't understand what it means, even after a Google search, so please enlighten me.

For epistemic rationality

I think so. I think she has exhausted all the possible avenues to reach the truth. So she is epistemically rational. Do you agree?

For instrumental rationality

Now this is confusing to me as well. Let us forget about the extension for the moment and focus solely on the narrative as presented in the OP. I am not familiar with how value and rationality go together, but I think there is nothing wrong if her value is &qu... (read more)

1Dagon
By human-granularity, I mean beliefs about macro states that can be analyzed and manipulated by human thought and expressed in reasonable amounts (say, less than a few hundred pages of text) of human language. As contrasted with pure analytic beliefs about the state of the universe expressed numerically. For instrumental rationality, what goals are furthered by her knowing the truth of this fact? Presuming that if Adam is innocent, she wants to believe that Adam is innocent and if Adam is guilty, she wants to believe Adam is guilty, why does she want to be correct (beyond "I like being right")? What decision will she make based on it?

Would Russell's teapot qualify

Yes, exactly! The issue with that is its irrelevance. It is of no great import to anyone (except the teapot church, which I think is a bad satire of religion; the amount of suspension of disbelief the narrative requires is beyond me). On the other hand, Adam's innocence is relevant, meaningful and important to Eve (I hope this is obvious from the narrative).

Moreover, since people are assumed to be innocent until proven guilty in the eyes of many laws, the burden of proof argument from Russell's teapot is not applicable... (read more)

is evidence. Not irrefutable evidence

Yes, that's exactly what I had in mind.

The idea of the story is that there is no evidence.

What I meant was that there is no possibility of new evidence.

I also think that Eve is rational. But I'm not sure if I am correct. Thank you for the confirmation.

not unfalsifiable, it's simply unfalsified

I am trying to make a situation where a belief is (1) unfalsified, (2) unfalsifiable, and (3) has a lack of evidence. How should I change the story such that all 3 conditions are fulfilled? And in that case, would Eve then be irrational?

4MrMind
In a Bayesian framework, the one and only way to make a belief unfalsifiable is to put its probability at 1. Indeed, since Bayesian updating is at root about logic and not about physics: even if you don't have any technological means whatsoever to recover evidence, and never will, if it's logically possible to falsify a theory, then it's falsifiable. On the other side, once a belief acquires a probability of 1, then it's set to true in the model and no amount of later evidence can change this status. Unfortunately for your example, it means that unfalsifiability and lack of evidence, even an extreme one, are orthogonal concerns.
2Lumifer
Would Russell's teapot qualify? If you want to make it unfalsifiable, you can move it to another galaxy and specify that the statement is true in a narrow time frame, say, for the next five minutes.

The idea of the story is that there is no evidence. Because I think, in real life, sometimes, there are important and relevant things with no evidence. In this case, Adam's innocence is important and relevant to Eve (for emotional and social reasons, I presume), but there is no, and there will never be, evidence. Given that, saying "If there is evidence, then the belief could be falsified" is a kind of cheating, because producing new evidence is not possible anymore.

0buybuydandavis
How do you claim to know that?
1Pimgd
Okay... So, say it turns out that, well, Eve is irrational. Somehow. Now what? Do we go "neener-neener" at her? What's the point? What's the use that you could get out of labeling this behavior irrational? Suppose Adam dies and is cryo-frozen. During Eve's life, there will be no resuscitation of Adam. Sometime afterward, however, Omega will arrive, deem the problem interesting and simulate Adam via really really really advanced technology. Turns out he didn't do it. Is she now rational because, well, it turns out she was right after all? Well, no, because getting the right answer for the wrong reasons is not the rational way to go about things (in general; it might help in specific cases if you need to get the answer right but don't care how). .... Actually, let me just skip over a few paragraphs I was going to write and skip to the end. You cannot have 100% confidence. Because then your belief is set in stone and it cannot change. You can have a googolplex of nines if you want, but not 100% confidence. Fallacy of argument from probability (if it can happen then it must happen) aside: how is it rational to discard a belief you are holding on shaky evidence if you think with near absolute certainty that no more evidence will arrive, ever? What will you do when there is more evidence? (Hint: Meeting Adam's mother at the funeral and hearing childhood stories about what a nice kid he was is more evidence about his character, albeit very weak evidence - and so are studies that show that certain demographics of the time period that Adam lived in had certain characteristics.) You gotta update! (I don't think that fallacy I mentioned applies; if it does, we can fix it with big numbers; if you are to hold this belief everywhere, then... the probabilities go up as it turns from "in this situation" to "in at least one of all these situations".) So to toss a belief aside because you think there will be no more evidence is the wrong action to me. You can park a belief. T
1g_pepper
But in the OP, you said: It seems to me that Adam's character as observed by Eve is evidence. Not irrefutable evidence, but evidence all the same. It seems to me that, barring evidence of Adam's guilt or evidence that Adam's character had recently changed, Eve is rational for believing Adam to be innocent on the basis of that evidence. Cain provided no such evidence, so Eve is rational in her belief.

Thank you, that was a very nice extension to the story. I should have included the scenario to make her belief relevant. I agree with you: assigning 100% probability is irrational in her case. But, if she is not rationally literate enough to express herself in a fuzzy, non-binary way, I think she would maintain rationality by saying "Ceteris paribus, I prefer not to be locked in the same room with Cain, because I believe he is a murderer, because I believe Adam was innocent" (ignoring ad hominem).

I was under the impression that the gold standard for rationality is falsifiability. However, I now understand that Eve is rational despite unfalsifiability, because she remained Bayesian.

1Dagon
I'm still deeply troubled by the focus on labels "rational" and now "Bayesian", rather than "winning", "predicting", or "correct". For epistemic rationality, focus on truth rather than rationality: do these beliefs map to actual contingent states of the universe? Especially for human-granularity beliefs, Bayesian reasoning is really difficult, because it's unlikely for you to know your priors in any precise way. For instrumental rationality, focus on decisions: are the actions I'm taking based on these beliefs likely to improve my future experiences?

What if we were to take one step back, and Adam didn't die? Eve claims that her belief pays rent because it could be falsified if Adam changed in character. In this scenario, I suppose that you would agree to say that Eve is still rational.

Now, I cannot formulate my arguments properly at the moment, but I think it is weird that Adam's death makes Eve's belief irrational, as per:

So I do not believe a spaceship blips out of existence when it crosses the cosmological horizon of our expanding universe, even though the spaceship's existence has no further expe

... (read more)
2Dagon
I think you're focusing too much on the label "rational", and not enough on the actual effect of beliefs. I'll admit I'm closer to logical positivism than is Eliezer, but even if you make the argument (which you haven't) that the model of the universe is simpler (in the Kolmogorov complexity sense) by believing Adam killed Abel, it's still not important. Unless you're making predictions and taking actions based on a belief (or on beliefs influenced by that belief), it's neither rational nor irrational, it's irrelevant. Now, in a somewhat more complicated example, where Eve has to judge Cain's likelihood of murdering her, and thinks the circumstances of the locked room in the past are relevant to her future, there are definite predictions she should be making. Her confidence in Adam's innocence implies Cain's guilt, and she should be concerned. It's still the case that she cannot possibly have enough evidence for her confidence to be 1.00.

Thank you for the link. I just read the article. It is exactly what I had in mind, but my mind works better with narratives.

What I am wondering is if a theist could use this as a foundation of their arguments and remain rational.

Thank you, that is very helpful. I wish it were said in the FAQ, or perhaps I missed it. I would have upvoted you if I could.

Hi, I have a silly question. How do I vote? It seems obvious, but I cannot see any upvote or downvote button anywhere on this page. I have tried:

  1. looking at the top of the comment. Next to OP/TS is the date, then the time, and then the points. At the far right is the 'minimize' button.
  2. looking at the bottom of the comment. I see Parent, Edit, Permalink, get notification
  3. The FAQ says: "you can vote submissions and comments up or down just like you can on Reddit", but I cannot find the vote buttons anywhere near comments or posts.
3ignoranceprior
You need at least 10 karma points to vote (you currently have 2 points, according to your profile). Once you have 10 points you should be able to see the voting buttons. Incidentally, after a troll downvoted me from 12 to 4, I lost the ability to vote, and now I can no longer see the buttons.

Post-high education LWers, do you think the place you studied at had a significant effect on your future prospects?

I went to Melbourne University and did an exchange program at UCSD, so I have a comparison. I think the distribution of the quality of teaching is sufficiently narrow that it should not play a major factor.

There are careers, like politics, where personal connections gathered during university years are very important.

Depending on the job and your part of the world, personal connections might be a very important factor in career success. It is more likely that you would gain more, and better, personal connections at a better university.

I bought a $1400 mattress in my quest for sleep, over the Internet hence much cheaper than the mattress I tried in the store, but non-returnable. When the new mattress didn’t seem to work too well once I actually tried sleeping nights on it, this was making me reluctant to spend even more money trying another mattress. I reminded myself that the $1400 was a sunk cost rather than a future consequence, and didn’t change the importance and scope of future better sleep at stake (occurring once per day and a large effect size each day).

from http://rationalit... (read more)

2gjm
That's a question of psychology, not of rationality. I don't know the answer, though my prejudices say it probably isn't a great idea. But there's another reason why you might choose not to buy another in that situation: you may think it less likely than you did before that any given other mattress will solve your sleep problems -- so now the deal you're considering isn't "$1400 for better sleep" but "$1400 for one more attempt at better sleep that may well fail like the last one did". (That was really the deal you were considering all along, but you didn't know it then.) Also, now you're $1400 poorer. If that's a sizeable fraction of your wealth then $1400 is worth more to you now than it was before and that may very reasonably affect your willingness to spend that much on a mattress. If it's not a sizeable fraction of your wealth, it may still be a sizeable fraction of your readily accessible wealth (the rest being tied up in pension schemes, investments you can't liquidate very quickly, accounts or investments you could liquidate quickly but have a policy of not doing because otherwise you'd spend too much, future earnings[1], etc.) so you may not want to spend that much again soon. [1] Of course, if you consider something like the estimated net present value of your future earnings as part of your wealth it's much less likely that $1400 is a sizeable fraction of your total wealth thus reckoned. So this may be a case where letting the sunk cost fallacy do its thing produces a better decision than trying to ignore sunk costs, if you aren't very careful about it. I suspect there are quite a lot of such situations. Cognitive biases sometimes yield better approximations to optimal rationality than simple attempts at explicit unbiased reasoning...
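(A toy version of that reasoning with invented numbers: after one failure, the estimated chance that the next mattress works drops, and so does the expected value of another $1400 attempt. The prior, the dollar value of better sleep, and the Beta model are all assumptions for illustration:)

```python
PRICE = 1400                  # cost of one mattress (from the quote)
VALUE_OF_BETTER_SLEEP = 5000  # assumed dollar value of solving the problem

# Assume a Beta(1, 1) prior over "a given mattress fixes my sleep";
# one observed failure updates it to Beta(1, 2).
p_before = 1 / (1 + 1)  # mean of Beta(1, 1): 0.50
p_after = 1 / (1 + 2)   # mean of Beta(1, 2): ~0.33

print(p_before * VALUE_OF_BETTER_SLEEP - PRICE)  # 1100.0: worth trying
print(p_after * VALUE_OF_BETTER_SLEEP - PRICE)   # ~266.7: less attractive
```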
1ChristianKl
The key question is: why do you believe that this attempt at punishing will have this effect? Is that a theory that you came up with yourself, as a person who isn't an expert in psychology and who hasn't read any of the relevant research, or have you alternatively done enough self-experiments to have empirical data about how this effect will work on you?
-1entirelyuseless
Or, they could simply update on the fact that it did not work when they expected it to, and rightly conclude that it was more likely than they realized that spending even more money would not be worthwhile.

A dollar feels more important than it actually is, so people treat the bets seriously even though they are not very serious.

Although there is some weight in the dollar, I think there is also another reason why people take it more seriously. People adjust their beliefs according to what other people believe and their confidence level. Therefore, when you propose a bet, even for only a dollar, you are showing a high confidence level, and this decreases their confidence level. As a result, System 2 kicks in and they will be [forced] to evaluate honestly.

To the best of my knowledge, the human brain is a simulation machine. It unconsciously makes predictions about what sensory input it should expect. This includes higher-level input, like language and even concepts. This is the basic mechanism underlying surprise and similar emotions. Moreover, it only simulates the things it cares about and filters out the rest.

Given this, I would think that most of your predictions are obsolete, because we are doing this unconsciously. Example:

  1. You predict you will finish the task one week early. But you ended up fi

... (read more)

I'm not sure how you define 'concept'. From what I understood, I think you might be missing these:

Feedback (https://en.wikipedia.org/wiki/Feedback): the impact of something halts its cause.

Feed forward (https://en.wikipedia.org/wiki/Feed_forward_(control)): the impact of something reinforces its cause.

Self-fulfilling prophecy (https://en.wikipedia.org/wiki/Self-fulfilling_prophecy): a prophecy that is fulfilled because the prophecy was made, usually because active agents tried to prevent the prediction from happening.

emergence https://en.wikipedia.org/wiki... (read more)

We'd love to know who you are, what you're doing: I was a high school teacher. Now I'm back at school for Honours and hopefully a PhD in science (computational modelling) in Australia. I'm Chinese-Indonesian (my grammar and spelling are a mess) and I'm a theist (leaning toward Reformed Christianity).

what you value: Whatever is valuable.

how you came to identify as an aspiring rationalist or how you found us: My friend, who is now a sister in the Franciscan order of the Roman Catholic Church, recommended Harry Potter and the Methods of Rationality to me.

I think... (read more)