In response to comment by pianoforte611 on White Lies
Comment author: hyporational 08 February 2014 05:14:57AM *  2 points [-]

But the idea of someone I trust lying whenever it's more convenient than telling the truth does not sit well with me. But perhaps I'm just being unreasonable.

I suggest you explore the concept of trust on a less binary basis. Trust makes no sense to me unless it has some kind of a rough probability estimate attached to it. Different truths have different probabilities and different moral weights.

In response to comment by hyporational on White Lies
Comment author: Carinthium 08 February 2014 05:31:26AM 2 points [-]

True, but it is also true that you can't trust somebody on certain matters if they are willing to tell you white lies. It's better to try and hang around more honest types so you can learn to cope with the truth better.

In response to White Lies
Comment author: Carinthium 08 February 2014 04:58:17AM 4 points [-]

I reject this idea for a fairly simple reason. I want to be in control of my own life and my own decisions, but due to lack of social skills I'm vulnerable to manipulation. Without a zero-tolerance policy on liars, I would rapidly be manipulated into losing what little control of my own life remains.

Comment author: fortyeridania 03 February 2014 06:25:12PM *  0 points [-]

You also fail to comprehend that probabilities have implicit axioms which must be accepted in order to accept probability.

I do not thus fail, and am aware of the specific assumptions you have in mind. I just deny that their existence implies what you say it implies.

OK. Let me try to restate your argument in terms I can better understand. Tell me if I'm getting this right.

(1) Let A = any agent and P = any proposition

(2) Define "justified belief" such that A justifiably believes P iff the following conditions hold:

  • a. P is provable from assumptions a, b, c, ... and z.

  • b. A justifiably believes every a, b, c, ... and z.

  • c. A believes P because of its proof from a, b, c, ... and z.

(3) The claim "The sun will rise tomorrow" (or insert any other claim you want to talk about instead) is not provable from assumptions that any agent could be justified in believing.

(4) Therefore, for every agent, belief in the claim "The sun will rise tomorrow" is not justified.

Is this a fair characterization of your argument? If so, I'll work from this. If not, please improve it.
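
If it helps, here is (2)-(4) in more compact notation. This is only my own restatement, and the premise symbols are placeholders rather than particular claims:

```latex
% My restatement of (2)-(4); the notation is mine, not a quotation.
% "justifiably believes" is abbreviated JB.
\mathrm{JB}(A, P) \iff \exists\, a_1,\dots,a_n\;
  \big[\, (a_1,\dots,a_n \vdash P)
  \;\wedge\; \textstyle\bigwedge_{i=1}^{n} \mathrm{JB}(A, a_i)
  \;\wedge\; A \text{ believes } P \text{ on the basis of that derivation} \,\big]
```

On that reading, (3) asserts that no such premises exist for "The sun will rise tomorrow," and (4) follows immediately.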

Comment author: Carinthium 04 February 2014 01:01:58AM 0 points [-]

Mostly right. I accept the theoretical possibility of a self-evident belief- before learning of the Evil Demon argument, for example, I considered 1+1=2 to be such a belief.

However, a circular argument is never allowable, no matter how wide the circle. Without ultimately being traceable back to self-evident beliefs (though these can be self-evident axioms of probability, at least in theory), the system doesn't have any justification.

Comment author: HoverHell 02 February 2014 09:50:12AM 0 points [-]

“Better” / “preferable” / “utility” / … is necessary for “usefulness” e.g. “usefulness of this communication” (and also for decision-making).

By “not a binary” I mean the division is not into “rational” / “non-rational”, but into “more rational” / “less rational”; where “rational” is relevant to the aforementioned “better” (with regards to efficiency of optimization and also forms of communication).

… vaguely speaking.

Comment author: Carinthium 03 February 2014 08:57:50AM -1 points [-]

On reflection, my response is that no circular argument can possibly be rational, so the question of whether rationality is binary is irrelevant. You are mostly right, though for some purposes rational/irrational is better treated as a binary.

Comment author: fortyeridania 03 February 2014 06:59:32AM 0 points [-]

my position all along has drawn a distinction between faith and rational belief

OK, but you are not using the term "rational" in what I thought was the standard way. So the only reason what you're saying seems contentious is your terminology.

You have not yet addressed much of what I've written. Automatically rejecting everything that isn't 100% proven is a poor strategy if the agent's goal is to be right as much as possible, yet it seems to be the only one you insist is rational. Is this merely because of how you're using the word "rational," or do you actually recommend "Reject everything that isn't known 100%" as a strategy to such a person? (From the rice-and-gasoline example I think I know your answer already--that you would not recommend the skeptical strategy.)

How should an agent proceed, if she wants to have as accurate a picture of reality as possible?

Comment author: Carinthium 03 February 2014 07:56:49AM 0 points [-]

You are the only one who is making assumptions without evidence and ignoring what I'm saying: that, contrary to what you think, you do not in fact know that the Earth exists, that your memories are reliable, etc., and therefore your argument, which assumes as much, falls apart.

You also fail to comprehend that probabilities have implicit axioms which must be accepted in order to accept probability. There is induction (e.g. the Sun has risen X times already, so it will probably rise again tomorrow), the Memory assumption (if my memories say I have done X, that is probabilistic evidence that I have done X), the Reality assumption (seeing something is probabilistic evidence for its existence), etc. None of these can be demonstrated- they are starting assumptions taken on faith.
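
To be concrete about the induction assumption: the standard textbook illustration is Laplace's rule of succession (my example, not something from earlier in this thread), and even it only yields a number once you have granted a uniform prior and the relevance of past risings on faith:

```latex
% Laplace's rule of succession, as an illustration of an induction assumption
% made quantitative. Assumes an unknown chance p of the sun rising on any given
% day, a uniform prior on p, and X past risings with no failures.
P(\text{rises tomorrow} \mid X \text{ past risings})
  \;=\; \int_0^1 p \cdot \frac{p^X}{\int_0^1 q^X \, dq}\, dp
  \;=\; \frac{X+1}{X+2}
```

The formula is uncontroversial as mathematics; my point is that the prior and the treatment of days as exchangeable trials are exactly the kind of starting assumptions that cannot themselves be demonstrated.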

In the real world, as I said, it depends on what the person asked for. If I believe they were implicitly asking for a faith-based answer, I would give that; if I believe they want an answer based on pure reason, I would say neither.

The truth is that an agent has no way of justifying anything it believes to be true, as any justification ultimately appeals to assumptions that cannot themselves be justified.

Comment author: ChristianKl 02 February 2014 08:17:44AM 0 points [-]

I have no belief whatsoever [...] I have already shown I know what skepticism is

Those two positions contradict each other. You can't have both. You claim at the same time to believe that you know what skepticism happens to be and that you know nothing.

Comment author: Carinthium 02 February 2014 08:26:33AM 0 points [-]

I said earlier that I believe that rationally speaking, skepticism proves itself correct and ordinary ideas of rationalism prove themselves self-refuting. However, I believe on faith (in the religious sense) that skepticism is false, and have beliefs on faith accordingly.

Therefore, I sort of believe in a double truth, but in a coherent fashion.

Comment author: fortyeridania 02 February 2014 07:06:33AM 0 points [-]

I do not think anything I wrote above depends on using probability to discuss reality.

The appeal to absence of many true beliefs is irrelevant, as you have no means to determine truth beyond skepticism.

Please elaborate. I believe it is not only relevant, but decisive.

Comment author: Carinthium 02 February 2014 08:04:55AM 0 points [-]

You believe that the world exists, that your memories are reliable, etc. You argue that a system that does not produce those conclusions is not good enough, because they are true and a system must show they are true. But how on earth do you know that? Assuming induction, the reliability of your memories, etc., in order to judge epistemic rules is circular.

You must admit it would be absurd to claim you know with certainty that the world exists; therefore you must admit you believe it exists on probability. Therefore your entire case depends on the legitimacy of probability.

Before accusing me of contradiction, remember that my position all along has drawn a distinction between faith and rational belief.

Comment author: fortyeridania 02 February 2014 06:30:43AM 0 points [-]

what is useful to believe and what is true have no necessary correlation

You seem to be referring to the distinction between instrumental and epistemic rationality. Yes, they are different things. The case I am trying to make does not depend on a conflation of the two, and works just fine if we confine ourselves to epistemic rationality, as I will attempt to show below.

OK, so I think your labeling system, which is clearly different from the one to which I am accustomed, looks like this:

rationality = a set of rules which reliably and necessarily determine truth

and

X is irrational = X does not follow rationality

If that's how you want to use the labels in this thread, fine. But it seems that an agent that believed only things that were known with infinite certainty would suffer from a severe truth deficiency. Even if such an agent managed to avoid directly accepting any falsehoods, she would fail to accept a vast number of correct beliefs. This is because much of the world is knowable--just not with absolute certainty. She would not have a very accurate picture of the world.

And this is not just because of "pragmatics"; even if the only goal is to maximize true beliefs, it makes no sense to filter out every non-provable proposition, because doing so would block too many true beliefs.

Perhaps an analogy with nutrition would be helpful. Imagine a person who refused to ingest anything that wasn't first totally proven to be nutritious. Whenever she was served anything (even if she had eaten the same thing hundreds of times before!), she had to subject it to a series of time-consuming, expensive, and painstaking tests.

Would this be a good idea, from a nutritional point of view? No. For one thing, it would take way too long--possibly forever. And secondly (and this is the aspect I'm trying to focus on), lots of nutritious things cannot be proven so. Is this bite of pasta going to be nutritious? What about the next one? And the one after that? A person who insisted on such a diet would not get many nutrients at all, because so many things would not pass the test (and because the person would spend so much time testing and so little time eating).

Now, how about a person's epistemic diet--does it make sense, from a purely epistemic perspective, for an agent to believe only what she can prove with absolute certainty? No. For one thing, it would take way too long--possibly forever. And secondly, lots of true things cannot be proven so, at least not with the kind of transcendent certainty you seem to be talking about. So an agent who insisted on such a filter would end up blocking much truth, thus "learning" a highly distorted map.

If the agent is interested in truth, she should ditch that filter and find a standard that lets her accept more correct claims about the world, even if they aren't totally proven.
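
Here is a toy calculation of the same point, with made-up numbers; it is only a sketch of the trade-off, not a model of any real agent:

```python
# Toy sketch: compare two belief policies on simulated, calibrated credences.
# All numbers are invented for illustration; nothing here models the real world.
import random

random.seed(0)

def simulate(threshold, n=100_000):
    """Accept a proposition iff the agent's credence >= threshold.
    Returns (true beliefs gained, false beliefs accepted)."""
    true_gained = false_accepted = 0
    for _ in range(n):
        credence = random.random()           # agent's (assumed calibrated) credence
        actually_true = random.random() < credence
        if credence >= threshold:            # the belief policy under test
            if actually_true:
                true_gained += 1
            else:
                false_accepted += 1
    return true_gained, false_accepted

for threshold in (1.0, 0.99, 0.9):
    t, f = simulate(threshold)
    print(f"threshold {threshold}: {t} true beliefs, {f} false beliefs")
```

With calibrated credences, the 0.9 policy does accept some falsehoods, but it ends up with vastly more true beliefs than the certainty-only policy, which accepts essentially nothing at all.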

By the way, have you read many of the Sequences? They are quite helpful and much better written than my comments. I'd say to start here. This one and this one also heavily impinge on our topic.

Comment author: Carinthium 02 February 2014 06:58:48AM 0 points [-]

This assumes what the entire thread is about- that probability is a legitimate means for discussing reality. This presumes a lot of axioms of probability, such as that if you see X it is more likely real than an illusion, and that induction is valid.

The appeal to absence of many true beliefs is irrelevant, as you have no means to determine truth beyond skepticism.

Comment author: ChristianKl 01 February 2014 07:54:52PM 0 points [-]

Applying this concept to absolutely everything is effectively what skepticism is.

But you are not applying it to everything. You have a strong belief in a Platonic ideal of rationality on which you base your concept.

Take the Buddhists, who actually don't attach themselves to mental concepts. They have sayings such as: "If you meet the Buddha on the road, kill him".

You are not willing to admit that you don't know what skepticism happens to be, because you have an attachment to it. This is exactly what Wittgenstein's sentence is about. We shouldn't talk about those concepts.

The Buddhists also don't talk in a rational sense about it. They meditate and have a bunch of koans, but they are mystics. You just don't get to be a Platonic idealist and not a mystic and still have skepticism be valid.

Comment author: Carinthium 02 February 2014 02:47:47AM 0 points [-]

Not exactly Platonic- I have no belief whatsoever, on faith or reason, in ideal forms. As for why rationalism: I believe in it because rationalist arguments in this sense can be inherently self-justifying. This comes from starting from no assumptions.

However, I then show that such rationality fails in the long run to skeptical arguments of its own sort, just as other types of rationality do. I focus on it because it is the only one with a halfway credible answer to skepticism.

I have already shown I know what skepticism is- not knowing anything whatsoever. You haven't refuted this argument, given that "I don't know" is a valid epistemic state.

Comment author: ChristianKl 28 January 2014 04:30:41PM *  -1 points [-]

I can also talk about weuisfdyhkj. It's a label. In itself not more meaningful than the label you use. You think that you know what the label means but if your brain can't simulate a reality behind the label it has no meaning. According to Wittgenstein we should therefore not speak about it.

Comment author: Carinthium 01 February 2014 03:03:09PM 0 points [-]

I think I know my answer to this- I've realised my definition of "rational" subtly differs from LessWrong's. When you see mine, you'll see this wasn't my fault.

A set of rules is rational, I would argue, if that set of rules by its very nature must correlate with reality- if one applies those rules to the evidence, they must reveal what is true. Even if skepticism is false, it is a mere coincidence that our assumptions that the world is not an illusion, that our memories are accurate, etc., happened to be correct, as we had no rational rule that would show us that they were. We do not even have a rule by which we must rationally consider it probable.

One of the rules of such rationality is that pragmatism is explicitly ruled out. Pragmatic considerations have no necessary correlation with what is actually true, therefore they should not be considered in determining what is true. The consideration of whether human beings are or are not capable of believing something is a pragmatic consideration.

You claim that skepticism is incoherent. First, this is circular, as you assume things to get to this conclusion. Second, even if you grant those assumptions, humans are capable of understanding the concept of "I don't know". Applying this concept to absolutely everything is effectively what skepticism is.
