All of Carinthium's Comments + Replies

A good question to keep in mind is how much real power the electorate has, as opposed to entrenched bureaucrats or de facto oligarchies.

Question. I admit I have a low EQ here, but I'm not sure if 4) is sarcasm or not. It would certainly make a lot of sense if "I've been glad to see in this thread that we LW's do, in fact, put our money where our mouths are when it comes to trying to navigate, circumvent, or otherwise evade the Mindkiller." were sarcasm.

I would have said we had information on 2), but I've made so many wrong predictions about Donald Trump privately that I think my private opinion has lost all credibility there. 1) makes sense.

I can see why you might be afraid ... (read more)

0Sable
I was trying to be sincere with 4), although I admit that without tone of voice and body language, that's hard to communicate sometimes. And even if LW hasn't done as good a job as we could have with this topic, from what I've seen we've done far better than just about anyone not in the rationalist community at trying to remain rational.

Glad you agree with 1); when I first heard that argument (I didn't come up with it), I had a massive moment of "that seems really obvious, now that someone said it."

With regards to 2), you're right that we do have information on Trump; I spoke without precision. What I mean is this: beliefs are informed by evidence, and we have little evidence, given the nature of the American election, of what a candidate will behave like when they aren't campaigning. I believe there's a history of presidents-elect moderating their stances once they take office, although I have no direct evidence to support myself there.

When it comes to Islam, I should begin by saying that I'm sure the vast majority of Muslims simply want to live a decent life, just like the rest of us. However, theirs is the only religion active today that currently endorses holy war. Then observe that MAD only applies to people unwilling to sacrifice their children for their cause, and further observe that Islam, as an idea, a meme, a religion, has successfully been able to make people do exactly that. An American wouldn't launch a nuke if it would kill their children, and a Russian wouldn't either. But a jihadist? From what I understand (which is admittedly not much on this topic), a jihadist just might. At least, a jihadist has a much higher probability of choosing nuclear war than a nationalist does.

I agree that the West overreacts to terrorism, in the sense that any given person is more likely to die in a car accident than be killed by a terrorist, but I was referring to existential threats, a common topic on LW and one that Yudkowsky himself seems concerned with.

I was trying to say with my second paragraph that we specifically cannot be sure about that. My first paragraph was simply my best effort at interpreting what I think hairyfigment thinks, not a statement of what I believe to be true.

From my vague recollections, I think the idea is worth looking up one way or the other. After all, a massive portion of modern culture is under the impression there are no gender differences, and there are other instances of clear, major misconceptions throughout history that I can actually attest to. But I don't have any idea about the Romans.

0Lumifer
That's the stupid portion of modern culture, and I'm not sure they actually, um, practice that belief. Here's a quick suggestion: make competitive sports sex-blind :-/ I don't think it's massive, either.

Clarification please. How do you avoid this supposed vacuity applying to basically all definitions? Taking a quick definition from a Google Search: A: "I define a cat as a small domesticated carnivorous mammal with soft fur, a short snout, and retractile claws." B: "Yes, but is that a cat?"

Which could eventually lead back to A saying that:

A: "Yes you've said all these things, but it basically comes back to the claim a cat is a cat."

0TheAncientGeek
Definitions are at best a record of usage. Usage can be broadened to include social practices such as reward and punishment. And the jails are full of people who commit theft (selfishness), rape (ditto), etc. And the medals and plaudits go to the brave (altruism), the generous (ditto), etc.

Maybe we should be abandoning the objectivity requirement as impossible. As I understand it this is in fact core to Yudkowsky's theory- an "objective" morality would be the tablet he refers to as something to ignore.

I'm not entirely on Yudkowsky's side in this. My view is that moral desires, whilst psychologically distinct from selfish desires, are not logically distinct and so the resolution to any ethical question is "What do I want?". There is the prospect of coordination through shared moral wants, but there is the prospect of coord... (read more)

0TheAncientGeek
Maybe we should also consider in parallel the question of whether objectivity is necessary. If objectivity is both necessary to morality and impossible, then nihilism results. The basic, pragmatic argument for the objectivity or quasi-objectivity of ethics is that it is connected to practices of reward and punishment, which either happen or not. The essential problem with the tablet is that it offers conclusions as a fait accompli, with no justification or argument. The point does not generalise against objective morality. If you are serious about the unselfish bit, then surely it boils down to "what do they want" or "what do we want". I don't accept the Moral Void argument, for the reasons given. Do you have another? The idea that humans are uniquely motivated by human morality isn't put forward as an answer to the amoralist challenge; it is put forward as a way of establishing something like moral objectivism.

The Open Question argument is theoretically flawed because it relies too much on definitions (see this website's articles on how definitions don't work that way, more specifically http://lesswrong.com/lw/7tz/concepts_dont_work_that_way/).

The truth is that humans have an inherent instinct towards seeing "Good" as an objective thing, that corresponds to no reality. This includes an instinct towards doing what, thanks to both instinct and culture, humans see as "good".

But although I am not a total supporter of Yudkowsky's moral theory, he... (read more)

0TheAncientGeek
True but not very interesting. The interesting question is whether the operations of intuitive black boxes can be improved on. The tablet argument is entirely misleading. I don't see what you mean by that. If the function of the ethical black box can be identified, then it can be improved on, in the way that physics improves on folk physics. Those who define terms try to resolve the problem of ethical questions by bypassing this instinct and referencing instead what humans actually want to do. This is contradictory to human instinct, hence the philosophical force of the Open Question argument, but it is the only way to have a coherent moral system. The alternative, as far as I can tell, would be that ANY coherent formulation of morality whatsoever could be countered with "Is it good?".

I think hairyfigment is of the belief that the Romans (and in the most coherent version of his claim, both male and female Romans) were under misconceptions about the nature of male and female minds, and believes that "a sufficiently deep way" would mean correcting all these misconceptions.

My view is that we really can't say that as things stand. We'd have to know a lot more about the Roman beliefs about the male and female minds, and compare them against what we know to be accurate about male and female minds.

0Lumifer
And what evidence do you have that they laboured under such major misconceptions which we successfully overcame?

On a purely theoretical level (which is fun to talk about, so I think worth talking about) I would like to see one of the high-status and respected members of the rationalist movement (Yudkowsky, Hanson, etc.) in power. They'd become corrupt eventually, but do a lot of good before they did.

On a practical level, our choices are the traditional establishment (which has shown its major flaws), backing Trump, or possibly some time in the future backing Sanders. Unless somebody here has a practical way to achieve something different, that's all we have.

(EDIT: For w... (read more)

What is this even? I don't get it.

cousin_it, if you're still paying attention- I'm curious why you think this about Eliezer.

1TheAncientGeek
Here's why Massimo Pigliucci thinks something like that: http://rationallyspeaking.blogspot.co.uk/2010/09/eliezer-yudkowsky-on-bayes-and-science.html

Major question. Where do you fit the kind of truth that comes from realising an idea is incoherent, and therefore must be wrong?

(For clarity, my view is that the whole notion of 'affective truth' is just plain wrong, but I have nothing to say on that which hasn't been already said)

Good to know, but does that research clarify whether happiness is overall higher or lower in the long run?

0somervta
You mean on average? The studies I'm thinking of had small or no differences, but I'm pretty sure there are other results out there.

I can believe that that's true for a significant portion of humanity- that they would choose to have children even knowing it would be bad for their happiness in the long run. It isn't true for me, though, and there are large numbers of people for whom it isn't (or else childlessness in the West wouldn't have risen so much).

1[anonymous]
Having children fundamentally changes you, mentally. What may not have been a priority before, suddenly becomes a terminal value in itself once you bond with a little one. This is definitely something hard-wired into brains by evolution--ask any parent about their experience!
7Gunnar_Zarncke
I think there are too many confounding factors to make that connection.

Moreover, the policy signals you have bad social skills and are unlikely to spot lies. This doesn't matter much though if you strongly signal it in other ways already.

Also, if someone wanted to tarnish your reputation, they'd lie to you, get caught and try to make you act hostile when other people are around. You possibly hedge against this already. The other people, unless close friends, will be on the liar's side in a situation like this, no matter how justified you feel.

My policy: if I catch someone lying to me about something significant, I put them in... (read more)

6Lumifer
Because of your predictability. If you are guaranteed to react in a specific way to certain stimuli, that is useful to someone who wants to manipulate you.

I have an independent income. I demand a transfer, and if I don't get it I quit.

8Said Achmiz
This is certainly fortunate for you, but in defense of the point to which you were responding, it is actually broader: the question is, what if the person who is lying to you is someone on whom you depend for your livelihood — whoever that might be in your case?

If I go on about it enough in conversation, people will have to realise. I won't make it explicit directly to them, but them realising will discourage others.

Because it makes it obvious to people that I'm taking my policy seriously.

-2hyporational
Will you make that connection explicit to them afterwards too? Do you think other people make the connection? How?

A One Strike Rule. If I catch a person lying to me, I never hang out with them again unless I have no choice. I also deliberately act in a rude and hostile manner.

However, this only applies if I've already warned them about the policy.

0jjvt
I suspect that tit for tat works better than grim trigger in the noisy environment of social interaction between humans. Your strategy also raises the question of how you tell lies and errors apart. Personally I never (fully) trust anyone, but still try to treat everyone friendlily (meaning that I'll help them if it costs me little, but I won't necessarily spend resources on them). Additionally, to protect my own trustworthiness from the lies and errors of others, I try not to forward information without also telling the source (not "X is Y", but "I heard from Z that X is Y").
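(A minimal sketch of the tit-for-tat vs. grim-trigger point above, assuming a standard iterated prisoner's dilemma; the payoff matrix, noise rate, and round count are illustrative assumptions, not anything specified in the thread:)

```python
import random

# Iterated prisoner's dilemma with noise: each intended move is flipped
# with probability NOISE, modelling an honest error mistaken for a lie.
NOISE = 0.05
ROUNDS = 1000
# Assumed standard payoffs: mutual cooperation 3, mutual defection 1,
# lone defector 5, lone cooperator 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's last observed move.
    return "C" if not their_hist else their_hist[-1]

def grim_trigger(my_hist, their_hist):
    # Cooperate until the opponent is ever seen defecting, then defect forever.
    return "D" if "D" in their_hist else "C"

def noisy(move):
    # With probability NOISE, the executed move differs from the intended one.
    if random.random() < NOISE:
        return "D" if move == "C" else "C"
    return move

def play(strat_a, strat_b, rounds=ROUNDS):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = noisy(strat_a(hist_a, hist_b))
        move_b = noisy(strat_b(hist_b, hist_a))
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

random.seed(0)
print("TFT  vs TFT :", play(tit_for_tat, tit_for_tat))
print("GRIM vs GRIM:", play(grim_trigger, grim_trigger))
# Grim trigger never forgives, so a single accidental "defection" locks
# both players into mutual defection for good; tit for tat can drift
# back into cooperation after an error, so it scores far better.
```

Run it with different seeds and the gap persists: one misread signal permanently ends cooperation under a grim trigger, which is the worry about a one-strike rule in a noisy social environment.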
3Lumifer
I feel your policy makes you more easily manipulable, not less.

If you told me this in person, I wouldn't want to hang out with you any more either.

2moridinamael
What if this person is your boss? Bear in mind that your boss has probably lied to you.
8hyporational
Luckily, I have a one strike rule against ultimatums. :) Why doesn't simply not trusting them work for you? How does being hostile to them further your interests?

Another thing I should note is that it can simply be a matter of human preferences. I'm very uncomfortable with the idea of having any truly close relationship (lover or close friend) with somebody who would be willing to lie to me. I see no reason why other wants should somehow override this one.

Improving my social skills is HARD. I could invest a massive effort into it if I tried, but I'm at university right now and my marks would take a nosedive. It's not worth the price.

-2Strange7
When something is really hard to do, but everyone else seems to be doing it anyway, consider what that implies about the value of the result. Also, it doesn't necessarily have to be a matter of massive effort and formal analysis. There is the option of learning by exposure. Spend some free time (for example, time you would otherwise spend on LessWrong) in undirected socialization with people you otherwise wouldn't talk to. Familiarize yourself with the rhythm, ask stupid questions and see how people react, and flee before committing to anything expensive, whether that expense is money, time, or willpower.
3Nornagest
Never claimed it wasn't. As a matter of cost-benefit analysis, though, I think you might nonetheless find it attractive in comparison to unilaterally declaring war on the liars of the world, which I'd expect to be strenuous, socially costly, and largely ineffective in preventing manipulation. As a matter of fact, drawing a sufficiently hard line on lying opens up entirely new avenues for manipulation of your trust.

True, but it is also true that you can't trust somebody on certain matters if they are willing to tell you white lies. It's better to try and hang around more honest types so you can learn to cope with the truth better.

5hyporational
I actually prefer the honest types, but don't judge normal people either. This preference is of minor importance. In most situations I can't choose who to interact with and being stubborn about it won't help.

I reject this idea for a fairly simple reason. I want to be in control of my own life and my own decisions, but due to lack of social skills I'm vulnerable to manipulation. Without a zero-tolerance policy on liars, I would rapidly be manipulated into losing what little control of my own life remains.

Without a zero-tolerance policy on liars, I would rapidly be manipulated into losing what little control of my own life remains.

I suspect this is inaccurate and you would be better off with rules like "I won't do large favors for friends who haven't reciprocated medium favors in the past" or "I won't be friends/romantic partners with people who tell me what to do in areas that are none of their business." Virtually none of the manipulation I've been harmed by in the past has involved actual lies. Though maybe your extended social circle (friends of friends of friends, people at university, etc.) has different preferred methods of manipulation than mine does.

9ChrisHallquist
I strongly suspect this is harming you in the long run, and you'd benefit from trying to work on your social skills. Does your social circle consist only of people whose social skills, feelings about lying, etc. are similar to yours? Also, do you think you can distinguish between "people who never lie to me" and "people who sometimes lie to me" more reliably than "people who are mostly honest but tell socially acceptable white lies" and "people who will manipulate me in ways that will seriously harm me"?
3ChristianKl
If you have no social skills, do you have enough status and enough friends to still have people to hang out with under a zero-tolerance policy?
3hyporational
How do you execute this zero tolerance policy? There's a vast space between alienating people and simply not trusting them.

You seem to be treating lack of social skills as a static attribute rather than a mutable trait. This may not be the most productive frame for the issue.

Mostly right. I accept the theoretical possibility of a self-evident belief- before learning of the Evil Demon argument, for example, I considered 1+1=2 to be such a belief.

However, a circular argument is never allowable, no matter how wide the circle. Without ultimately being traceable back to self-evident beliefs (though these can be self-evident axioms of probability, at least in theory), the system doesn't have any justification.

On reflection, my response is that no circular argument can possibly be rational, so the question of whether rationality is binary is irrelevant. You are mostly right, though for some purposes rational/irrational is better considered as a binary.

0HoverHell
-

You are the only one who is making assumptions without evidence and ignoring what I'm saying- that, contrary to what you think, you do not in fact know that the Earth exists, that your memories are reliable, etc., and therefore that your argument, which assumes such, falls apart.

You also fail to comprehend that probabilities have implicit axioms which must be accepted in order to accept probability. There is induction (e.g. the Sun has risen X times already, so it will probably rise again tomorrow), the Memory assumption (if my memories say I have done X then that is evidence in prob... (read more)
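(To make the induction example concrete: one standard formalisation, my choice rather than anything the comment names, is Laplace's rule of succession:)

```latex
% Laplace's rule of succession: having observed the sun rise n times
% with no failures, the probability that it rises again is
P(\text{rise}_{n+1} \mid n \text{ successes}) = \frac{n+1}{n+2}
```

The derivation assumes a uniform prior over the unknown chance of a sunrise and that the trials are exchangeable, which is exactly the point being made here: the rule only yields a number once those inductive axioms have already been granted.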

0fortyeridania
I do not thus fail, and am aware of the specific assumptions you have in mind. I just deny that their existence implies what you say it implies. OK. Let me try to restate your argument in terms I can better understand. Tell me if I'm getting this right.

(1) Let A = any agent and P = any proposition.

(2) Define "justified belief" such that A justifiably believes P iff the following conditions hold:

  a. P is provable from assumptions a, b, c, ... and z.
  b. A justifiably believes every a, b, c, ... and z.
  c. A believes P because of its proof from a, b, c, ... and z.

(3) The claim "The sun will rise tomorrow" (or insert any other claim you want to talk about instead) is not provable from assumptions that any agent could be justified in believing.

(4) Therefore, for every agent, belief in the claim "The sun will rise tomorrow" is not justified.

Is this a fair characterization of your argument? If so, I'll work from this. If not, please improve it.

I said earlier that I believe that rationally speaking, skepticism proves itself correct and ordinary ideas of rationalism prove themselves self-refuting. However, I believe on faith (in the religious sense) that skepticism is false, and have beliefs on faith accordingly.

Therefore, I sort of believe in a double truth, but in a coherent fashion.

You believe that the world exists, your memories are reliable, etc. You argue that a system that does not produce those conclusions is not good enough, because they are true and a system must show they are true. But how on earth do you know that? Assuming induction, the reliability of your memories, etc., in order to judge epistemic rules is circular.

You must admit it is absurd to claim you know the world exists with certainty; therefore you must admit you believe it exists on probability. Therefore your entire case depends on the legitimacy of probability.

Before accusing me of contradiction, remember that my position all along has drawn a distinction between faith and rational belief.

0fortyeridania
OK, but you are not using the term "rational" in (what I thought was) the standard way. So the only reason what you're saying seems contentious is because of your terminology. You have not yet addressed much of what I've written. Automatically rejecting everything that isn't 100% proven is a poor strategy if the agent's goal is to be right as much as possible, yet it seems to be the only one you insist is rational. Is this merely because of how you're using the word "rational," or do you actually recommend "Reject everything that isn't known 100%" as a strategy to such a person? (From the rice-and-gasoline example I think I know your answer already--that you would not recommend the skeptical strategy.) How should an agent proceed, if she wants to have as accurate a picture of reality as possible?

This assumes what the entire thread is about- that probability is a legitimate means for discussing reality. This presumes a lot of axioms of probability, such as that if you see X it is more likely real than an illusion, and that induction is valid.

The appeal to absence of many true beliefs is irrelevant, as you have no means to determine truth beyond skepticism.

0fortyeridania
I do not think anything I wrote above depends on using probability to discuss reality. Please elaborate. I believe it is not only relevant, but decisive.

Not exactly Platonic- I have no belief whatsoever, on faith or reason, in ideal forms. As for why rationalism, I believe in it because rationalist arguments in this sense can be inherently self-justifying. This comes from starting from no assumptions.

However, I then show that such rationality fails in the long run to skeptical arguments of its own sort, just as other types of rationality do. I focus on it because it is the only one with a halfway credible answer to skepticism.

I have already shown I know what skepticism is- not knowing anything whatsoever. You haven't refuted this argument, given that "I don't know" is a valid epistemic state.

0ChristianKl
Those two positions contradict each other. You can't have both. You claim at the same time to believe that you know what skepticism happens to be and that you know nothing.

I think I know my answer to this- I've realised my definition of "rational" subtly differs from LessWrong's. When you see mine, you'll see this wasn't my fault.

A set of rules is rational, I would argue, if that set of rules by its very nature must correlate with reality- if one applies those rules to the evidence, they must reveal what is true. Even if skepticism is false, then it is a mere coincidence that our assumptions that the world is not an illusion, our memories are accurate, etc. happened to be correct, as we had no rational rule that would sho... (read more)

0ChristianKl
But you are not applying it to everything. You have a strong belief in a platonic ideal of rationality on which you base your concept. Take the Buddhists, who actually don't attach themselves to mental concepts. They have sayings such as: "If you meet the Buddha on the road, kill him". You are not willing to accept that you don't know what skepticism happens to be, because you have an attachment to it. This is exactly what Wittgenstein's sentence is about. We shouldn't talk about those concepts. The Buddhists also don't talk about it in a rational sense. They meditate and have a bunch of koans, but they are mystics. You just don't get to be a platonic idealist and not a mystic and have skepticism be valid.

It seems we're using different definitions of words here. Maybe I should clarify a bit.

The definition of rationality I use (and I needed to think about this a bit) is a set of rules that must, by their nature, correlate with reality. Pragmatic considerations do not correlate with reality, no matter how pressing they may seem.

Rather than a rational obligation, it is a fact that if a person is irrational then they have no reason to believe that their beliefs correlate with the truth, as they do not. It is merely an assumption they have.

My conception of reason is based on determining what is true, completely and entirely irrespective of pragmatism. To call skeptical arguments irrational and call an anti-skeptical case rational would mean losing sight of the important fact that ONLY pragmatic considerations lead to the rejection of skepticism.

Rationality, to me, is defined as the hypothetical set of rules which reliably determine truth, not by coincidence, but because they must determine truth by their nature. Anything which does not follow said rules are irrational. Even if skepticism is... (read more)

0fortyeridania
You seem to be referring to the distinction between instrumental and epistemic rationality. Yes, they are different things. The case I am trying to make does not depend on a conflation of the two, and works just fine if we confine ourselves to epistemic rationality, as I will attempt to show below.

OK, so I think your labeling system, which is clearly different from the one to which I am accustomed, looks like this: ... and ... If that's how you want to use the labels in this thread, fine. But it seems that an agent that believed only things that were known with infinite certainty would suffer from a severe truth deficiency. Even if such an agent managed to avoid directly accepting any falsehoods, she would fail to accept a vast number of correct beliefs. This is because much of the world is knowable--just not with absolute certainty. She would not have a very accurate picture of the world. And this is not just because of "pragmatics"; even if the only goal is to maximize true beliefs, it makes no sense to filter out every non-provable proposition, because doing so would block too many true beliefs.

Perhaps an analogy with nutrition would be helpful. Imagine a person who refused to ingest anything that wasn't first totally proven to be nutritious. Whenever she was served anything (even if she had eaten the same thing hundreds of times before!), she had to subject it to a series of time-consuming, expensive, and painstaking tests. Would this be a good idea, from a nutritional point of view? No. For one thing, it would take way too long--possibly forever. And secondly (and this is the aspect I'm trying to focus on) lots of nutritious things cannot be proven so. Is this bite of pasta going to be nutritious? What about the next one? And the one after that? A person who insisted on such a diet would not get many nutrients at all, because so many things would not pass the test (and because the person would spend so much time testing and so little time eating).

Now, how ab
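(A toy illustration of the "truth deficiency" argument above, assuming propositions arrive with calibrated probabilities; the thresholds and distribution are my assumptions, not fortyeridania's:)

```python
import random

random.seed(0)
# 10,000 propositions, each tagged with a calibrated probability of truth;
# none reaches absolute certainty (probabilities capped below 1.0).
propositions = [random.uniform(0.5, 0.999) for _ in range(10_000)]

def score(threshold):
    # Accept every proposition at or above the threshold, then count how
    # many accepted propositions actually turn out true.
    accepted = [p for p in propositions if p >= threshold]
    truths = sum(1 for p in accepted if random.random() < p)
    return len(accepted), truths

for label, threshold in [("certainty-only (p = 1.0)", 1.0),
                         ("threshold (p >= 0.95)", 0.95)]:
    n, truths = score(threshold)
    print(f"{label}: accepted {n}, of which {truths} are true")
# The certainty-only agent accepts nothing and so ends with zero true
# beliefs; the thresholding agent accepts nearly a thousand, almost all true.
```

Even when the only goal is maximising true beliefs, the infinite-certainty filter loses, which is the non-pragmatic version of the argument.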

In the real world, it depends. With most people in practice, assuming they have enough of an understanding of me to know I am a skeptic on these things and are implicitly asking for one or the other, I give that. Therefore I normally give advice on faith.

0fortyeridania
I guess it's hard for me to understand what's irrational about advising them to eat the rice (as you indicated you would do). It seems like the only sane choice. I'm not sure exactly what you mean by "faith", but if advising people to eat the rice is based on it, then it must be compatible with rationality, right? Right--choose the rice, assuming you (or they) want to live. That seems like the only sane choice, doesn't it?

Maybe this is a problem of terminology. You seem to be using the labels "faith" and "reason" in certain ways. Especially, you seem to be using the label "reason" to refer to the following of certain rules, rules which you can't see how to justify. Maybe instead of focusing on those rules (whatever they may happen to be), you should focus on why the rules are valuable in the first place (if they are). Presumably, it's because they reliably lead to success in achieving one's goals. The worth of the rules is contingent on their usefulness; it's not rational to believe only things you can prove with absolute certainty, because that would mean believing nothing, doing nothing, dying early and having no fun, and nobody wants that! (In case you haven't read it, you might want to check out Newcomb's Problem and Regret of Rationality, from 2008.)

"Better" isn't a function of the real world anyway- I'm appealing to it because most people here want to be rational, not because it is objectively better.

What do you mean by saying "rational" is not a binary?

0HoverHell
-
  1. The Evil Demon Argument says that you don't know that it's actually those three things before you. Further, it says that you don't know that eating the rice will actually have the effects you're used to, or that your memories can be used to remember your preferences. Etc etc...

  2. On reason, I would give no advice. On faith, I would say to have the rice.

0fortyeridania
So, which advice would you give?

I think we mean different things by "basis in reality". I use it to refer to something correlating with the real world, and evidence that demonstrates such a connection to be either probable or certain. Probability, of course, can only work if it were somehow demonstrated valid.

Circular arguments do not count as a basis in reality, hence your argument, which assumes the existence of physical brains, does not work.

Nothing is justified if skepticism wins. Unless we have irrational faith in at least one starting assumption (and it is irrational since we have no basis for making the assumption), it is impossible to determine anything except our lack of knowledge.

So on reflection, yes. There is never any valid rational reason to discriminate between possibilities, because nothing can demonstrate the Evil Demon Argument false.

0fortyeridania
OK. I am still not exactly sure what you mean by "justification." Let's put this in more concrete terms. Imagine the following:

1. What does the Evil Demon Argument (and all in its family) say about the rationality of each choice, compared to the others (assuming it says anything at all)?

2. What advice would you personally give someone sitting at such a dinner table, and why?

Then you mean a different thing by "free will" than me- I was referring to free will in the popular conception.

0Richard_Kennaway
Then Sam Harris has written an entire book to demonstrate that when a tree falls in the forest and no-one is around to hear it, it doesn't make any sound.

This is precisely the problem. I was posting in the hopes of finding some clever solution to this problem- a self-proving axiom, as it were.

I don't believe in the reality around us, not on a rational level- that does not mean I don't believe there are things which are real (there may be, anyway). I just have no idea what they are.

Justification is DEFINED in a certain manner, and I think the best one to use is the definition I have given. That is how I can be certain about justification (or at least what I am calling justification) and a skeptic about reality.

If you have no non-circular basis for believing in induction, surely it is irrational?

0HoverHell
-

In reality, I believe non-skepticism on religious faith whilst thinking that rationally speaking skepticism is true. I slip up from time to time.

I should note, however, that a lot of my argument is that the rules of logic themselves suggest problems with beliefs as they currently stand- namely those surrounding circular arguments.

Ad hominem represents arguing based not on the evidence but on the character of the person giving it. This is bad because it leads people to instinctively ignore arguments from those they dismiss rather than considering them.

In this case it is also circular, as you presume the existence of the skeptic which you should not be able to know.

0Dagon
Wait. How do you justify any belief in what any statement or action will "lead people to" do?

A premise isn't self-evident because anybody whatsoever would accept it, but because it must be true in any possible universe.

Deductive arguments aren't self-evident, but for a different reason than you think- the Evil Demon Argument, which shows that even if it looks completely solid it could easily be mistaken. There may be some way to deal with it, but I can't think of any. That's why I came here for ideas.

You claim my standards of justification are too high because you want to rule skepticism out- you are implicitly appealing to the fact skepticism results as a reason for me to lower my standards. Isn't that bias against skepticism, lowering standards specifically so it does not result?

2pragmatist
There are all kinds of things that are true in every possible universe that aren't self-evident. Look up "necessary a posteriori" for examples. So no, self-evident is not the same as necessary, at least not according to a very popular philosophical approach to possible worlds (Kripke's). More generally, "necessity" is a metaphysical property, and "self-evidence" is an epistemic property. Just because a proposition has to be true does not mean it is going to be obvious to me that it has to be true. Even Descartes makes this distinction. He doesn't regard all the truths of mathematics to be self-evident (he says he may be mistaken about their derivation), but presumably he does not disagree that they are necessarily true. (Come to think of it, he may disagree that they are necessarily true, given his extreme theological voluntarism, but that's an orthogonal debate.)

As for your question about standards: I think it is a very plausible principle that "ought" implies "can". If I (or anyone else) have an obligation to do something, then it must at least be possible for me to do it. So, in so far as I have a rational obligation to have justified beliefs, it must be possible for me to justify my beliefs. If you're using the word "justification" in a way that renders it impossible for me to justify any belief, then I cannot have any obligation to justify my beliefs in that sense. And if that's the case, then skepticism regarding that kind of justification has no bite. Sure, my beliefs aren't justified in that rigorous sense, but if I have no rational obligation to justify them, why should I care?

So either you're using "justification" in a sense that I should care about, in which case your standards for justification shouldn't be so high as to render it impossible, or you're using "justification" in Descartes's highly rigorous sense, in which case I don't see why I should be worried, since rationality cannot require that impossible standard of justification. Either way, I

Implicit assumptions- not just the senses, but the reliability of memory and the reliability of rules of induction.

I already mentioned that I believe in the world, not because I think it rational, but as an act of religious-style faith. I think it irrational to believe the world exists because it makes so many assumptions that can't be defended in a rational argument.

Why are you applying ad hominem selectively? You wouldn't use an ad hominem argument in most things- why is the skeptic an exception?

0Dagon
This isn't ad-hominem. I don't care which skeptic it is. I'm simply pointing out a pretty severe inconsistency between stated beliefs and actions. I use a similar tactic on a lot of topics where I don't have the time or skill to do ground-up research (and to help decide where it's worth the time). If the proponents of an idea behave very inconsistently with the idea, I update more strongly on their behavior than their statements. The skeptic is making a prediction, that there is no probability or causality (they usually say "there is no basis for" rather than "there is no", but a quick recital of Tarski makes them equivalent). Anything can happen! If observed actions are inconsistent with that belief, that's evidence I should update on. Note that the skeptic can't use this information, because there's no justification for belief that observation has any information about reality.

This is the problem which must be dealt with. Rather than assume an assumption must be correct, you must somehow show it will work even if you start from no assumptions.

0Gurkenglas
Your universal propositional calculus might not be able to generate that proposition, but my calculus can easily prove: Yours won't generate any propositions if it has no axioms.

But we are talking about scepticism. It's an exception to the Wittgensteinian rule.

-1ChristianKl
I can also talk about weuisfdyhkj. It's a label, in itself no more meaningful than the label you use. You think that you know what the label means, but if your brain can't simulate a reality behind the label, it has no meaning. According to Wittgenstein we should therefore not speak about it.

Something is epistemically justified if, as you said, it has some sort of reality to it not by coincidence but because the rule reliably shows what is real. I am trying to find a framework with some sort of reality to it, and that requires dealing with scepticism.

0ChristianKl
If you don't believe in reality in the first place, how could you check whether something has reality? You need to look at reality to check whether something is real. There's no way around it. Your idea for justification has no solid basis in reality if you don't believe in it in the first place. You don't get to be certain about justification and be a skeptic about reality. There are certain types of Buddhism that you could call skeptical about reality, but they would also not accept the concept of justification in which you happen to believe.