1 min read · 31st Jan 2015 · 100 comments

Standard methods of inferring knowledge about the world are based on premises that I don’t know the justifications for. Any justification (or a link to an article or book with one) for why these premises are true, or should be assumed to be true, would be appreciated.


Here are the premises:

  • “One has knowledge of one’s own percepts.” Percepts are often given epistemic privileges, meaning that they need no justification to be known, but I see no justification for giving them epistemic privileges. It seems like the dark side of epistemology to me.

  • “One’s reasoning is trustworthy.” If one’s reasoning is untrustworthy, then one’s evaluation of the trustworthiness of one’s reasoning can’t be trusted, so I don’t see how one could determine whether one’s reasoning is correct. Why should one even believe one’s reasoning is correct to begin with? It seems like privileging the hypothesis, as there are many different ways one’s mind could work, and presumably only a very small proportion of possible minds would be remotely valid reasoners.

  • “One’s memories are true.” Though one’s memories of how the world works give a consistent explanation of why one is perceiving one’s current percepts, a perhaps simpler explanation is that the percepts one is currently experiencing are the only percepts one has ever experienced, and one’s memories are false. This hypothesis is still simple, since one only needs a very small number of memories: one can only think of a small number of memories at any one time, and the memory of having other memories could be false as well.




Edit: Why was this downvoted? Should it have been put in the weekly open thread instead?

With the (important) proviso that "knowledge", "trustworthy", and "are true" need to be qualified with something like "approximately, kinda-sorta, most of the time", I think these premises should be assumed as working hypotheses on the grounds that if they are badly wrong then we're completely screwed anyway, we have no hope of engaging in any sort of rational thought, and all bets are off.

Having adopted those working hypotheses, we look at the available evidence and it seems like it fits them pretty well (note: this isn't a trivial consequence of having taken those things as working hypotheses; e.g., if we'd assumed that our perception, reasoning and memory are perfectly accurate then we'd have arrived quickly at a contradiction). Those propositions now play two roles in our thinking: as underlying working assumptions that are mostly assumed implicitly, and as conclusions based on thinking about things as clearly as we know how.

There's some circularity here, but how could there not be? One can always keep asking "why?", like a small child, and sooner or later one must either refuse to answer or repeat oneself.

Somewhat-relevant old LW post: The lens that sees its flaws. My memory was of it being more relevant than it appears on rereading; I wonder whether there's another post from the "Sequences" that I was thinking of instead.

1G0W519y
I see that circularity seems inevitable in order to believe anything, but would that really make circularity okay? I recall a post Yudkowsky made that seems like what you're talking about, but I can't find it. I think it was in Highly Advanced Epistemology 101 for Beginners.
8gjm9y
Well, what does "OK" mean? Suppose, e.g., that something like the following is true: those assumptions are the minimal assumptions you need to make to get rational thinking off the ground, and making them does suffice to support everything else you need to do, and they are internally consistent in the sense that when you make those assumptions and investigate further you don't turn up any reason to think those assumptions were wrong. What more would it take to make assuming them "OK"? If "OK" means that they're provable from first principles without any assumptions at all and suffice to ground rational empirical investigation of the world then no, they're not OK -- but there's good reason to think nothing else is or could be. If what you're aiming for is a rational empiricism with minimal assumptions then I think something like this is optimal. I'm happy calling that "OK".
1G0W519y
By okay, I mean an at least somewhat accurate method of determining reality (i.e. the generator of percepts). Given that I don't know how to tell what percepts I've perceived, I don't see how standard philosophy reflects reality.
1gjm9y
It sure seems (to me) as if those assumptions give an at least somewhat accurate method of determining reality. Are you saying you don't think they do -- or is your actual objection that we don't know that with whatever degree of certainty you're looking for?
1G0W519y
I don't see how we have any evidence at all that those assumptions give at least a somewhat accurate method of determining reality. The only way I know of justifying those axioms is by using those axioms.
1gjm9y
The other ways would be (1) because they seem obviously true, (2) because we don't actually have the option of not adopting them, and (3) because in practice it turns out that assuming them gives what seem like good results. #1 and #3 are pretty much the usual reasons for adopting any given set of axioms. #2 also seems very compelling. Again, what further OK-ness could it possibly be reasonable to look for?
0G0W519y
I don't see how this is evidence. Why can't we? Can't we simply have no beliefs at all? What makes you think it has good results? Don't you need to accept the axioms in order to show that they have good results? E.g. you see that you follow the axioms and have a good life, but doing so assumes you know your percepts, that your memory of using the axioms and being fine is true, and that your current reasoning about this being a good reason to believe these axioms is valid.
6gjm9y
I dare say you can in principle have no conscious beliefs at all. Presumably that's roughly the situation of an ant, for instance. But your actions will still embody various things we might as well call beliefs (the term "alief" is commonly used around here for a similar idea) and you will do better if those match the world better. I'm betting that I can do this better by actually having beliefs, because then I get to use this nice big brain evolution has given me. Yes. (I've said this from the outset.) Note that this doesn't make the evidence for them disappear because it's possible (in principle) for the evidence to point the other way -- as we can see from the closely parallel case where instead we assume as a working hypothesis that our perception and reasoning and memory are perfect, engage in scientific investigation of them, and find lots of evidence that they aren't perfect after all. It seems that you want a set of axioms from which we can derive everything -- but then you want justification for adopting those axioms (so they aren't really serving as axioms after all), and "they seem obvious" won't do for you (even though that is pretty much the standard ground for adopting any axioms) and neither will the other considerations mentioned in this discussion. So, I repeat: What possibly conceivable outcome from this discussion would count as "OK" for you? It seems to me that you're asking for something provably impossible. (That is: If even axioms that seem obvious, can't be avoided, and appear to work out well in practice aren't good enough for you to treat them as axioms, then it looks like your strategy is to keep asking "why?" in response to any proposed set of axioms. But in that case, as you keep doing this one of two things must necessarily happen: either you will return to axioms already considered, in which case you have blatant circular reasoning of the sort you are already objecting to only worse, or else the axioms under consideration will get u
0G0W519y
I don't know why you think ants presumably have no conscious beliefs, but I suppose that's irrelevant. Anyways, I don't disagree with what you said, but I don't see how it entails that one is incapable of having no beliefs. You just suggest that having beliefs is beneficial. "Okay," as I have said before, means to have a reasonable chance of being true. Anyways, I see your point; I really do seem to be asking for an impossible answer.
0TheAncientGeek9y
Coherentist epistemology can be seen as an attempt to make circularity OK.

With such skepticism, how are you even able to write anything, or understand the replies? Or do anything at all?

0G0W519y
Because I act as if I'm not skeptical. (Of course, I can't actually know that the last sentence is true, or this statement, or this, and so on).
-2TheAncientGeek9y
That's one of the standard responses to scepticism: tu quoque, or performative contradiction.
2Richard_Kennaway9y
I wasn't intending the question rhetorically. If G0W51 is so concerned with universal scepticism, how does he manage to act as if he wasn't, which he observes he does?

I think you misspelled "skepticism" in the title.

0G0W519y
Thanks. Edited. I proofread the article multiple times, but I suppose I forgot about the title.

Also, your argument (including what you have said in the comments) is something like this:

Every argument is based on premises. There may be additional arguments for the premises, but those arguments will themselves have premises. Therefore either 1) you have an infinite regress of premises; or 2) you have premises that you do not have arguments for; or 3) your arguments are circular.

Assuming (as you seem to) that we do not have an infinite regress of premises, that means either that some premises do not have arguments for them, or that the arguments a... (read more)

Memories can be collapsed under percepts.

In answer to your broader question - yup: you've hit upon epistemic nihilism, and there is no real way around it. Reason is Dead, and we have killed it. Despair.

...Or, just shrug and decide that you are probably right but you can't prove it. There's plenty of academic philosophy addressing this (See: Problem of Criterion) and Lesswrong covers it fairly extensively as well.

http://lesswrong.com/lw/t9/no_license_to_be_human/ and related posts.

http://lesswrong.com/lw/iza/no_universally_compelling_arguments_in_math_or/

R... (read more)

1Fivehundred9y
Why does everyone refer to it as "epistemic nihilism"? Philosophical skepticism ('global' skepticism) was always the term I read and used.
2gjm9y
Everyone? In this discussion right here, the only occurrences of the word "nihilism" are in Ishaan's comment and your reply?
1Fivehundred9y
In general. I hear the word used but I haven't ever encountered it in literature (which isn't very surprising since I haven't read much literature). Seriously, Google 'epistemic nihilism' right now and all you get are some cursory references and blogs.
2gjm9y
Maybe I wasn't clear: I'm questioning whether the premise of your question is correct. I don't think everyone does refer to it that way, whether "everyone" means "everyone globally", "everyone on LW", "everyone in the comments to this post", or in fact anything beyond "one or two people who are making terminology up on the fly or who happen to want to draw a parallel with some other kind of nihilism".
0Fivehundred9y
I've heard it from various people on the internet. Perhaps I don't have a large sample size, but it seems to consistently pop up when global skepticism is discussed.
1Ishaan9y
At first I just made it up, feeling that it was an appropriate name due to the many parallels with moral nihilism; then I googled it, and the description that came up roughly matched what I was talking about, so I just went on using it after that. I'm guessing everyone goes roughly through that process. Normally I add a little disclaimer about not being sure that it is the correct term, but I didn't this time. I didn't know the term "philosophical skepticism", thanks for giving me the correct one. In philosophy I feel there is generally a problem where figuring out what names other people (who separately came up with your concept before you did) use to describe the concept ends up involving more work and reading than just re-doing everything... and at the end of the day others who read your text (as if anyone is reading that closely!) won't understand what you meant unless they too go back and read the citations. So I think it's often better to just throw aside the clutter and start fresh for everything, doing your best with plain English, and it's okay if you redundantly rederive things (many strongly disagree with me here). I feel that the definition of "epistemic nihilism" is self-evident as long as one knows the words "epistemic" and "nihilism". The term "skepticism" implies that one is asking "how do you know", whereas nihilism implies that one is claiming that there is no fundamental justification for the chosen principles. If indeed I'm describing the same thing, I kinda think "epistemic nihilism" is the more descriptive term from a "plain English" perspective overall. (Also, re: everyone -- I haven't actually seen that term used in the wild by people who are not me unless explicitly googling it. Maybe your impression results from reading my comments somewhere else?)
0Fivehundred9y
'Global skepticism' is really the correct one. 'Philosophical skepticism' is just a broad term for the doubting of normative justifications or knowledge. I doubt it very much. But some of the comments gave me the impression that it is in literature somewhere.
1Richard_Kennaway9y
"Epistemic nihilism" is not a name, but a description. Philosophical skepticism covers a range of things, of which this is one.
-1TheAncientGeek9y
However, our concepts of truth and goodness allow us to pose the questions: I'm persuaded of it, but is it really true? I approve of it, but is it really good? The simple version of internalising truth and goodness by rubber-stamping prevailing attitudes is not satisfactory; the complex version... is complex.
0Ishaan9y
Yes, that is precisely the relevant question - and my answer is that there's no non-circular justification for a mind's thoughts and preferences (moral or otherwise), and so for both practical and theoretical purposes we must operate on a sort of "faith" that there is some validity to at least some of our faculties, admitting that it is ultimately just simple faith that lies at the bottom of it all. (Not faith as in "believing without evidence", but faith as in "there shall never be any further evidence or justification for this, yet I find that I am persuaded of it / do believe it is really good.") It's not really that bad or complex - all you have to believe in is the concept of justification and evidence itself. To attempt to go deeper is to ask justification for the concept of justification, and evidence for what is or is not evidence, and that's nonsense.
0TheAncientGeek9y
Ostensibly, those are meaningful questions. It would be convenient if any question you couldn't answer was nonsense, but...
0Ishaan9y
Agreed, I don't think the question is meaningless. I do, however, think that it's "provably" unanswerable (assuming you provisionally accept all the premises that go into proving things)
0TheAncientGeek9y
But that isn't the same thing at all. If you have a foundationalist epistemic structure resting on provably unprovable foundations, you are in big trouble.

gjm has mentioned most of what I think is relevant to the discussion. However, see also the discussion on Boltzmann brains.

Obviously, if you say you are absolutely certain that everything we think is either false or unknown, including your own certainty of this, no one will ever be able to "prove" anything to you, since you just said you would not admit any premise that might be used in such a proof.

But in the first place, such a certainty is not useful for living, and you do not use it, but rather assume that many things are true, and in the second place, this is not really relevant to Less Wrong, since someone with this certainty already supposes that he knows that he can never be less wrong, and therefore will not try.

0G0W519y
I never said I was absolutely certain everything we think is either false or unknown. I'm saying that I have no way of knowing if it is false or unknown -- I am absolutely uncertain.

Assume they're approximately true because if you don't you won't be able to function. If you notice flaws, by all means fix them, but you're not going to be able to prove modus ponens without using modus ponens.

-1G0W519y
Given that I don't know of a justification for the premises, why should I believe that they are needed to function? A side note: One can prove modus ponens with truth tables.
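As a concrete illustration of that side note, here is a minimal sketch (in Python, chosen for readability; the helper name `implies` is just illustrative) of proving modus ponens by truth table: enumerate every assignment to P and Q and check that no row makes both premises true while the conclusion is false.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Classical material implication: P -> Q is equivalent to (not P) or Q.
    return (not p) or q

# Walk every row of the truth table. Modus ponens is valid iff there is
# no assignment where P and (P -> Q) are both true but Q is false.
for p, q in product([False, True], repeat=2):
    premises_hold = p and implies(p, q)
    assert (not premises_hold) or q, f"counterexample at P={p}, Q={q}"

print("No counterexample: modus ponens holds in all four rows.")
```

Note that the check itself assumes the classical semantics of the connectives and the validity of exhaustive case analysis, which is exactly the circularity DanielLC points out below.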
2DanielLC9y
If you assume truth tables, you can prove modus ponens. If you assume modus ponens, you can prove truth tables. But you can't prove anything from nothing. If you're looking for an argument that lets you get somewhere without assumptions, you won't find anything. There is no argument so convincing you can convince a rock.
-1G0W519y
I suppose the question then is why we should make the necessary assumptions.
1dxu9y
There is no "why". If there was, then the assumptions wouldn't be called "assumptions". If you want to have a basis for believing anything, you have to start from your foundations and build up. If those foundations are supported, then by definition they are not foundations, and the "real" foundations must necessarily be further down the chain. Your only choice is to pick suitable axioms on which to base your epistemology or to become trapped in a cycle of infinite regression, moving further and further down the chain of implications to try and find where it stops, which in practice means you'll sit there and think forever, becoming like unto a rock. The chain won't stop. Not unless you artificially terminate it.
0TheAncientGeek9y
So it's Ok to use non rationalist assumptions?
-1dxu9y
I haven't the slightest idea what you mean by "non rationalist" (or "Ok" for that matter), but I'm going to tentatively go with "yes", if we're taking "non rationalist" to mean "not in accordance with the approach generally advocated on LessWrong and related blogs" and "Ok" to mean "technically allowed". If you mean something different by "non rationalist" you're going to have to specify it, and if by "Ok" you mean "advisable to do so in everyday life", then heck no. All in all, I'm not really sure what your point is, here.
0TheAncientGeek9y
Your guesses are about right. The significance is that if rationalists respond to sceptical challenges by assuming what they can't prove, then they are in the same position as reformed epistemology. That is, they can't say why their axioms are rational, and can't say why theists are irrational, because theists who follow RE are likewise taking the existence of God as something they are assuming because they can't prove it: rationalism becomes a label with little meaning.
-1dxu9y
So you're saying that taking a few background axioms that are pretty much required to reason... is equivalent to theism. I think you may benefit from reading The Fallacy of Grey, as well as The Relativity of Wrong.
0TheAncientGeek9y
The axioms of rationality are required to reason towards positive conclusions about a real world. They are not a minimal set, because sceptics have a smaller set, which can do less.
0dxu9y
Most people probably aren't satisfied with the sort of "less" that universal skepticism can do. Also, some axioms are required to reason, period. Let's say I refuse to take ~(A ∧ ~A) as an axiom. What now? (And don't bring up paraconsistent logic, please--it's silly.)
-1TheAncientGeek9y
Rational axioms do less than theistic axioms, and a lot of people aren't happy with that "less" either.
0dxu9y
Not in terms of reasoning "towards positive conclusions about a real world", they don't. Most of whom are theists trying to advance an agenda. "Rational" axioms, on the other hand, are required to have an agenda.
-1TheAncientGeek9y
From the sceptic's perspective, rationalists are advancing the agenda that there is a knowable external world.
-1TheAncientGeek9y
No. They do less in terms of the soul and things like that, which theists care about and rationalists don't. Meanwhile, sceptics don't care about the external world. So everything comes down to epistemology, and epistemology comes down to values. Is that a problem?
0dxu9y
And yet strangely enough, I have yet to see a self-proclaimed "skeptic" die of starvation due to not eating. EDIT: Actually, now that I think about it, this could very easily be a selection effect. We observe no minds that behave this way, not because such minds can't exist, but because such minds very quickly cease to exist.
-1TheAncientGeek9y
They have answers to that objection, just as rationalists have answers to theists' objections.
-1G0W519y
If there is no why, is any set of axioms better than any other? Could one be just as justified believing that, say, what actually happened is the opposite of what one's memories say?
0dxu9y
(Note: I'm going to address your questions in reverse order, as the second one is easier to answer by far. I'll go into more detail on why the first one is so hard to answer below.) Certainly, if you decide to ignore probability theory, Occam's Razor, and a whole host of other things. It's not advisable, but it's possible if you choose your axioms that way. If you decide to live your life under such an assumption, be sure to tell me how it turns out. At this point, I'd say you're maybe a bit confused about the meaning of the word "better". For something to be "better" requires a criterion by which to judge that something; you can't just use the word "better" in a vacuum and expect the other person to be able to immediately answer you. In most contexts, this isn't a problem because both participants generally understand and have a single accepted definition of "better", but since you're advocating throwing out pretty much everything, you're going to need to define (or better yet, Taboo) "better" before I can answer your main question about a certain set of axioms being better than any other.
-1G0W519y
Why would one need to ignore probability theory and Occam's Razor? Believing that the world is stagnant, that the memories one is currently thinking of are false, and that the memory of having more memories is false seems to be a simple explanation of the universe. By better, I mean "more likely to result in true beliefs." Or if you want to taboo true, "more likely to result in beliefs that accurately predict percepts."
1ike9y
If I were to point out that my memories say that making some assumptions tend to lead to better perception predictions (and presumably yours also), would you accept that? Are you actually proposing a new paradigm that you think results in systematically "better" (using your definition) beliefs? Or are you just saying that you don't see that the paradigm of accepting these assumptions is better at a glance, and would like a more rigorous take on it? (Either is fine, I'd just respond differently depending on what you're actually saying.)
-1G0W519y
I'd only believe it if you gave evidence to support it. The latter. What gave you the suggestion that I was proposing an improved paradigm?
0ike9y
You seemed to think that not taking some assumptions could lead to better beliefs, and it wasn't clear to me how strong your "could" was. You seem to accept induction, so I'll refer you to http://lesswrong.com/lw/gyf/you_only_need_faith_in_two_things/
0G0W519y
The linked article stated that one only needs to believe that induction has a non-super-exponentially small chance of working and that a single large ordinal is well-ordered, but it didn't really justify this. It said nothing about why belief in one's percepts and reasoning skills is needed.
-1dxu9y
Not in the sense that I have in mind. Unfortunately, this still doesn't solve the problem. You're trying to doubt everything, even logic itself. What makes you think the concept of "truth" is even meaningful?

I would agree that if you can't trust your reasoning then you are in a bad spot. Even Descartes' 'Cogito ergo sum' doesn't get you anywhere if you think the 'therefore' is using reasoning. Even that small assumption won't get you too far, but I would start with him.

1[anonymous]9y
A better translation (maybe -- I don't speak French) would be "I think, I am". Or so said my philosophy teacher.
3Kindly9y
That seems false if taken at face value: "ergo" means "therefore", ergo, "Cogito ergo sum" means "I think, therefore I am". Also, I have no clue how to parse "I think, I am". Does it mean "I think and I am"? There's probably a story behind that translation and how it corresponds to Descartes's other beliefs, but I don't think that "I think, I am" makes sense without that story. (A side note: it's Latin, not French. I originally added here that Descartes wrote in Latin, but apparently he originally made the statement in French as "Je pense donc je suis.")
-1G0W519y
The problem with that is that I don't see how "Cogito ergo sum" is reasonable.
2is4junk9y
I don't think there is a way out. Basically, you have to continue to add some beliefs in order to get somewhere interesting. For instance, with just the belief that you can reason (to some extent) you get to a self-existence proof, but you still don't have any proof that others exist. Like axioms in math: you have to start with enough of them to get anywhere, but once you have a reasonable set you can prove many things.
1g_pepper9y
You obviously could not be thinking if you do not exist, right?
-1G0W519y
I don't know, as I don't truly know if I am thinking. Even if you proved I was thinking, I still don't see why I would believe I existed, as I don't know why I should trust my reasoning.
3g_pepper9y
You may not know you are thinking, but you think you are thinking. Therefore, you are thinking.
-1G0W519y
I don't actually think I am thinking. I am instead acting as if I thought I was thinking. Of course, I don't actually believe that last statement, I just said it because I act as if I believed it, and I just said the previous sentence for the same reasoning, and so on.
1g_pepper9y
It seems to me that this statement implies your existence; after all, the first two words are an existential declaration. Furthermore, just as (per Descartes) cognition implies existence, so it would seem that action implies existence, so the fact that you are acting in a certain way implies your existence. Actio, ergo sum.
-1G0W519y
But how can I know that I'm acting?
2g_pepper9y
You stated that you were acting: I took you at your word on that :). Anyway, it seems to me that you either are thinking, think you are thinking, are acting, or think you are acting. Any of these things implies your existence. Therefore, you exist.
0G0W519y
I think we're not on the same page. I'll try to be more clear. I don't really believe anything, nor do I believe the previous statement, nor do I believe the statement before this, nor the one before this, and so on. Essentially, I don't believe anything I say. That doesn't mean what I say is wrong, of course; it just means that it can't really be used to convince me of anything. Similarly, I say that I'm acting as if I accepted the premises, but I don't believe this either. Also, I'm getting many downvotes. Do you happen to know why that is? I want to do better.
0dxu9y
It seems to me that at this point, your skepticism is of the Cartesian variety, only even more extreme. There's a reason that Descartes' "rationalism" was rejected, and the same counterargument applies here, with even greater force.
0G0W519y
What's the counterargument? Googling it didn't find it.
3dxu9y
Basically, Cartesian rationalism doesn't really allow you to believe anything other than "I think" and "I am", which is not the way to go if you want to hold more than two beliefs at a time. Your version is, if anything, even less defensible (but interestingly, more coherent--Descartes didn't do a good job defining either "think" or "am"), because it brings down the number of allowable beliefs from two--already an extremely small number--to zero. Prescriptively speaking, this is a Very Bad Idea, and descriptively speaking, it's not at all representative of the way human psychology actually works. If an idea fails on both counts--both descriptively and prescriptively--you should probably discard it.
0G0W519y
In order to create an accurate model of psychology, which is needed to show the beliefs are wrong, you need to accept the very axioms I'm disagreeing with. You also need to accept them in order to show that not accepting them is a bad idea. I don't see any way to justify anything that isn't either based on unfounded premises or circular reasoning. After all, I can respond to any argument, no matter how convincing, and say, "Everything you said makes sense, but I have no reason to believe my reasoning's trustworthy, so I'll ignore what you say." My question really does seem to have no answer. I question how important justifying the axioms is, though. Even though I don't believe any of the axioms are justified, I'm still acting as if I did believe them.
0dxu9y
You keep on using the word "justified". I don't think you realize that when discussing axioms, this just plain doesn't make sense. Axioms are, by definition, unjustifiable. Requesting justification for a set of axioms makes about as much sense as asking what the color of the number 3 is. It just doesn't work that way.
0G0W519y
I used incorrect terminology. I should have asked why I should have axioms.
0TheAncientGeek9y
It may be unacceptable to ask for justification of axioms, but that does not make it acceptable to assume axioms without justification.
0dxu9y
In what meaningful sense are those two phrasings different?

Standard methods of inferring knowledge about the world are based on premises that I don’t know the justifications for.

How do you come to that conclusion?

-1G0W519y
By realizing that the aforementioned premises seem necessary to prove anything.
3ChristianKl9y
Why do you believe "inferring knowledge" is about "proving"?
2G0W519y
Because to infer means to conclude knowledge from evidence, and proving means to show something is true by using evidence. They are essentially synonyms.
0ChristianKl9y
There are many cases of knowledge that aren't about whether X is true. When it comes to the knowledge required to tie the shoelaces of a shoe, there isn't a single thing that has to be shown to be true by evidence. Basically, you lack skepticism about your own assumption that you know what knowledge is about.
0G0W519y
There are multiple things that must be shown to be true by evidence to tie shoelaces successfully, including:

* One's shoes are untied.
* Having untied shoes generally decreases utility.
* Performing a series of muscle movements that is commonly known as "tying your shoes" typically results in one's shoelaces being tied.

Edit: fixed grammar.
0ChristianKl9y
You are making assumptions that are strong for someone claiming to be a skeptic. To go through them:

1) Tied shoelaces also allow you to tie them again. Untiedness is no requirement for tying.
2) If you are in a social environment where untied shoes are really cool, then tying them might decrease your utility. At the same time, tying them still makes them tied.
3) It's quite possible to tie your shoes through muscle movements that are not commonly used for tying your shoes.
0G0W519y
Okay, I really shouldn't have stated those specifics. Instead, in order to tie shoelaces successfully, all one really needs to know is that performing a series of actions that is commonly known as "tying your shoes" typically results in one's shoelaces being tied. I never said that the muscle movements were common, just that they typically resulted in tied shoes. That said, I'm not really sure how this is relevant. Could you explain?
0[anonymous]9y
I think the way Eliezer deals with this is a fine example of appeal to consequences.
[-][anonymous]9y10

Any introduction or reader on contemporary epistemology that you'd find on Amazon would address these three points.

1TheAncientGeek9y
Addresses rather than resolves. There are many responses, but no universally satisfactory ones.

You can think of what the points mean in the technical sense and try not to read anything more into them.

1) You sense something; your brain state is conditional on at least some part of the universe. Do not make assumptions about whether it's a "fair" or "true" representation. At the most extreme you could have a single bit of information and, for example, no insight into how that bit is generated (i.e. by default and from epistemological first grounds, our behaviour is opaque).

2) We move from one computation state to another based on non-vanishingl... (read more)

0ChristianKl9y
Referring someone to Hegel seems to me like saying: "Screw you. Here's a book that contains the answer to your question, but you won't finish the book anyway."

“One has knowledge of one’s own percepts.” Percepts are often given epistemic privileges, meaning that they need no justification to be known, but I see no justification for giving them epistemic privileges. It seems like the dark side of epistemology to me.

Why? I realize that Yudkowsky isn't the most coherent writer in the universe, but how the heck did you get from here to there?

A simple qualia-based argument against skepticism (i.e. percepts are simply there and can't be argued with) is problematic: even if you conceded direct knowledge of percepts, ... (read more)

0G0W519y
I'm afraid we're not on the same page. From where to where? I understand that believing in qualia is not sufficient to form sizable beliefs, but it is necessary, is it not?
0Fivehundred9y
What does 'dark side epistemology' have to do with an argument that seems like a non-sequitur to you? The hell I know. There certainly are arguments that don't involve qualia and are taken seriously by philosophy; I'm not going to be the one to tackle them all! This website might have some resources, if you're interested.
0G0W519y
The arguments in the OP don't seem like non-sequiturs, as they are assumed without evidence, not with faulty reasoning from premises. Believing one doesn't need evidence for beliefs is what dark side epistemology is all about.