You need to clarify your intentions/success criteria. :) Here's my What Actually Happened technique to the rescue:
(a) You argued with some (they seem) conventional philosophers on various matters of epistemology.
(b) You asked LessWrong-type philosophers (presumably having little overlap with the aforementioned conventional philosophers) how to do epistemology.
(c) You outlined some of the conventional philosophy arguments on the aforementioned epistemological matters.
(d) You asked for neuroscience pointers to be able to contribute intelligently.
(e) Most of the responses here used LessWrong philosophy counterarguments against arguments you outlined.
(f) You gave possible conventional philosophy countercounterarguments.
This is largely a failure of communication because the counterarguers here are playing the game of LessWrong philosophy, while you've played, in response, the game of conventional philosophy, and the games have very different win conditions that lead you to play past each other. From skimming over the thread, I am as usual most inclined to agree with Eliezer: Epistemology is a domain of philosophy, but conventional philosophers are mostly not the best at—or necessarily t...
Give up on justifying answers and just try to figure out what the answers really actually are, i.e., are you really actually inside an Evil Demon or not. Once you learn to quantify the reasoning involved using math, the justification thing will seem much more straightforward when you eventually return to it. Meanwhile you're asking the wrong question. Real epistemology is about finding correct answers, not justifying them to philosophers.
Without a justification, I cannot rationally believe in the truth of the senses.
Yeah you can. Like, are you wearing socks? Yes, you're wearing socks. People were capable of this for ages before philosophy. That's not about what's useful, it's about what's true. How to justify it is a way more complex issue. But if you lose sight of the fact that you are really actually in real life wearing socks, and reminding you of this doesn't help, you may be beyond my ability to rescue by simple reminders. I guess you could read "The Simple Truth", "Highly Advanced Epistemology 101 for Beginners", and if that's not enough the rest of the Sequences.
Externalism is always the answer! Accept that some unlucky people who are in sceptical scenarios would be doomed; but that doesn't mean that you, who are not in a sceptical scenario, are doomed too, even though the two cases are subjectively indistinguishable.
Finally, how do I speak intelligently on the Contextualist vs. Invariantist problem? I can see, broadly, that it is an empirical problem and therefore not part of abstract philosophy, but that isn't the same thing as having an answer. It would be good to know where to look up enough neuroscience to at least make an intelligent contribution to the discussion.
Invariantism, in my opinion, is rooted precisely in the failure to recognize that this is an empirical and ultimately linguistic question. I'm not sure how neuroscience would enter into it, actually. ...
Not a philosophy student, but it seems to me that your question is basically this:
If everything is uncertain (including reality, state of my brain, etc.), how can I become certain about anything?
And the answer is:
Taking your question literally, you can't.
In real life, we don't take it literally. We don't start by feeling uncertain about literally everything at the same time. We take some things as granted and most people don't examine them (which is functionally equivalent to having axioms); and some people examine them step by step, but not all at the same time (which is functionally equivalent to circular reasoning).
To combat skepticism, or at least solipsism, you just need to realise that there are no certainties, but that does not mean you know nothing. You can work probabilistically.
Consider: http://lesswrong.com/lw/mn/absolute_authority/ http://lesswrong.com/lw/mo/infinite_certainty/ http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/ http://wiki.lesswrong.com/wiki/Absolute_certainty
This book might be what you are looking for. It's Evidence and Inquiry by Susan Haack. I have it, but I've only done a few very cursory skims of it (ETA: It's on my summer reading list, though). It has two very positive reviews on Amazon. Also, she calls out the Gettier "paradoxes" for what they are (for the most part, pointless distractions).
I doubt people are actually still interested, but just in case: I've actually managed to solve this problem.
IF the Correspondence Theory of Truth is assumed (defining "Truth" as that which corresponds to reality) and the assumption is made that philosophy should pursue truth rather than what is pragmatically useful, then for any non-Strong Foundationalist method of determining truth the objection could be made that it could easily have no correlation with reality and there would be no way of knowing.
Probabilistic arguments fall apart because they...
Not trying to answer your questions, sorry. Just wanted to mention that different philosophical camps pattern-match to different denominations of the same religion. They keep arguing without any hope of agreeing. Occasionally some denominations prevail and others die out, or get reborn when a new convincing guru or a prophet shows up. If you have a strong affinity for theism, err, mainstream philosophy, just pick whichever denomination you feel like, or whichever gives you the best chance of advancement. If you care about that real world thing, consider de...
(Then again, it has been argued, if a Coherentist were deceived by an evil demon they could be deceived into thinking data coheres when it doesn't. Since their belief rests upon the assumption that their beliefs cohere, should they not discard it if they can't know whether it coheres or not? The "seems to cohere" formulation has its own problems.)
Doesn't the Coherentist idea say that even if the knowledge is incorrect, it is still "true" for the observer, because it coheres with the rest of their beliefs?
What Eliezer says is essentially that yes, you c...
Warning: I am not a philosophy student and haven't the slightest clue what any of your terms mean. That said, I can still answer your questions.
1) Occam's Razor to the rescue! If you distribute your priors according to complexity and update on evidence using Bayes' Theorem, then you're entirely done. There's nothing else you can do. Sure, if you're unlucky then you'll get very wrong beliefs, but what are the odds of a demon messing with your observations? Pretty low, compared to the much simpler explanation that what you think you see correlates well to the world around you. One and zero are not probabilities; you are never certain of anything, even those things you're probably getting used to calling a priori truths. Learn to abandon your intuitions about certainty; even if you could be certain of something, our default intuitions will lead us to make bad bets when certainty is involved, so there's nothing there worth holding on to. In any case, the right answer is understanding that beliefs are always always always uncertain. I'm pretty sure that 2 + 2 = 4, but I could be convinced otherwise by an overwhelming mountain of evidence.
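The recipe above (complexity-weighted priors, then a Bayesian update) can be sketched in a few lines of Python. The hypotheses, description lengths, and likelihoods below are made-up numbers purely for illustration, not real estimates:

```python
# Toy sketch: complexity-weighted priors plus a Bayesian update.
# Description lengths (in bits) are invented for illustration only.

def complexity_prior(description_length_bits):
    """Solomonoff-style prior: each extra bit of description halves the prior."""
    return 2.0 ** -description_length_bits

hypotheses = {
    "senses roughly track reality": 10,   # simple hypothesis
    "demon fakes all observations": 40,   # demon + faked world costs more bits
}

priors = {h: complexity_prior(bits) for h, bits in hypotheses.items()}
total = sum(priors.values())
priors = {h: p / total for h, p in priors.items()}  # normalize

# Both hypotheses predict everyday evidence equally well, so the
# likelihoods are equal and the update leaves the odds ratio alone:
# the demon hypothesis stays at roughly 2^-30 probability.
likelihood = {h: 0.9 for h in hypotheses}

unnorm = {h: priors[h] * likelihood[h] for h in hypotheses}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}

for h, p in posterior.items():
    print(f"{h}: {p:.12f}")
```

The point of the sketch is just that the demon hypothesis never gets probability zero; it merely starts (and stays) vanishingly small because of its complexity penalty.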
2) I don't know what question is being asked here, but if it has no possible impact on the real world then you can't decide if it's true or false. Look at Bayes' Theorem; if probability (evidence given statement) is equal to probability (evidence) then your final belief is the same as your prior. If there is in principle no experiment you could run which would give you evidence for or against it, then the question is not really a question; knowing it was true or false would tell you nothing about which possible world you live in; it would not let you update your map. It is not merely useless but fundamentally not in the same class of statements as things like "are apples yellow?" or "should machines have legal rights, given "should" referring to generalized human preferences?" If there is an experiment you could run in principle, and knowing whether the statement is true or false would tell you something, then you simply have to refer to Occam's Razor to find your prior. You won't necessarily get an answer that's firmly one way or another, but you might.
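To make that concrete, here is a toy calculation (with arbitrary numbers) showing that when P(evidence | statement) equals P(evidence), the update leaves your prior untouched:

```python
# Bayes' Theorem: P(S|E) = P(E|S) * P(S) / P(E).
# If P(E|S) == P(E), the evidence is uninformative and the
# posterior equals the prior. All numbers here are arbitrary.

def posterior(prior_s, p_e_given_s, p_e):
    return p_e_given_s * prior_s / p_e

prior = 0.3

# Uninformative evidence: likelihood equals the marginal.
print(posterior(prior, 0.5, 0.5))   # 0.3, belief unchanged

# Informative evidence: likelihood exceeds the marginal.
print(posterior(prior, 0.8, 0.5))   # about 0.48, belief revised upward
```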
3) I'll admit I had to look this up to give an answer. What I found was that there is literally not a question here. Go read A Human's Guide to Words (sequence on LW) to understand why, although I'll give a brief explanation. "Knowledge", the word, is not a fundamental thing. Nowhere is there inscribed on the Almighty Rock of Knowledge that "knowledge" means "justified true belief" or "correctly assigned >90% certainty" or "things the Flying Spaghetti Monster told you." It only has meaning as a symbol that we humans can use to communicate. If I made it clear that I was going to use the phrase "know x" to mean "ate x for breakfast", and then said "I know a chicken biscuit", I would be committing an error; but that error would have nothing to do with the true meaning of "know". When I say "I know that the earth is not flat", I mean that I have seen pretty strong evidence that the earth really isn't flat, such that for it to be flat would require a severe mental break on my part or other similarly unlikely circumstances. I don't know it with certainty; I don't know anything with certainty. But that's not what "know" means in the minds of most people I speak with, so I can say "I know the world is not flat" and everyone around me gets the right idea. There is no such thing as a correct attribution of knowledge, nor an incorrect one, because knowledge is not a fundamental thing nor sharply defined, but instead it's a fuzzy shape in conceptspace which corresponds to some human intuitions about the world but not to the actual territory. Humans are biased towards concrete true/false dichotomies, but that's not how the real world works. Once you realize that beliefs are probabilities you'll realize how incredibly silly most philosophical discussions of knowledge are.
My quick advice to you in general (so that you can solve future problems like this on your own) is three-fold.

First, learn Bayes and keep it close to you at all times. The Twelve Virtues of Rationality are nice as a way to remind yourself what it means to want to actually get the right answer.

Second, read A Human's Guide to Words, and in particular play Rationalist Taboo constantly. Play it with yourself before you speak and with others when they use words like "knowledge" or "free will". Do not simply accept a vague intuition; play it until you're certain of what you mean (and it matches what you meant when you first said it), or certain that you have no idea. Pro tip: free will sounds like a pretty simple concept, but you have no idea how to specify it other than that thing that you can feel you have. (And any other specification fails to capture what you or anybody else really want to talk about.)

Third, and I'm sure some people will disagree here, but... Get the heck out of philosophy. There is almost nothing of value that you'll get from the field. Almost all of it is trash, because there really aren't enough interesting questions that don't require you to actually go out and do /gasp/ science to justify an entire field. Pretty much all the important ones have answers already, although you wouldn't know that by talking to philosophers. Philosophy was worthwhile in Ancient Greece, when "philosopher" meant "aspiring rationalist" and human knowledge was at the stage of gods controlling everything, but in the modern day we already have the basic rationalist's toolkit available for mass consumption. Any serious advance made in the Art will come from needing it to do something that the Art you were taught couldn't do for you, and such advances are what philosophy should be, but isn't, providing.
You won't find need of a new rationalist Art if you're trying to convince other people, who by definition do not already have this new Art, of some position that you stumbled upon because of other people who argued it convincingly to you. If you care about the human state of knowledge, go into any scientific discipline. Otherwise just pick literally anything else. There's nothing for you in philosophy except for a whole lot of confused words.
Ok, response here from somebody who has studied philosophy. I disagree with a lot of what DSherron said, but on one point we agree - don't get a philosophy degree. Take some electives, sure - that'll give you an introduction to the field - but after that there's absolutely no reason to pay for a philosophy degree. If you're interested in it, you can learn just as much by reading in your spare time for FREE. I regret my philosophy degree.
So, now that that's out of the way: philosophy isn't useless. In fact, at its more useful end it blurs pretty seamle...
I have naturally read the material here, but am still not sure how to act on two questions.
1: I've been arguing out the question of Foundationalism vs. Coherentism vs. other similarly basic methods of justifying knowledge (e.g. infinitism, pragmatism). The discussion left off with two problems for Foundationalism.
a: The Evil Demon argument, particularly the problem of memory. When following any piece of reason, an Evil Demon could theoretically fool my reason into thinking that it had reasoned correctly when it hadn't, or fool my memory into thinking I'd reasoned properly before with reasoning I'd never done. Since a Foundationalist either is a weak Foundationalist (and runs into severe problems) or must discard all but self-evident and incorrigible assumptions (of which memory is not one), I'm stuffed.
(Then again, it has been argued, if a Coherentist were deceived by an evil demon they could be deceived into thinking data coheres when it doesn't. Since their belief rests upon the assumption that their beliefs cohere, should they not discard it if they can't know whether it coheres or not? The "seems to cohere" formulation has its own problems.)
b: Even if that's discarded, there is still the problem of how Strong Foundationalist beliefs are justified within a Strong Foundationalist system. Strong Foundationalism is neither self-evident nor incorrigible, after all.
I know myself well enough to know I have an unusually strong (even for a non-rationalist) irrational emotive bias in favour of Foundationalism, and even I begin to suspect I've lost the argument (though some people arguing on my side would disagree). Just to confirm, though- have I lost? What should I do now, either way?
2: What to say on the question of skepticism (on which so far I've technically said nothing)? If I remember correctly Eliezer has spoken of philosophy as how to act in the world, but I'm arguing with somebody who maintains as an axiom that the purpose of Philosophy is to find truth, whether useful or useless, in whatever area is under discussion.
3: Finally, how do I speak intelligently on the Contextualist vs. Invariantist problem? I can see, broadly, that it is an empirical problem and therefore not part of abstract philosophy, but that isn't the same thing as having an answer. It would be good to know where to look up enough neuroscience to at least make an intelligent contribution to the discussion.