Going from the cited examples alone, it seems that most of the diversity in answers may be caused not so much by "different intuitions" as by the vagueness of the questions: they can be interpreted in many different ways, effectively forcing the respondents to answer different questions, selected more or less arbitrarily from the vague statements of the originals. That is, the differing intuitions are not intuitions about the properties of the complicated situations being discussed, but intuitions about how vague words such as "knows" or "refers to" are to be interpreted in the given context.
A lot more tabooing might need to be done before such questionnaires can start indicating differences in intuition about substantive questions. Alternatively, thought experiments phrased as decision problems (such as the trolley problem) mostly avoid this issue, if they don't ask about characterizations of the situation other than the decision that is to be made (such as whether by throwing or not throwing the switch one becomes "responsible" for the deaths).
Right; the point of these thought experiments is to elicit intuitions about non-substantive questions, like what "know" means. Welcome to philosophy.
These kinds of questions are asked to resolve "what does X mean" questions.
Resolving the meaning of vague terms is a pointless activity and bad methodology. One should instead focus on seeking and answering better questions motivated by the same considerations that motivate the original vague questions. This involves asking "What motivates/causes the vague question?" rather than "What does the vague question mean?" as the first step, where the "vague question" is a real-world phenomenon occurring in a scholar's mind.
Sometimes, the cause of a question turns out to be uninteresting, a bug in perception of the world, which dissolves the question. Sometimes, the causes of a question turn out to have interesting and complicated structure and you need a whole lot of new ideas to characterize them. This way, "What is motion?" points towards ideas such as time, velocity, acceleration, inertia, mass, force, momentum, energy, impulse, torque, simultaneity, continuity, differential and integral calculus, etc., which were not there in the heads of the philosophers who first wondered about motion.
Harvard Prof. Richard Moran touches on this in a humorous manner:
"As to ‘experimental philosophy’, I can’t claim to be very well versed in it, but it seems to be a research program in its early days. I think that by now, even its practitioners are beginning to realise that simply asking people, outside of any particular context, about their “intuitions” about some concept of philosophical interest is not really going to be informative, since without any philosophical background to the question, the respondents themselves can’t really know just what question they are being asked to answer, what their responses are responses to. There are just too many different things that can be meant by a question like, “Was such-and-such an action intentional or not?”, for example. And without further discussion or further analysis, the experimenters themselves can’t know what answers they are being given by the respondents. It’s not good data. So I can imagine experimental philosophy evolving in a way to account for this, and starting to include some philosophical background to the investigation, perhaps even some philosophical history, to provide the needed context to the particular intuiti...
Epistemology has little or nothing to do with how untrained people gain confidence in their beliefs as knowledge, etc.
Epistemology is about how to acquire beliefs correctly. How untrained people actually acquire beliefs is some kind of social science. Just like rocketry is distinct from investigating how untrained people imagine rockets work.
The examples involving killing people seem like good examples, but the others seem like they could be predicated on disagreements about semantics rather than, say, disagreements about anticipated experiences (or utility functions, I guess). Words would need to be tabooed before I would trust those examples.
All of these examples are, in fact, explicitly about semantics. They are thought experiments meant to elicit our intuitions about the concepts of knowledge, moral rightness, etc.
For one thing, we would never assume that people of all kinds would share our intuitions.
Isn't this kind of an obvious conclusion? The entire science of sociology was developed to address it, as far as I understand.
Is there really any kind of serious debate in modern philosophy circles regarding whether or not our personal intuitions can be generally trusted?
Is there really any kind of a serious debate in modern philosophy circles regarding whether or not our personal intuitions can be generally trusted?
Yes! The Experimental Philosophy: An Introduction book I linked to is a very brief, up-to-date summary of that debate. The debate over intuitions is one of the hottest in philosophy today, and has been since about 1998.
"Experimental Philosophy" sounds almost like an oxymoron. If it were really "experimental," it would be science, not philosophy.
Neither philosophy nor science is a clearly delimited concept that can be defined by a short sentence; like a lot of categories, they are fuzzy and may overlap. Some activities called "doing science" are not experimental (abstract math), and some experimental activities are not usually called "science" (testing a video game).
For one thing, we would never assume that people of all kinds would share our intuitions.
You write this like it's an original insight and not a problem that has been taken seriously by every philosopher who ever wrote seriously about ethics or meta-ethics.
Is believing in shared intuitions a result of reading philosophy, or is it just that intuitions feel like truths?
It's not obvious what the "bad habits" might be, and what they are bad relative to.
Examples of bad habits often picked up from reading too much philosophy: arguing endlessly about definitions, or using one's own intuitions as strong evidence about how the external world works. These are bad habits relative to, you know, not arguing endlessly about definitions, and using science to figure out how the world works.
You mention naturalism as a "bad habit" for using science to understand the world?
No, he doesn't (which is why I downvoted this comment, BTW). Luke says that even naturalistic philosophers exhibit these bad habits. He does not say that naturalism is a bad habit, or that it's a bad habit because it uses science to understand the world.
The point is very well-made. But it's not a philosophy-specific one. Mathematicians with a preferred ontology or axiomatization, theoretical physicists with a preferred nonstandard model or QM interpretation, also have to face up to the fact that neither intuitiveness nor counter-intuitiveness is a credible guide to truth — even in cases where there is no positive argument contesting the intuition. Some account is needed for why we should expect intuitions in the case in question to pick out truths.
Does it matter how much the diversity correlates with gender, society, etc.? If they're basing it on the fact that our intuitions are shared, and they aren't shared, what difference does it make if our gender is shared?
Bit of an implied false dichotomy, or at least an uncharitable reading.
You should get near universal agreement for stating that our intuitions are not strictly universally shared. Even the relevant quote you used qualified the "universally shared" with a "more or less".
Since we do share a cognitive architecture with many common elements, we should expect that - analogous to our various utility functions for which we surmise the existence of a CEV - there is a CEV-concept-analogon usable for philosophical intuitions, a sort of CEI. Whe...
Philosophy isn't the only discipline that uses intuition to adjudicate between theories. Even physicists rely on intuitive notions of "simplicity" when arguing for one model over another.
Following the sequence link at the top, I found this similar post, which has an impressive list of references. You include there this paper by Timothy Williamson. It seems to me an oversight you don't mention the paper's argument at all, as it's a sustained critique of the position you're representing.
The basic idea is that the kind of doubts about intuitions you raise are relevantly similar to more familiar forms of philosophical scepticism (scepticism about the external world, etc). I understand Williamson sees a dilemma: either they are mistaken for the...
In fairness there are potential issues here with signalling and culture. Although people might profess to believe X, in reality X just might be a more common type of cached knowledge, or X might be something that they say because they think it is socially useful, or as a permutation of those two they might have conditioned themselves to believe in X. Or, perhaps they interpret the meaning of "X" differently than others do, but they really mean the same thing underneath.
I think there should be a distinction between types of intuitions, or at least...
For one thing, we would never assume that people of all kinds would share our intuitions.
Here are some circumstances where we should:
First, we define "intuition" as a basic idea or principle that we need, and which can't be derived from anything else.
Secondly, we further stipulate that intuitions must be shared.
Thirdly, we use empirical philosophy to reject any purported intuitions that don't meet the last criterion.
Fourthly, if the result is a non-empty set, we should accept that there are shared intuitions.
I prefer to think of 'abstract' as 'not spatially extended or localized.'
I prefer to think of it as anything existing at least partly in mind, and then we can say we have an abstraction of an abstraction, or that one thing is more abstract than another (something from category theory being a pure abstraction, while a category like "dog" being less abstract because it connects with a pattern of atoms in reality). By their nature, abstractions are also universals, but things that actually exist, like the bee hive in front of me, aren't particulars at the concrete level. The specific bee hive in my mind that I'm imagining is a particular, and the "bee hive" that I'm seeing and interpreting as a bee hive in front of me is also a particular, but the bee hive itself is just a "pattern" of atoms.
Is that a fair summary?
I think that you're stuck in noun-land while I'm in verb-land, but I don't think noun-land is concrete (it's an abstraction).
What's relevant is whether it's useful to have separate concepts of 'the practice of science' vs. 'professional science,' the former being something even laypeople can participate in by adopting certain methodological standards. I think both concepts are useful. You seem to think that only 'professional science' is a useful concept, at least in most cases. Is that a fair summary?
Framing those concepts in terms of usefulness isn't helpful, I think. I'd simply say the laypeople are doing something different unless they're contributing to our body of knowledge. In which case, science as it is requires that those laypeople interact with science as it is (journals and such).
Counterfactuals don't make sense if you think of things as they are?
No, I mean thinking of someone as being scientific doesn't make sense if you think of science as it is, because e.g. the sixth grader at the science fair that we all call "scientific" isn't interacting with science as it is. We're taking some essential properties we pattern match in science as it is, and then we abstract them, and then we apply them by pattern matching.
I'm suggesting that many of those clustered properties, in particular many of the ones we most care about when we promote and praise things like 'science' and 'naturalism,' can occur in isolated individuals.
We can imagine an immortal human being on another planet replicating everything science has done on Earth thus far. So, yes, I think it can occur in isolated individuals, but that's only because the individual has taken on everything that science is, and not just something like "carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance."
If I'm going to apply an abstraction of what I praise in science to individuals, it's not "being scientific" or "doing science", it's "working with feedback." It's what programmers do, it's what engineers do, it's what mathematicians do, it's what scientists do, it's what people that effectively lose weight do, and so on. It's the kernel of thought most conducive to progress in any area.
Maybe we're just not approaching the problem at the same levels. When I ask about what the optimal way is to define our concepts, I'm trying to define them in a way that allows us to consistently ..
I think we are approaching the problem at the same level. I think I have optimally defined the concepts, and I think "behave in a way that predictably makes you better and better at doing good stuff" is what needs to be communicated and not "science: carefully collecting empirical data, and carefully reasoning about its predictive and transparently ontological significance." If we're going to add more content, then we should talk about how to effectively measure self-improvement, how to get solid feedback and so on. With that knowledge, I think a bunch of kids working together could rebuild science from the ground up.
If, in some cataclysm, all of scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis that all things are made of atoms — little particles that move around in perpetual motion, attracting each other when they are a little distance... -- Feynman
I'd pass on how important "behave in a way that predictably makes you better and better at doing good stuff" is.
I prefer to think of it as anything existing at least partly in mind
That's problematic, first, because it leaves mind itself in a strange position. And second because, if mathematical platonism (for example) were true, then there would exist abstract objects that are mind-independent.
We're taking some essential properties we pattern match in science as it is, and then we abstract them, and then we apply them by pattern matching.
You seem to be assuming the pattern-matching of this sort is a vice. If it's useful to mark the pattern in question, and we...
Consider these two versions of the famous trolley problem:
Here it is: a standard-form philosophical thought experiment. In standard analytic philosophy, the next step is to engage in conceptual analysis — a process in which we use our intuitions as evidence for one theory over another. For example, if your intuitions say that it is "morally right" to throw the switch in both cases above, then these intuitions may be counted as evidence for consequentialism, for moral realism, for agent neutrality, and so on.
Alexander (2012) explains:
In particular, notice that philosophers do not appeal to their intuitions as merely an exercise in autobiography. Philosophers are not merely trying to map the contours of their own idiosyncratic concepts. That could be interesting, but it wouldn't be worth decades of publicly-funded philosophical research. Instead, philosophers appeal to their intuitions as evidence for what is true in general about a concept, or true about the world.
In this sense,
But anyone with more than a passing familiarity with cognitive science might have bet in advance that this basic underlying assumption of a core philosophical method is... incorrect.
For one thing, philosophical intuitions show gender diversity. Consider again the Stranger and Child versions of the Trolley problem. It turns out that men are less likely than women to think it is morally acceptable to throw the switch in the Stranger case, while women are less likely than men to think it is morally acceptable to throw the switch in the Child case (Zamzow & Nichols 2009).
Or, consider a thought experiment meant to illuminate the much-discussed concept of knowledge:
When presented with this vignette, only 41% of men say that Peter "knows" there is a watch on the table, while 71% of women say that Peter "knows" there is a watch on the table (Starmans & Friedman 2012). According to Buckwalter & Stich (2010), Starmans & Friedman ran another study using a slightly different vignette with a female protagonist, and that time only 36% of men said the protagonist "knows," while 75% of women said she "knows."
The story remains the same for intuitions about free will. In another study reported in Buckwalter & Stich (2010), Geoffrey Holtman presented subjects with this vignette:
In this study, only 35% of men, but 63% of women, said a person in this world could be free to choose whether or not to murder someone.
Intuitions show not only gender diversity but also cultural diversity. Consider another thought experiment about knowledge (you can punch me in the face, later):
Only 26% of Westerners say that Bob "knows" that Jill drives an American car, while 56% of East Asian subjects, and 61% of South Asian subjects, say that Bob "knows."
Now, consider a thought experiment meant to elicit semantic intuitions:
When presented with this vignette, East Asians are more likely to take the "descriptivist" view of reference, believing that John "is referring to" Schmidt — while Westerners are more likely to take the "causal-historical" view, believing that John "is referring to" Gödel (Machery et al. 2004).
Previously, I asked:
For one thing, we would never assume that people of all kinds would share our intuitions.
Next post: Philosophy Needs to Trust Your Rationality Even Though It Shouldn't
Previous post: Living Metaphorically