In the case you describe, the "HSC content" is just that Jesus is magic. So there's no argument being offered at all. Now, if they offer an actual argument, from some other p to the conclusion that Jesus is magic, then we can assess this argument like any other. How the arguer came to believe the original premise p is not particularly relevant. What you call the "defeater critique", I call the genetic fallacy.
It's true that an interlocutor is never going to be particularly moved by an argument that starts from premises he doesn't ac...
Yes, that's the idea. I mean, (2) is plausibly true if the "because" is meant in a purely causal, rather than rationalizing, sense. But we don't take the fact that we stand in a certain psychological relation to this content (i.e., intuiting it) to play any essential justifying role.
Thanks for following up on this issue! I'm looking forward to hearing the rest of your thoughts.
I'm not sure what you have in mind here. We need to distinguish (i) the referent of a concept from (ii) its reference-fixing "sense" or functional role. The way I understood your view, the reference-fixing story for moral terms involves our (idealized) desires. But the referent is "rigid" in the sense that it's picking out the content of our desires: the thing that actually fills the functional role, rather than the role-property itself.
Since our desires typically aren't themselves about our desires, it will turn out, on this stor...
This all sounds good to me; but is there a way to say the above while tabooing "reference" and avoiding talk of things "referring" to other things? Reference isn't ontologically basic, so what does it reduce to?
Basically, the main part that would worry me is a phrase like, "there's a story to be told about how our moral concepts came to pick out these particular worldly properties" which sounds on its face like, "There's a story to be told about how successorship came to pick out the natural numbers" whereas wh...
Correct. Eliezer has misunderstood rigid designation here.
Jonathan Ichikawa, 'Who Needs Intuitions'
Elizabeth Harman, 'Is it Reasonable to "Rely on Intuitions" in Ethics?'
Timothy Williamson, 'Evidence in Philosophy', ch. 7 of The Philosophy of Philosophy.
The debate over intuitions is one of the hottest in philosophy today
But it -- at least the "debate over intuitions" that I'm most familiar with -- isn't about whether intuitions are reliable, but rather over whether the critics have accurately specified the role they play in traditional philosophical methodology. That is, the standard response to experimentalist critics (at least, in my corner of philosophy) is not to argue that intuitions are "reliable evidence", but rather to deny that we are using them as evidence at all. On thi...
And this responds to what I said... how?
I can build an agent that tracks how many sheep are in the pasture using an internal mental bucket, and keeps looking for sheep until they're all returned. From an outside standpoint, this agent's mental bucket is meaningful because there's a causal process that correlates it to the sheep, and this correlation is made use of to steer the world into futures where all sheep are retrieved. And then the mysterious sensation of about-ness is just what it feels like from the inside to be that agent, with a side order of explicitly modeling both yourself and th...
It's a nice parable and all, but it doesn't seem particularly responsive to my concerns. I agree that we can use any old external items as tokens to model other things, and that there doesn't have to be anything "special" about the items we make use of in this way, except that we intend to so use them. Such "derivative intentionality" is not particularly difficult to explain (nor is the weak form of "natural intentionality" in which smoke "means" fire, tree rings "signify" age, etc.). The big question is...
This is somewhat absurd
More than that, it's obviously incoherent. I assume your point is that the same should be said of zombies? Probably reaching diminishing returns in this discussion, so I'll just note that the general consensus of the experts in conceptual analysis (namely, philosophers) disagrees with you here. Even those who want to deny that zombies are metaphysically possible generally concede that the concept is logically coherent.
Well, you could talk about how she is covered with soft fur, but it's possible to imagine something fuzzy and not covered with fur, or something covered with fur but not fuzzy. Because it's possible to imagine these things, clearly fuzziness must be non-physical.
Erm, this is just poor reasoning. The conclusion that follows from your premises is that the properties of fuzziness and being-covered-in-fur are distinct, but that doesn't yet make fuzziness non-physical, since there are obviously other physical properties besides being-covered-in-fur that it ...
I'm not sure I follow you. Why would you need to analyse "thinking" in order to "get a start on building AI"? Presumably it's enough to systematize the various computational algorithms that lead to the behavioural/functional outputs associated with intelligent thought. Whether it's really thought, or mere computation, that occurs inside the black box is presumably not any concern of computer scientists!
I couldn't help one who lacked the concept. But assuming that you possess the concept, and just need some help in situating it in relation to your other concepts, perhaps the following might help...
Our thoughts (and, derivatively, our assertions) have subject-matters. They are about things. We might make claims about these things, e.g. claiming that certain properties go together (or not). When I write, "Grass is green", I mean that grass is green. I conjure in my mind's eye a mental image of blades of grass, and their colour, in the image, ...
You can probably give a functionalist analysis of computation. I doubt we can reductively analyse "thinking" (at least if you taboo away all related mentalistic terms), so this strikes me as a bedrock case (again, like "qualia") where tabooing away the term (and its cognates) simply leaves you unable to talk about the phenomenon in question.
But what are brains thinking, if not thoughts?
Right, according to epiphenomenalists, brains aren't thinking (they may be computing, but syntax is not semantics).
If it doesn't appear in the causal diagram, how could we tell that we're not living in a totally meaningless universe?
Our thoughts are (like qualia) what we are most directly acquainted with. If we didn't have them, there would be no "we" to "tell" anything. We only need causal connections to put us in contact with the world beyond our minds.
Meaning doesn't seem to be a thing in the way that atoms and qualia are, so I'm doubtful that the causal criterion properly applies to it (similarly for normative properties).
(Note that it would seem rather self-defeating to claim that 'meaning' is meaningless.)
In my experience, most philosophers are actually pretty motivated to avoid the stigma of "epiphenomenalism", and try instead to lay claim to some more obscure-but-naturalist-friendly label for their view (like "non-reductive physicalism", "anomalous monism", etc.)
FWIW, my old post 'Zombie Rationality' explores what I think the epiphenomenalist should say about the worry that "the upper-tier brain must be thinking meaningless gibberish when the upper-tier lips [talk about consciousness]"
One point to flag is that from an epiphenomenalist's perspective, mere brains never really mean anything, any more than squiggles of ink do; any meaning we attribute to them is purely derivative from the meaning of appropriately-related thoughts (which, on this view, essentially involve qualia).
Another thing to flag is that...
Nope. Epiphenomenalism is motivated by the thought that you could (conceivably, in a world with different laws from ours) have the same bundles of neurons without any consciousness. You couldn't conceivably have the same bundles of trees not be a forest.
Did this ever happen? (If so, updating the OP with links would be very helpful.)
Thanks, that's helpful. Two (related) possible replies for the afterlife believer:
(1) The Y-component is replaceable: brains play the Y role while we're alive, but we get some kind of replacement device in the afterlife (which qualifies as "us", rather than a "replica of us", due to persisting soul identity).
(2) The brain is only needed for physical expressions of mentality ("talking", etc.), and we revert to purely non-physical mental functioning in the afterlife.
These are silly views, of course, but I'm not yet convinced tha...
Did you miss the "N.B." at the end of my post?
I agree that the soul hypothesis is not generally worth taking seriously. What I'm denying is that the existence of brain damage is good evidence for this.
Well... the existence of brain damage, in and of itself, is not evidence for this, I agree.
That is, if I lived in a world where (for example) brain damage existed but cognitive impairment didn't follow from it, in much the same sense that skeletal damage does not result in cognitive impairment in the actual world, the mere existence of brain damage would not tell us much that's relevant to the soul hypothesis one way or the other. (And, relatedly, in the real world I don't think the existence of skeletal damage is good evidence for or against the soul hyp...
That's surely going to depend on the details of the non-naturalist view. Epiphenomenalism, for example, makes all the same empirical predictions as physicalism. (Though it might be harder to combine with a "soul" view -- it goes more naturally with property dualism than substance dualism.)
But even Cartesian Interactionists, who see the brain as an "intermediary" between soul and body, should presumably expect brain damage to cause the body to be less responsive to the soul (just as in the radio analogy).
Or are you thinking of "no...
The tooth fairy example gets a variety of responses
Seriously? I've never heard anyone insist that the tooth fairy really exists (in the form of their mother). It would seem most contrary to common usage (in my community, at least) to use 'Tooth Fairy' to denote "whoever replaced the tooth under my pillow with a coin". The magical element is (in my experience) treated as essential to the term and not a mere "connotation".
I've heard of the saying you mention, but I think you misunderstand people when you interpret it literally. My ...
No, you learned that the tooth fairy doesn't exist, and that your mother was instead responsible for the observable phenomena that you had previously attributed to the tooth fairy.
(It's a good analogy though. I do think that claiming that morality exists "as a computation" is a lot like claiming that the tooth fairy really exists "as one's mother".)
I'm not arguing for moral realism here. I'm arguing against metaethical reductionism, which leaves open either realism OR error theory.
For all I've said, people may well be mistaken when they attribute normative properties to things. That's fine. I'm just trying to clarify what it is that people are claiming when they make moral claims. This is conceptual analysis, not metaphysics. I'm pointing out that what you claim to be the meaning of 'morality' isn't what people mean to be talking about when they engage in moral discourse. I'm not presupposing t...
Purported debates about the true meaning of "ought" reveal that everyone has their own balancing equation, and the average person thinks all others are morally obliged by objective morality to follow his or her equation.
You're confusing metaethics and first-order ethics. Ordinary moral debates aren't about the meaning of "ought". They're about the first-order question of which actions have the property of being what we ought to do. People disagree about which actions have this property. They posit different systematic theories (o...
That asserting there are moral facts is incompatible with the fact that people disagree about what they are?
No, I think there are moral facts and that people disagree about what they are. But such substantive disagreement is incompatible with Eliezer's reductive view on which the very meaning of 'morality' differs from person to person. It treats 'morality' like an indexical (e.g. "I", "here", "now"), which obviously doesn't allow for real disagreement.
Compare: "I am tall." "No, I am not tall!" Such ...
What would you say to someone who does not share your intuition that such "objective" morality likely exists?
I'd say: be an error theorist! If you don't think objective morality exists, then you don't think that morality exists. That's a perfectly respectable position. You can still agree with me about what it would take for morality to really exist. You just don't think that our world actually has what it takes.
One related argument is the Open Question Argument: for any natural property F that an action might have, be it promotes my terminal values, or is the output of an Eliezerian computation that models my coherent extrapolated volition, or whatever the details might be, it's always coherent to ask: "I agree that this action is F, but is it good?"
But the intuitions that any metaethics worthy of the name must allow for fundamental disagreement and fallibility are perhaps more basic than this. I'd say they're just the criteria that we (at least, many ...
I'd say they're just the criteria that we (at least, many of us) have in mind when insisting that any morality worthy of the name must be "objective", in a certain sense.
What would you say to someone who does not share your intuition that such "objective" morality likely exists?
My main problem with objective morality is that while it's hard to deny that there seem to be mind-independent moral facts like "pain is morally bad", there doesn't seem to be enough such facts to build an ethical system out of them. What natural ph...
The part about computation doesn't change the fundamental structure of the theory. It's true that it creates more room for superficial disagreement and fallibility (of similar status to disagreements and fallibility regarding the effective means to some shared terminal values), but I see this as an improvement in degree and not in kind. It still doesn't allow for fundamental disagreement and fallibility, e.g. amongst logically omniscient agents.
(I take it to be a metaethical datum that even people with different terminal values, or different Eliezerian &...
malice implies poor motivations. Rather, the egalitarian instinct appears to be natural to most people.
Why the "rather"? How 'natural' an instinct is implies nothing about its moral quality.
It's not entirely clear what you're asking. Two possibilities, corresponding to my above distinction, are:
(1) What (perhaps more general) normatively significant feature is possessed by [saving lives for $500 each] that isn't possessed by [saving mosquitoes for $2000 each]? This would just be to ask for one's fully general normative theory: a utilitarian might point to the greater happiness that would result from the former option. Eventually we'll reach bedrock ("It's just a brute fact that happiness is good!"), at which point the only remain...
People claim all sorts of justifications for 'ought' statements (aka normative statements).
You still seem to be conflating justification-giving properties with the property of being justified. Non-naturalists emphatically do not appeal to non-natural properties to justify our ought-claims. When explaining why you ought to give to charity, I'll point to various natural features -- that you can save a life for $500 by donating to VillageReach, etc. It's merely the fact that these natural features are justifying, or normatively important, which is non-natural.
Thanks, this is helpful. I'm interested in your use of the phrase "source of normativity" in:
The only source of normativity I think exists is the hypothetical imperative
This makes it sound like there's a new thing, normativity, that arises from some other thing (e.g. desires, or means/ends relationships). That's a very realist way of talking.
I take it that what you really want to say is something more like, "The only kind of 'normativity'-talk that's naturalistically reducible and hence possibly true is hypothetical imperatives -- when th...
Thanks for this reply. I share your sense that the word 'moral' is unhelpfully ambiguous, which is why I prefer to focus on the more general concept of the normative. I'm certainly not going to stipulate that motivational internalism is true of the normative, though it does seem plausible that there's something irrational about someone who acknowledges that they really ought (all things considered) to phi and yet fails to do so. (I don't doubt that it's possible for someone to form the judgment without any corresponding motivation though, as it's always p...
That doesn't really answer my question. Let me try again. There are two things you might mean by "mind dependent".
(1) You might just mean "makes some reference to the mind". So, for example, the necessary truth that "Any experience of red is an experience of colour" would also count as "mind-dependent" in this sense. (This seems a very misleading usage though.)
(2) More naturally, "mind dependent" might be taken to mean that the truth of the claim depends upon certain states of mind actually existing. But "pain is bad for people" (like my example above) does not seem to be mind-dependent in this sense.
Which did you have in mind?
As I argue elsewhere:
"Hypothetical imperatives thus reveal patterns of normative inheritance. But their highlighted 'means' can't inherit normative status unless the 'end' in question had prior normative worth. A view on which there are only hypothetical imperatives is effectively a form of normative nihilism -- no more productive than an irrigation system without any water to flow through it."
(Earlier in the post explains why hypothetical imperatives aren't reducible to mere empirical statements of a means-ends relationship.)
I tentatively favour...
I'm inclined not to write about moral non-naturalism because I'm writing this stuff for Less Wrong, where most people are physicalists
Physicalists could (like Mackie) accept the non-naturalist's account of what it would take for something to be genuinely normative, and then simply deny that there are any such properties in reality. I'm much more sympathetic to this hard-headed "error theory" than to the more weaselly forms of naturalism.
I was thinking of "fundamental" concepts as those that are most basic, and not reducible to (or built up out of) other, more basic, concepts. I do think that normative concepts are conceptually isolated, i.e. not reducible to non-normative concepts, and that's really the more relevant feature so far as the OQA is concerned. But by 'fundamental normative concept' I meant a normative concept that is not reducible to any other concepts at all. They are the most basic, or bedrock, of our normative concepts.
Just to clarify: By 'pain' I mean the hurtful aspect of the sensation, not the base sensation that could remain in the absence of its hurting.
In your first paragraph you describe people who take pain to be instrumentally useful in some circumstances, to bring about some other end (e.g. healing) which is itself good. I take no stand on that empirical issue. I'm talking about the crazy normative view that pain is itself (i.e. non-instrumentally) good.
Yes, I was imagining someone who thought that unmitigated pain and suffering was good for everyone, themselves included. Such a person is nuts, but hardly inconceivable.
It's not analytic that pain is bad. Imagine some crazy soul who thinks that pain is intrinsically good for you. This person is deeply confused, but their error is not linguistic (as if they asserted "bachelors are female"). They could be perfectly competent speakers of the English language, and even logically omniscient. The problem is that such a person is morally incompetent. They have bizarrely mistaken ideas about what things are good (desirable) for people, and this is a substantive (synthetic), not merely analytic, matter.
Perhaps the t...
If we taboo and reduce, then the question of "...but is it good?" is out of place. The reply is: "Yes it is, because I just told you that's what I mean to communicate when I use the word-tool 'good' for this discussion. I'm not here to debate definitions; I'm here to get something done."
I just wanted to flag that a non-reductionist moral realist (like myself) is also "not here to debate definitions". See my post on The Importance of Implications. This is compatible with thinking well of the Open Question Argument, if we t...
Tangentially:
facts about the well-being of conscious creatures are mind-dependent facts
How so? (Note that a proposition may be in some sense about minds without its truth value being mind-dependent. E.g. "Any experience of red is an experience of colour" is true regardless of what minds exist. I would think the same is true of, e.g., "All else equal, pain is bad for the experiencer.")
It's confusing that you use the word 'meta-ethics' when talking about plain first-order ethics.
Non-cognitivists, in contrast, think that moral discourse is not truth-apt.
Technically, that's not quite right (except for the early emotivists, etc.). Contemporary expressivists and quasi-realists insist that they can capture the truth-aptness of moral discourse (given the minimalist's understanding that to assert 'P is true' is equivalent to asserting just 'P'). So they will generally explain what's distinctive about their metaethics in some other way, e.g. by appeal to the idea that it's our moral attitudes rather than their contents that have a certain central explanatory role...
Depending on what you mean by 'direct access', I suspect that you've probably misunderstood. But judging by the relatively low karma levels of my recent comments, going into further detail would not be of sufficient value to the LW community to be worth the time.
How do you know that "people think zombies are conceivable"? Perhaps you will respond that we can know our own beliefs through introspection, and the inferential chain must stop somewhere. My view is that the relevant chain is merely like so:
zombies are conceivable => physicalism is false
I claim that we may non-inferentially know some non-psychological facts, when our beliefs in said facts meet the conditions for knowledge (exactly what these are is of course controversial, and not something we can settle in this comment thread).
It's not unusual to count "thwarted aims" as a positive bad of death (as I've argued for myself in my paper Value Receptacles), which at least counts against replacing people with only slightly happier people (though still leaves open that it may be worthwhile to replace people with much happier people, if the extra happiness is sufficient to outweigh the harm of the first person's thwarted ends).