As Jack mentioned and as Eliezer has repeatedly said, even if a certain question does not make sense, the meta-question "why do people think that it makes sense?" nearly always makes sense. So, to avoid going insane, you can approach your ethics courses as asking "what thought processes lead people to make certain statements about ethics and morality?". Admittedly, this altered question belongs in cognitive science rather than in ethics or philosophy, but your professors likely won't notice the difference.
Further, I happen to be a philosophy student right now, and I'm worried that the ideas presented in my ethics classes are misguided and "conceptually corrupt"; that is, the focus seems to be on defining terms over and over again, as opposed to taking into account the real effects of moral ideas in the actual world.
Second, how can I go about my ethics courses without going insane?
Good luck. Nearly everything I've seen written on morality is horribly wrong. I took a few ethics classes, and they were mostly junk. Maybe things are better in proper philosophy, but I doubt it.
as well as some of nyan's posts, but I felt even more confused afterwards.
That's worrying. Any particulars?
a guide as to which reductionist moral theories approximate what LW rationalists tend to think are correct.
If you mean things like "utilitarianism" and such, don't bother; no one has come up with one that works. I think the best approach is to realize that moral philosophy is a huge problem that hasn't been solved and that no one knows how to solve (I'm "working" on it, as are many others), and all "solutions" right now are jumping the gun and involve fundamental confusions.
Sorry if this seems overly aggressive, I am perhaps wrongfully frustrated right now.
It doesn't come across that way (to me at least). While you are being direct and assertive, your expressions are all about you: your confusion, your goals, and your experience. If you changed the emphasis away from yourself and onto the confusion or wrongness of others, and used the same degree of assertiveness, it would be a different matter entirely.
I'm a moral non-realist, and for that reason I find (and, when in college, found) normative moral theories to be really silly. Just as a class on theology seems pretty silly to someone who doesn't believe in God, so does normative moral theory to someone who doesn't think there is anything real to describe in normative theory. But I think such courses can still be productive if you translate all the material into natural/sociological terms. I.e., it can still be interesting to learn how people think about "God"-- not the least of which is that God bare...
no one has a robust and definitive theory of normative ethics-- and one should be extremely skeptical of anyone who claims to have one.
one should
Tee hee.
I've taken an introductory philosophy class and my experience was somewhat similar. I remember my head getting messed with somewhat in the short term, as not-so-worthwhile lines of thought chewed up more of my brainpower than they probably deserved, but I think in the long term this hasn't stuck with me. I ended up coping with that class the same way I used to cope with Sunday school: by using it as an opportunity to note real examples of the different failure modes I've read about, in real time. I don't think you have to worry too much about being corrupted.
which reductionist moral theories approximate what LW rationalists tend to think are correct
We tend to like Harry Frankfurt.
I realize that my ideas and questions can themselves already be "diseased". I'd like to try to be open to re-learning, though I understand the process may be painful. If you decide to help me, I only ask that you can handle the frustration of trying to teach someone who knows bad tricks.
(I have had the excruciating experience of trying to teach students who had previously learned the wrong skills. Granted, this was in a physical, martial-arts context, but it strikes me that mental "muscle memory" is just as harmful and stubborn as, if not more so than, actual muscle memory.)
I found myself extremely confused by nyan's recent posts on the matter, but I think I understood the other sequences you mentioned quite well (particularly Luke's). What, specifically, do you find yourself confused about?
As an aside, I'm also currently studying philosophy, and although I started out with a heavy focus on moral-phil, I've steadily found myself drawn to the more 'techy' fields and away from things like ethics...
A few thoughts, hopefully useful for you:
Deontological morality is simply an axiom. "You should do X!" End of discussion.
If you want to continue the discussion, for example by asking "why?" (why this specific axiom, and not any other), you are outside of its realm. The question does not make sense for a deontologist. At best they will provide you a circular answer: "You should do X, because you should do X!" An eloquent deontologist can make the circle larger than this, if you insist.
On the other hand, any other morality could be seen as an instance of deontological morality for a specific value of "X". For example "You should maximize the utility of the consequences of your choices" = consequentialism. (If you say that we should maximize the utility of consequences because of some Y, for example because it makes people happy, again the question is: why Y?)
So every normative morality has its axioms, and any evaluation of which axioms are better must already use some axioms. Even if we say that e.g. self-consistent axioms seem better than self-contradictory axioms, even that requires some axiom, and we could again ask: "Why?"
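To make the regress explicit, here is one way to sketch it (my own schematic; the norm labels N_k are not from the comment, just a convenient notation):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Each "why?" pushes justification back one level; the chain only
% terminates if some norm N_k is simply accepted as an axiom.
\[
  N_0 \xleftarrow{\text{why?}} N_1 \xleftarrow{\text{why?}} N_2
      \xleftarrow{\text{why?}} \cdots \xleftarrow{\text{why?}} N_k
  \qquad \text{($N_k$ taken as an axiom)}
\]
\end{document}
```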
There is no such thing as a mind starting from a blank slate and ever achieving anything other than a blank state, because... seriously, what mechanism would it use to make its first step? Same thing with morality: if you say that X is a reason to care about Y, you must already care about X, otherwise the reasoning will leave you unimpressed. (Related: Created Already In Motion.)
So it could be said that all moralities are axiomatic, and in this technical sense, all of them are equal. However, some of those axioms are more compatible with a human mind, so we judge them as "better" or "making more sense". It is a paradox that if we want to find a good normative morality, we must look at how human brains really work. And then if we find that human brains somehow prefer X, we can declare "You should do X" a good normative morality.
Please note that this is not circular. It does not mean "we should always do what we prefer", but rather "we prefer X; so now we forever fix this X as a constant; and we should do X even if our preferences later change (unless X explicitly says how our actions should change according to changes in our future preferences)".

As an example -- let's suppose that my highest value is pleasure, and I currently like chocolate, but I am aware that my taste may change later. Then my current preference X is that I should eat what I like, whether that is chocolate or something else. Even if today I can't imagine liking something else, I still wish to keep this option open. On the other hand, let's suppose I love other people, but I am aware that in the future I could accidentally become a psychopath who loves torturing people. Then my current preference X is that I should never torture people. I am aware of the possible change, but I disagree with it now. There is a difference between a possible development that I find morally acceptable and a possible development that I find morally unacceptable, and that difference is encoded in my morality axiom X.
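A toy sketch of that distinction (entirely my own illustration; all function and variable names are made up): an agent that fixes today's utility function as a constant keeps refusing torture even after its preferences drift, while an agent that consults its current preferences does not.

```python
# Toy illustration of "we forever fix this X as a constant" vs.
# following whatever one prefers at the moment. Names are hypothetical.

def fixed_axiom_score(action, frozen_utility):
    """Evaluate an action with the utility function frozen at adoption time."""
    return frozen_utility[action]

def current_preference_score(action, current_utility):
    """Evaluate an action with whatever the agent prefers right now."""
    return current_utility[action]

utility_today = {"help people": 10, "torture": -100}  # loves other people
utility_later = {"help people": 0, "torture": 50}     # hypothetical psychopath drift

frozen = dict(utility_today)  # the axiom X, fixed as a constant

for action in ("help people", "torture"):
    print(f"{action}: fixed={fixed_axiom_score(action, frozen)}, "
          f"current={current_preference_score(action, utility_later)}")

# The fixed-axiom agent still scores torture at -100 after the drift,
# while the current-preference agent now scores it at +50.
```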
The preferences should be examined carefully; I don't know how to say it exactly, but even if I think I want something now, I may be mistaken. For example, I can be mistaken about some facts, which can lead me to a wrong conclusion about my preferences. So I would prefer a preference-extraction process which would correct my mistakes and would instead select the things I would prefer if I knew all the facts and had enough intelligence to understand them all. (Related: Ideal Advisor Theories and Personal CEV.)
Summary: To have a normative morality, we need to choose an axiom. But an arbitrary axiom could result in a morality we would consider evil or nonsensical. To consider it good, we must choose an axiom reflecting what humans already want. (Or, for an individual morality, what the individual wants.) This reflection should assume more intelligence and better information than we already have.
Deontological morality is simply an axiom. "You should do X!" End of discussion.
This is not true. Deontological systems have modes of inference, e.g.

P1) You should not kill people.
P2) Sally is a person.
C) You should not kill Sally.

would be totally legitimate to a deontologist.
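For what it's worth, that inference is easy to formalize. A minimal sketch in Lean 4 (my own encoding, not anything from the thread; `Ought`, `kills`, and the names are assumed), where P2 is encoded by giving Sally the type `Person`:

```lean
-- Hypothetical encoding of the syllogism above.
variable (Person : Type)                  -- domain of persons
variable (Ought : Prop → Prop)            -- primitive "it ought to be that ..."
variable (kills : Person → Person → Prop)

-- P1) You should not kill people  (the deontological axiom)
-- P2) Sally is a person           (encoded by `sally : Person`)
-- C)  You should not kill Sally
example (you sally : Person)
    (P1 : ∀ p : Person, Ought (¬ kills you p)) :
    Ought (¬ kills you sally) :=
  P1 sally
```

The point of the formalization is the same as the reply's: once the axiom P1 is granted, the conclusion follows by ordinary inference, so deontology is not "end of discussion" after the axiom is stated.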
Hi everyone,
If this has been covered before, I apologize for the clutter and ask to be redirected to the appropriate article or post.
I am increasingly confused about normative theories. I've read both Eliezer's and Luke's metaethics sequences as well as some of nyan's posts, but I felt even more confused afterwards. Further, I happen to be a philosophy student right now, and I'm worried that the ideas presented in my ethics classes are misguided and "conceptually corrupt"; that is, the focus seems to be on defining terms over and over again, as opposed to taking into account the real effects of moral ideas in the actual world.
I am looking for two things: first, a guide as to which reductionist moral theories approximate what LW rationalists tend to think are correct. Second, how can I go about my ethics courses without going insane?
Sorry if this seems overly aggressive, I am perhaps wrongfully frustrated right now.
Jeremy