As Jack mentioned and as Eliezer repeatedly said, even if a certain question does not make sense, the meta-question "why do people think that it makes sense?" nearly always makes sense. So, to avoid going insane, you can approach your ethics courses as "what thought process makes people make certain statements about ethics and morality?". Admittedly, this altered question belongs in cognitive science, rather than in ethics or philosophy, but your professors likely won't notice the difference.
Further, I happen to be a philosophy student right now, and I'm worried that the ideas presented in my ethics classes are misguided and "conceptually corrupt"; that is, the focus seems to be on defining terms over and over again rather than on the real effects of moral ideas in the actual world.
Second, how can I go about my ethics courses without going insane?
Good luck. Nearly everything I've seen written on morality is horribly wrong. I took a few ethics classes, and they are mostly junk. Maybe things are better in proper philosophy, but I doubt it.
as well as some of nyan's posts, but I felt even more confused afterwards.
That's worrying. Any particulars?
a guide as to which reductionist moral theories approximate what LW rationalists tend to think are correct.
If you mean things like "utilitarianism" and such, don't bother, no one has come up with one that works. I think the best approach is to realize that moral philosophy is a huge problem that hasn't been solved and no one knows how to solve (I'm "working" on it, as are many others), and all "solutions" right now are jumping the gun, and involve fundamental confusi...
Sorry if this seems overly aggressive, I am perhaps wrongfully frustrated right now.
It doesn't come across that way (to me at least). While you are being direct and assertive, your expressions are all about you: your confusion, your goals, and your experience. If you changed the emphasis away from yourself and onto the confusion or wrongness of others, and used the same degree of assertiveness, it would be a different matter entirely.
I'm a moral non-realist, and for that reason I find (and, when in college, found) normative moral theories to be really silly. Just as a class on theology seems pretty silly to someone who doesn't believe in God, so does normative moral theory to someone who doesn't think there is anything real for a normative theory to describe. But I think such courses can still be productive if you translate all the material into natural/sociological terms. I.e., it can still be interesting to learn how people think about "God"-- not the least of which is that God bare...
no one has a robust and definitive theory of normative ethics-- and one should be extremely skeptical of anyone who claims to have.
one should
Tee hee.
A few thoughts, hopefully useful for you:
Deontological morality is simply an axiom. "You should do X!" End of discussion.
If you want to continue the discussion, for example by asking "why?" (why this specific axiom, and not any other), you are outside of its realm. The question does not make sense for a deontologist. At best they will provide you a circular answer: "You should do X, because you should do X!" An eloquent deontologist can make the circle larger than this, if you insist.
On the other hand, any other morality could...
I've taken an introductory philosophy class and my experience was somewhat similar. I remember my head getting messed with somewhat in the short term, as not-so-worthwhile lines of thought chewed up more of my brainpower than they probably deserved, but I think in the long term this hasn't stuck with me. I ended up coping with that class the same way I used to cope with Sunday school: by using it as an opportunity to note real examples of the different failure modes I've read about, in real time. I don't think you have to worry too much about being corrupted.
which reductionist moral theories approximate what LW rationalists tend to think are correct
We tend to like Harry Frankfurt.
I realize that my ideas and questions can themselves already be "diseased". I'd like to try to be open to re-learn, though I understand the process may be painful. If you decide to help me, I only ask that you can handle the frustration of trying to teach someone who knows bad tricks.
(I have had the excruciating experience of trying to teach students who had previously learned the wrong skills. Granted, this was in a physical, martial-arts context, but it strikes me that mental "muscle memory" is just as harmful and stubborn as actual muscle memory, if not more so.)
I found myself extremely confused by nyan's recent posts on the matter, but I think I understood the other sequences you mentioned quite well (particularly Luke's). What, specifically, do you find yourself confused about?
As an aside, I'm also currently studying philosophy, and although I started out with a heavy focus on moral-phil, I've steadily found myself drawn to the more 'techy' fields and away from things like ethics...
It seems like you are conflating human desires with morality.
I am probably using the words incorrectly, because I don't know how philosophers define them, or even whether they can agree on a definition. I essentially used "morality" to mean "any system which says what you should do", and added the observation that among all such systems, most will not fit your intuition of morality. Why? Because they recommend things you find repulsive or just stupid. But this is a fact about you, or about humans in general, so in order to find "a system which says what you should do, one that makes sense and is not repulsive", you must study humans. Specifically, human desires.
In other words, I define "morality" as "a system of 'shoulds' that humans can agree with".
Paperclip maximizers, capable of reflexivity and knowing game theory, could derive their own "system of 'shoulds'" they could agree with. It could include rules like "don't destroy your neighbor's two paperclips just to build one yourself", which would be similar to our morality, but that's because the game theory is the same.
But it would be game theory plus paperclip-maximizer desires. So even if it contained some concepts of friendship and non-violence (cooperating with each other in iterated Prisoner's Dilemmas) that would make all human hippies happy, given the choice of "sending all sentient beings into an eternal hell of maximum suffering in exchange for a machine that tiles the universe with paperclips", that would seem to them like a great idea. Don't ever forget that when dealing with paperclip maximizers.
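The point that cooperation falls out of the payoff structure rather than out of shared values can be made concrete with a small simulation. This is a minimal sketch under assumed standard Prisoner's Dilemma payoffs (the strategy names and numbers are illustrative, not from the comment): two agents with utterly alien terminal values still cooperate, as long as both play a reciprocating strategy like tit-for-tat.

```python
# Standard PD payoff matrix (assumed values): (my_payoff, their_payoff)
# indexed by (my_move, their_move), where "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect and exploit me
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs for two strategies over an iterated PD."""
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # each side's record of the opponent's moves
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# Two reciprocating "paperclip maximizers" cooperate the whole way through:
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
# Against an unconditional defector, cooperation collapses after one round:
print(play(tit_for_tat, always_defect))  # (99, 104)
```

Nothing in the simulation refers to what the agents terminally value; the "friendly" behavior is entirely a product of the repeated-game payoff structure, which is exactly why it offers no protection in one-shot choices like the hell-for-paperclips trade above.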
what happens if [...] I am a psychopath now and realize I may be disposed to become a lover of people later?
If I am a psychopath now, I don't give a **** about morality, do I? So I decide according to whatever psychopaths consider important. (I guess it would be according to my whim at the moment.)
In other words, I define "morality" as "a system of 'shoulds' that humans can agree with".
If you want a name for your position on this (which, as far as I can tell, is very well put), a suitable philosophical equivalent is Moral Contractualism, a la Thomas Scanlon in "What We Owe To Each Other." He defines certain kinds of acts as morally wrong thus:
An act is wrong if and only if any principle that permitted it would be one that could reasonably be rejected by people moved to find principles for the general regulation of behaviour that others, similarly motivated, could not reasonably reject.
Hi everyone,
If this has been covered before, I apologize for the clutter and ask to be redirected to the appropriate article or post.
I am increasingly confused about normative theories. I've read both Eliezer's and Luke's metaethics sequences, as well as some of nyan's posts, but I felt even more confused afterwards. Further, I happen to be a philosophy student right now, and I'm worried that the ideas presented in my ethics classes are misguided and "conceptually corrupt"; that is, the focus seems to be on defining terms over and over again rather than on the real effects of moral ideas in the actual world.
I am looking for two things: first, a guide as to which reductionist moral theories approximate what LW rationalists tend to think are correct. Second, how can I go about my ethics courses without going insane?
Sorry if this seems overly aggressive, I am perhaps wrongfully frustrated right now.
Jeremy