
DanArmak comments on Philosophy professors fail on basic philosophy problems - Less Wrong Discussion

16 Post author: shminux 15 July 2015 06:41PM

You are viewing a comment permalink.



Comment author: DanArmak 16 July 2015 03:06:10PM 11 points

Overcoming these biases is very easy if you have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of how the details your moral theory doesn't care about are presented.

Mathematicians aren't biased by being told "I colored 200 of 600 balls black" vs. "I colored all but 400 of 600 balls black", because the question "how many balls are black" has a correct answer in the model used. This is true even if the model is unique to the mathematician answering the question: the most important thing is consistency.
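The point can be sketched in code (a hypothetical illustration; the `black_balls` function and its framing labels are mine, not from the comment): once both phrasings are normalized to the same underlying count, any deterministic rule downstream must give the same answer.

```python
def black_balls(total, description):
    """Normalize two framings of the same fact to one number.

    description is a (kind, n) pair:
      ("colored", n)  -> "I colored n of `total` balls black"
      ("all_but", n)  -> "I colored all but n of `total` balls black"
    """
    kind, n = description
    if kind == "colored":
        return n
    if kind == "all_but":
        return total - n
    raise ValueError(f"unknown framing: {kind}")

# Both framings describe the same state of the world, so any rule
# that consumes the normalized count cannot distinguish them.
assert black_balls(600, ("colored", 200)) == 200
assert black_balls(600, ("all_but", 400)) == 200
```

The framing effect only bites when the answerer skips the normalization step and reasons directly from the surface wording.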

If a moral theory can't prove the correctness of an answer to a very simple problem - a choice between just two alternatives, trading off clearly morally significant stakes (lives), without any complications (e.g. the different people who may die don't have any distinguishing features) - then it probably doesn't give clear answers to most other problems either, so what use is it?

Comment author: [deleted] 16 July 2015 03:11:00PM 0 points

Moral theories predict feelings; mathematical theories predict different things. Moral philosophy assumes you already know genocide is wrong, and it tries to figure out how your subconscious generates this feeling: http://lesswrong.com/lw/m8y/dissolving_philosophy/

Comment author: DanArmak 16 July 2015 03:22:54PM 3 points

Moral theories predict feelings

Are you saying that because people are affected by a bias, a moral theory that correctly predicts their feelings must be affected by the bias in the same way?

This would preclude (or falsify) many actual moral theories on the grounds that most people find them unintuitive or simply wrong. I think most moral philosophers aren't looking for this kind of theory, because if they were, they would agree much more by now: it shouldn't take thousands of years to empirically discover how average people feel about proposed moral problems!

Comment author: [deleted] 17 July 2015 07:16:06AM *  0 points

the same way

No - the feelings are not a truth-seeking device, so bias is not applicable: they are part of the terrain.

it shouldn't take thousands of years to empirically discover how average people feel about proposed moral problems!

It's not as if they were working on it every day for thousands of years. E.g. in the Christian period, what God said about morals mattered more than how people felt about them. There were fairly big gaps: the classical era and the modern era together add up to a few hundred years, with all sorts of gaps in between.

IMHO the core issue is that our moral feelings are inconsistent, and this is why we need philosophy. If someone murders someone in a fit of rage, he still feels that most murders committed by most people are wrong, and maybe he regrets his own later on, but in that moment he did not feel it. Even public opinion can have such wide mood swings that you cannot just reduce morality to a popularity contest - yet in essence it is one, though a more abstract popularity contest. This is why IMHO philosophy is trying to algorithmize moral feelings.

Comment author: DanArmak 17 July 2015 12:06:00PM 0 points

IMHO the core issue is that our moral feelings are inconsistent, and this is why we need philosophy. If someone murders someone in a fit of rage, he still feels that most murders committed by most people are wrong, and maybe he regrets his own later on, but in that moment he did not feel it. Even public opinion can have such wide mood swings that you cannot just reduce morality to a popularity contest - yet in essence it is one, though a more abstract popularity contest. This is why IMHO philosophy is trying to algorithmize moral feelings.

So is philosophy trying to describe moral feelings, inconsistent and biased as they are? Or is it trying to propose explicit moral rules and convince people to follow them even when they go against their feelings? Or both?

If moral philosophers are affected by presentation bias, that means they aren't reasoning according to explicit rules. Are they trying to predict the moral feelings of others (whose? the average person's)?

Comment author: TheAncientGeek 17 July 2015 03:52:39PM *  0 points

If their meta-level reasoning, their actual job, hasn't told them which rules to follow, or has told them not to follow rules, why should they follow rules?

Comment author: DanArmak 17 July 2015 04:44:45PM 0 points

By "rules" I meant what the parent comment referred to as trying to "algorithmize" moral feelings.

Moral philosophers are presumably trying to answer some class of questions. These may be "what is the morally right choice?" or "what moral choice do people actually make?" or some other thing. But whatever it is, they should be consistent. If a philosopher might give a different answer every time the same question is asked of them, then surely they can't accomplish anything useful. And to be consistent, they must follow rules, i.e. have a deterministic decision process.

These rules may not be explicitly known to themselves, but if they are in fact consistent, other people could study the answers they give and deduce these rules. The problem presented by the OP is that they are in fact giving inconsistent answers; either that, or they all happen to disagree with one another in just the way that the presentation bias would predict in this case.

A possible objection is that the presentation is an input which is allowed to affect the (correct) response. But every problem statement has some irrelevant context. No one would argue that a moral problem might have different answers between 2 and 3 AM, or that the solution to a moral problem should depend on the accent of the interviewer. And to understand what the problem being posed actually is (i.e. to correctly pose the same problem to different people), we need to know what is and isn't relevant.

In this case, the philosophers act as if the choice of phrasing "200 of 600 live" vs. "400 of 600 die" is relevant to the problem. If we accepted this conclusion, we might well ask ourselves what else is relevant. Maybe one shouldn't be a consequentialist between 2 and 3 AM?
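The invariance claim here can be illustrated with a small sketch (hypothetical; `expected_survivors` stands in for any fixed, deterministic decision rule - it is not anything the commenters propose). Both phrasings describe the same outcome, so a rule that scores the normalized outcome cannot tell them apart:

```python
def expected_survivors(outcomes):
    """Score an option by expected lives saved.

    outcomes: list of (probability, survivors) pairs.
    """
    return sum(p * s for p, s in outcomes)

TOTAL = 600

# Two framings of the same certain outcome:
gain_frame = [(1.0, 200)]          # "200 of 600 will live"
loss_frame = [(1.0, TOTAL - 400)]  # "400 of 600 will die"

# The rule assigns both framings the same score, so the choice
# it produces cannot depend on which phrasing was used.
assert expected_survivors(gain_frame) == expected_survivors(loss_frame)
```

A respondent whose answers differ between the two framings is, by construction, not applying any rule of this form - which is the inconsistency the comment is pointing at.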

Comment author: TheAncientGeek 17 July 2015 06:54:50PM *  1 point

You haven't shown that they are producing inconsistent theories in their published work. The result only shows that, like scientists, individual philosophers can't live up to their own cognitive standards in certain situations.

Comment author: DanArmak 17 July 2015 09:11:12PM 0 points

This is true. But it is significant evidence that they are inconsistent in their work too, absent an objective standard by which their work can be judged.

Comment author: Luke_A_Somers 16 July 2015 09:43:40PM 0 points

It can be hard to find a formalization of the empirical systems, though, especially since formalizing is going to be very complicated and muddy in a lot of cases. That covers a lot of "... and therefore, the right answer emerges". Not all, to be sure, but a fair amount.

Comment author: Creutzer 18 July 2015 07:41:17AM 1 point

Moral theories predict feelings

No. This is what theories of moral psychology do. Philosophical ethicists do not consider themselves to be in the same business.