
TheAncientGeek comments on Philosophy professors fail on basic philosophy problems - Less Wrong Discussion

16 Post author: shminux 15 July 2015 06:41PM




Comment author: TheAncientGeek 16 July 2015 12:47:13PM 4 points [-]

And the other issue is that overcoming those biases is regarded as all but impossible by experts in the field of cognitive bias... but I guess that "philosophers are imperfect rationalists, along with everybody else" isn't such a punchy headline.

Comment author: DanArmak 16 July 2015 02:55:34PM 3 points [-]

Whatever the reason, if they cannot overcome it, doesn't that make all their professional output similarly useless?

However, I don't agree with what you're saying; overcoming these biases is very easy. Just have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn't care about.

After all, mathematicians aren't confused by being told "I colored 200 of 600 balls black" and "I colored all but 400 of 600 balls black".
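The equivalence DanArmak appeals to can be spelled out concretely. A minimal sketch (the function name and representation are mine, not anything from the thread): normalize each framing to the same canonical quantity before reasoning, and the answer cannot depend on presentation.

```python
def black_count(total, described, complement=False):
    """Normalize a framing of a coloring to the number of black balls.

    complement=False: 'described' balls were colored black.
    complement=True:  all but 'described' balls were colored black.
    """
    return total - described if complement else described

# "I colored 200 of 600 balls black"
framing_a = black_count(600, 200)

# "I colored all but 400 of 600 balls black"
framing_b = black_count(600, 400, complement=True)

assert framing_a == framing_b == 200
```

Any downstream reasoning that consumes only the normalized count is frame-invariant by construction, which is the sense in which an explicit theory "doesn't care about" presentation details.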

Comment author: TheAncientGeek 16 July 2015 08:18:16PM *  4 points [-]

Whatever the reason, if they cannot overcome it, doesn't that make all their professional output similarly useless?

If no one can overcome bias, does that make all their professional output useless? Do you want to buy "philosophers are crap" at the expense of "everyone is crap"?

However, I don't agree with what you're saying; overcoming these biases is very easy. Just have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn't care about.

That's the consistency. What about the correctness?

Note that biases might affect the meta-level reasoning that leads to the choice of algorithm. Unless you think it's algorithms all the way down.

After all, mathematicians aren't confused by being told "I colored 200 of 600 balls black" and "I colored all but 400 of 600 balls black".

Which would make mathematicians the logical choice to solve all real-world problems... if only real-world problems were as explicitly and unambiguously statable, as free of indeterminism, as free of incomplete information and mess, as math problems.

Comment author: DanArmak 17 July 2015 12:17:07PM 2 points [-]

If no one can overcome bias, does that make all their professional output useless? Do you want to buy "philosophers are crap" at the expense of "everyone is crap"?

No, for just the reason I pointed out. Mathematicians, "hard" scientists, engineers, etc. all have objective measures of correctness. They converge towards truth (according to their formal model). They can and do disprove wrong, biased results. And they certainly can't fall prey to a presentation bias that makes them give different answers to the same, simple, highly formalized question. If such a thing happened, and if they cared about the question, they would arrive at the correct answer.

That's the consistency. What about the correctness?

Consistency is more important than correctness. If you believe your theory is right, you may be wrong, and if you discover this (because it makes wrong predictions) you can fix it. But if you accept inconsistent predictions from your theory, you can never fix it.

Which would make mathematicians the logical choice to solve all real-world problems... if only real-world problems were as explicitly and unambiguously statable, as free of indeterminism, as free of incomplete information and mess, as math problems.

A problem, or area of study, may require a lot more knowledge than that of simple logic. But it shouldn't ever be contrary to simple logic.

Comment author: Lumifer 17 July 2015 06:45:10PM 2 points [-]

Consistency is more important than correctness.

I think I'm going to disagree with that.

Comment author: DanArmak 17 July 2015 09:09:50PM *  -1 points [-]

Why?

Comment author: Lumifer 18 July 2015 03:32:47AM 4 points [-]

Because correct results or forecasts are useful and incorrect are useless or worse, actively misleading.

I can use a theory which gives inconsistent but mostly correct results right now. A theory which is consistent but gives wrong results is entirely useless. And just as you can fix an incorrect theory to make it right, you can fix an inconsistent theory to make it consistent.

Besides, it's trivially easy to generate false but consistent theories.

Comment author: TheAncientGeek 17 July 2015 03:30:21PM *  2 points [-]

No, for just the reason I pointed out. Mathematicians, "hard" scientists, engineers, etc. all have objective measures of correctness.

Within their domains.

They can and do disprove wrong, biased results. And they certainly can't fall prey to a presentation bias that makes them give different answers to the same, simple, highly formalized question.

So when Kahneman et al. tested hard scientists for presentation bias, did they find them, out of the whole population, to be uniquely free of it? I don't recall hearing that result.

You are not comparing like with like. You are saying that science as a whole, over the long term, is able to correct its biases, but you know perfectly well that in the short term, bad papers get published. Interviewing individual philosophers isn't comparable to the long-term, en masse behaviour of science.

A problem, or area of study, may require a lot more knowledge than that of simple logic. But it shouldn't ever be contrary to simple logic.

Even if it's too simple?

Comment author: TheAncientGeek 17 July 2015 05:28:29PM *  1 point [-]

Consistency is more important than correctness.

Consistency shouldn't be regarded as more important than correctness, in the sense that you check for consistency, and stop.

If you believe your theory is right, you may be wrong, and if you discover this (because it makes wrong predictions) you can fix it. But if you accept inconsistent predictions from your theory, you can never fix it.

But the inconsistency isn't in the theory, and, in all likelihood, they are not running off an explicit theory in the first place.

Comment author: DanArmak 17 July 2015 04:54:12PM 0 points [-]

Within their domains.

Exactly. And if philosophers don't have such measures within their domain of philosophy, why should I pay any attention to what they say?

So when Kahneman et al. tested hard scientists for presentation bias, did they find them, out of the whole population, to be uniquely free of it? I don't recall hearing that result.

I haven't checked, but I strongly expect that hard scientists would be relatively free of presentation bias in answering well-formed questions (that have universally agreed correct answers) within their domain. Perhaps not totally free, but very little affected by it. I keep returning to the same example: you can't confuse a mathematician, or a physicist or engineer, by saying "400 out of 600 are white" instead of "200 out of 600 are black".

You are not comparing like with like. You are saying that science as a whole, over the long term, is able to correct its biases, but you know perfectly well that in the short term, bad papers get published. Interviewing individual philosophers isn't comparable to the long-term, en masse behaviour of science.

What results has moral philosophy, as a whole, achieved in the long term? What is as universally agreed on as first-order logic or natural selection?

A problem, or area of study, may require a lot more knowledge than that of simple logic. But it shouldn't ever be contrary to simple logic.

Even if it's too simple?

If moral philosophers claim that, uniquely of all human fields of knowledge, theirs requires not just going beyond formal logic but being contrary to it, I'd expect to see some very extraordinary evidence. "We haven't been able to make progress otherwise" isn't quite enough; what are the results they've accomplished with whatever a-logical theories they've built?

Comment author: TheAncientGeek 17 July 2015 06:49:49PM -1 points [-]

Exactly. And if philosophers don't have such measures within their domain of philosophy, why should I pay any attention to what they say?

The critical question is whether they could have such measures.

You are not comparing like with like. You are saying that science as a whole, over the long term, is able to correct its biases, but you know perfectly well that in the short term, bad papers get published. Interviewing individual philosophers isn't comparable to the long-term, en masse behaviour of science.

What results has moral philosophy, as a whole, achieved in the long term? What is as universally agreed on as first-order logic or natural selection?

That's completely beside the point. The point is that you allow that the system can outperform the individuals in the one case, but not the other.

Comment author: DanArmak 17 July 2015 09:13:39PM 1 point [-]

The critical question is whether they could have such measures.

Do you mean they might create such measures in the future, and therefore we should keep funding them? But without such measures today, how do we know if they're moving towards that goal? And what's the basis for thinking it's achievable?

That's completely beside the point. The point is that you allow that the system can outperform the individuals in the one case, but not the other.

Is there an empirical or objective standard by which the work of moral philosophers is judged for correctness or value, something that can be formulated explicitly? And if not, how can 'the system' converge on good results?

Comment author: [deleted] 19 July 2015 03:47:48AM -1 points [-]

You are not comparing like with like. You are saying that science as a whole, over the long term, is able to correct its biases, but you know perfectly well that in the short term, bad papers get published. Interviewing individual philosophers isn't comparable to the long-term, en masse behaviour of science.

Where is the evidence that philosophy, as a field, has converged towards correctness over time?

Comment author: TheAncientGeek 19 July 2015 08:20:03AM 1 point [-]

Where is the need for it? The question is whether philosophers are doing their jobs competently. Can you fail at something you don't claim to be doing? Do philosophers claim to have The Truth?

Comment author: [deleted] 20 July 2015 01:21:35PM -1 points [-]

Do philosophers claim to have The Truth?

That's basically what they're for, yes, and certainly they claim to have more Truth than any other field, such as "mere" sciences.

Comment author: TheAncientGeek 20 July 2015 09:07:04PM *  1 point [-]

Is that what they say?

ETA

Socrates rather famously said the opposite... he only knows that he does not know.

The claim that philosophers sometimes make is that you can't just substitute science for philosophy, because philosophy deals with a wider range of problems. But that isn't the same as claiming to have The Truth about them all.

Comment author: [deleted] 19 July 2015 03:47:04AM 0 points [-]

Note that biases might affect the meta-level reasoning that leads to the choice of algorithm. Unless you think it's algorithms all the way down.

Of course it's algorithms all the way down! "Lens That Sees Its Flaws" and all that, remember?

Comment author: TheAncientGeek 19 July 2015 08:25:12AM 0 points [-]

How is a process of reasoning based on an infinite stack of algorithms concluded in a finite amount of time?

Comment author: jsteinhardt 19 July 2015 07:23:03PM 1 point [-]

You can stop recursing whenever you have sufficiently high confidence, which means that your algorithm terminates in finite time with probability 1, while also querying each algorithm in the infinite stack with non-zero probability.
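jsteinhardt's scheme can be sketched concretely. Assume (this fixed continue-probability and the function name are my illustrative choices, not anything specified in the comment) that at each meta-level we continue recursing only with probability p < 1. Then the recursion depth is geometrically distributed, so it is finite with probability 1, yet any fixed level k of the infinite stack is still consulted with nonzero probability p^k.

```python
import random

def query_stack(continue_prob, rng):
    """Walk an infinite stack of meta-level 'algorithms', stopping
    stochastically. The depth reached is geometric, hence finite with
    probability 1, but level k is reached with probability
    continue_prob**k > 0 for every k.
    """
    depth = 0
    while rng.random() < continue_prob:
        depth += 1  # consult the next meta-level algorithm
    return depth

# Every run halts; the expected depth is p / (1 - p), i.e. 1.0 for p = 0.5.
depths = [query_stack(0.5, random.Random(seed)) for seed in range(10_000)]
avg = sum(depths) / len(depths)
```

The same trick (a randomized stopping rule with geometric tail) is what lets an unbounded hierarchy of checks coexist with finite expected runtime.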

Comment author: [deleted] 20 July 2015 01:21:05PM 0 points [-]

Bingo. And combining that with a good formalization of bounded rationality tells you how deep you can afford to go.

But of course, you're the expert, so you know that ^_^.

Comment author: Romashka 17 July 2015 05:40:53AM 0 points [-]

Re: everyone is crap

But that is not a problem. Iff everyone is crap, I want to believe that everyone is crap.

Comment author: TheAncientGeek 17 July 2015 04:02:48PM 1 point [-]

It's a problem if you want to bash one particular group.

Comment author: Luke_A_Somers 16 July 2015 09:51:50PM 0 points [-]

If no one can overcome bias, does that make all their professional output useless?

My professional input does not depend on bias in moral (or similarly fuzzy) questions. As for other biases, I definitively determine success or failure on a time scale ranging from minutes to weeks.

These are rather different from how a philosopher can operate.

Comment author: TheAncientGeek 17 July 2015 08:36:15AM *  2 points [-]

My professional input does not depend on bias in moral (or similarly fuzzy) questions.

But that doesn't make philosophy uniquely broken. If anything, it is the other way around: disciplines that deal with the kind of well-defined abstract problems where biases can't get a grip are exceptional.

As for other biases, I definitively determine success or failure on a time scale ranging from minutes to weeks. These are rather different from how a philosopher can operate.

"Can operate" was carefully phrased. If the main role of philosophers were to answer urgent object-level moral quandaries, then the OP would have pointed out a serious real-world problem... but philosophers typically don't do that; they typically engage in long-term meta-level thought on a variety of topics.

Philosophers can operate in a way that approximates the OP scenario, for instance, when they sit on ethics committees. Of course, they sit alongside society's actual go-to experts on object-level ethics, religious professionals, who are unlikely to be less biased.

Philosophers aren't the most biased or most influential people in society... worry about the biases of politicians, doctors, and financiers.

Comment author: DanArmak 17 July 2015 12:19:50PM 2 points [-]

Philosophers aren't the most biased or most influential people in society... worry about the biases of politicians, doctors, and financiers.

I can't dismiss politicians, doctors, and financiers. I can dismiss philosophers, so I'm asking why I should listen to them.

Comment author: TheAncientGeek 17 July 2015 03:03:24PM 0 points [-]

You can dismiss philosophy, if it doesn't suit your purposes, but that is not at all the same as the original claim that philosophers are somehow doing their job badly. Dismissing philosophers without dismissing philosophy is dangerous, as it means you are doing philosophy without knowing how. You are unlikely to be less biased, whilst being likely to misunderstand questions, reinvent broken solutions, and so on. Consistently avoiding philosophy is harder than it seems. You are likely to be making a philosophical claim when you say scientists and mathematicians converge on truth.

Comment author: DanArmak 17 July 2015 04:58:59PM 1 point [-]

You can dismiss philosophy, if it doesn't suit your purposes, but that is not at all the same as the original claim that philosophers are somehow doing their job badly

I didn't mean to dismiss moral philosophy; I agree that it asks important questions, including "should we apply a treatment where 400 of 600 survive?" and "do such-and-such people actually choose to apply this treatment?" But I do dismiss philosophers who can't answer these questions free of presentation bias, because even I myself can do better. Hopefully there are other moral philosophers out there who are both specialists and free of bias. The OP's suggestion that philosophers are untrustworthy obviously depends on how representative that survey is of philosophers in general. However, I don't believe that it's unrepresentative merely because a PhD in moral philosophy sounds very wise.

Comment author: TheAncientGeek 17 July 2015 06:39:41PM 0 points [-]

I didn't mean to dismiss moral philosophy; I agree that it asks important questions, including "should we apply a treatment where 400 of 600 survive?" and "do such-and-such people actually choose to apply this treatment?" But I do dismiss philosophers who can't answer these questions free of presentation bias,

Meaning you dismiss their output, even though it isn't prepared under those conditions, and is prepared under conditions allowing bias reduction, e.g. by cross-checking.

because even I myself can do better.

Under the same conditions? Has that been tested?

Hopefully there are other moral philosophers out there who are both specialists and free of bias. The OP's suggestion that philosophers are untrustworthy obviously depends on how representative that survey is of philosophers in general. However, I don't believe that it's unrepresentative merely because a PhD in moral philosophy sounds very wise.

Scientists have been shown to have failings of their own, under similarly artificial conditions. Are you going to reject scientists because of their individual untrustworthiness... or trust the system?

Comment author: DanArmak 17 July 2015 09:09:37PM 0 points [-]

because even I myself can do better.

Under the same conditions? Has that been tested?

It hasn't been tested, but I'm reasonably confident in my prediction. Because, if I were answering moral dilemmas, and explicitly reasoning in far mode, I would try to follow some kind of formal system, where presentation doesn't matter, and where answers can be checked for correctness.

Granted, I would need some time to prepare such a system, to practice with it. And I'm well aware that all actually proposed formal moral systems go against moral intuitions in some cases. So my claim to counterfactually be a better moral philosopher is really quite contingent.

Scientists have been shown to have failings of their own, under similarly artificial conditions. Are you going to reject scientists because of their individual untrustworthiness... or trust the system?

Other sciences deal with human fallibility by having an objective standard of truth against which individual beliefs can be measured. Mathematical theories have formal proofs, and with enough effort the proofs can even be machine-checked. Physical, etc. theories produce empirical predictions that can be independently verified. What is the equivalent in moral philosophy?

Comment author: Luke_A_Somers 17 July 2015 03:08:45PM 0 points [-]

So in short, you are answering your rhetorical question with 'no', which rather undermines your earlier point - no, DanArmak did not 'prove too much'.

Comment author: TheAncientGeek 17 July 2015 03:46:02PM 0 points [-]

DanArmak did not 'prove too much'.

Shminux did.

Comment author: Luke_A_Somers 17 July 2015 07:50:49PM 0 points [-]

If you answer the rhetorical question with 'no', then no, Shminux didn't prove too much either.

Comment author: [deleted] 19 July 2015 03:45:55AM 1 point [-]

Just have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn't care about.

This is roughly the point where some bloody philosopher invokes Hume's Fork, mutters something about meta-ethics, and tells you to fuck off back to the science departments where you came from.

Comment author: gjm 16 July 2015 02:04:56PM 1 point [-]

One might reasonably hope that professional philosophers would be better reasoners than the population at large. That is, after all, a large fraction of their job.

Overcoming these biases completely may well be impossible, but should we really expect that years of training in careful thinking, plus further years of practice, on a population that's supposedly selected for aptitude in thinking, would fail to produce any improvement?

(Maybe we should, either on the grounds that these biases really are completely unfixable or on the grounds that everyone knows academic philosophy is totally broken and isn't either selecting or training for clearer more careful thinking. I think either would be disappointing.)

Comment author: [deleted] 19 July 2015 03:50:39AM 0 points [-]

Well, if they weren't explicitly trained to deal with cognitive biases, we shouldn't expect that they've magically acquired such a skill from thin air.