Hard philosophy problems to test people's intelligence?

-2 Post author: Solvent 15 February 2012 04:57AM

I'm looking for hard philosophical questions to give to people to gauge their skill at philosophy.

So far, I've been presenting people with Newcomb's problem and the Sleeping Beauty problem. I've also been presenting them with contrarian opinions and asking them to evaluate them, and I have a higher opinion of them if they avoid just icking away from the subject.

What other problems should I use? 

Comments (35)

Comment author: ciphergoth 15 February 2012 08:15:59AM 9 points [-]

What query are you trying to hug?

Comment author: Solvent 15 February 2012 08:56:44PM 1 point [-]

I'm trying to test their philosophical ability. Some people immediately and intuitively notice bad arguments and spot good ones.

Comment author: ciphergoth 16 February 2012 05:46:42PM 4 points [-]

What decision rests on the outcome of your test?

Comment author: Zetetic 15 February 2012 09:34:04PM 4 points [-]

I think there's a problem with your thinking on this - people can spot patterns of good and bad reasoning. Depending on the argument, they may or may not notice a flaw in the reasoning for a wide variety of reasons. Someone who is pretty smart probably notices the most common fallacies naturally - they could probably spot at least a few while watching the news or listening to talk shows.

People who study philosophy are going to have been exposed to many more diverse examples of poor reasoning, and will have had practice identifying weak points and exploiting them to attack an argument. This increases your overall ability to dissolve or decompose arguments by increasing your exposure and by equipping you with a trick bag of heuristics. People who argue on well moderated forums or take part in discussions on a regular basis will likely also pick up some tricks of this sort.

However, there are going to be people who can dissolve one problem but not another, because they have been exposed to something sufficiently similar to the one (and thus probably have some cached details relevant to solving it) but not to the other:

E.g. a student of logic will probably make the correct choice in the Wason Selection Task and may be able to avoid the conjunction fallacy, but may nonetheless two-box because they fall into the CDT reasoning trap. A student of the sciences or statistics, however, may slip up in the selection task but one-box, following the EDT logic.
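The correct answer to the selection task can even be brute-forced. A minimal sketch, using the standard vowel/even-number version of the task (the specific card faces here are the textbook ones, not anything from this thread):

```python
# Wason selection task: four cards show E, K, 4, 7. Each card has a letter
# on one side and a number on the other. Rule to test: "If a card has a
# vowel on one side, it has an even number on the other."
# Which cards must be flipped to check whether the rule holds?

def is_vowel(ch):
    return ch in "AEIOU"

def is_even(n):
    return n % 2 == 0

HIDDEN_LETTERS = "AEIOUKXYZ"  # illustrative pool of possible hidden letters
HIDDEN_NUMBERS = range(10)    # illustrative pool of possible hidden numbers

def must_flip(visible):
    """A card must be flipped iff some possible hidden side would falsify the rule."""
    if visible.isalpha():
        # Hidden side is a number; only a vowel with an odd number violates the rule.
        return is_vowel(visible) and any(not is_even(n) for n in HIDDEN_NUMBERS)
    else:
        # Hidden side is a letter; only an odd number with a vowel violates the rule.
        n = int(visible)
        return (not is_even(n)) and any(is_vowel(ch) for ch in HIDDEN_LETTERS)

print([c for c in ["E", "K", "4", "7"] if must_flip(c)])  # ['E', '7']
```

The common mistake is picking E and 4; the 4 card can never falsify a conditional of this form, while the 7 card can.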

So if you're using this approach as an intelligence test, I'd worry about committing the fundamental attribution error pretty often. However, I doubt you're carrying out this test in isolation. In practice, it probably is reasonable to engage people you know or meet in challenging discussions if you're looking for people that are sharp and enjoy that sort of thing. I do it every time I meet someone who seems like they might have some inclination toward that sort of thing.

It might help if you provide some context though - who are you asking and how do you know them? Are you accosting strangers with tricky problems or are you probing acquaintances and friends?

Comment author: RichardKennaway 15 February 2012 01:42:36PM 4 points [-]

How did you respond to Newcomb and Sleeping Beauty the first time you encountered them, before reading any discussion of them?

Comment author: Solvent 15 February 2012 08:52:47PM 2 points [-]

I came across both on LW, and I read discussion immediately.

Comment author: MileyCyrus 15 February 2012 05:08:51AM 3 points [-]

Are you looking for problems with a counter-intuitive, yet widely accepted answer among academics?

Comment author: Solvent 15 February 2012 05:14:16AM 1 point [-]

Well, I'm mostly using these on people who haven't read much or any philosophy, so those would work. That said, I think that a lot of smart people can get to the right answer even when there isn't any consensus in the philosophical community.

Comment author: shminux 15 February 2012 06:43:54AM 3 points [-]

If there is no consensus, how do you know what answer is "right"? Surely if it was a simple matter of computation or logic, there would be a consensus.

Comment author: Jayson_Virissimo 15 February 2012 08:28:20AM 10 points [-]

As far as I can tell, he is judging "rightness" by how closely it approximates Less Wrong doctrine.

Comment author: Karmakaiser 17 February 2012 04:16:48PM 1 point [-]

There are so many ways someone's thinking could be biased or incomplete that, if one is going to take these questions seriously, I think a heuristic approach would be more helpful than checking whether someone independently comes to your conclusion.

Off the top of my head, I would give points for trying to falsify themselves, taking into account human bias (if they already had knowledge of the literature on bias), asking clarifying questions instead of going with an incomplete interpretation of the problem, a willingness to be criticized when the criticism is correct, and a willingness to brush badly constructed criticism aside.

Comment author: MileyCyrus 15 February 2012 07:35:26AM 3 points [-]

A standard Bayesian problem would work great. I paid my 13-year-old nephew $1 to solve one.
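One such problem is the classic base-rate puzzle; the numbers below are illustrative, not from the original comment, and the answer can be checked with a few lines of arithmetic:

```python
# Base-rate problem: a disease affects 1% of people. A test detects it 90%
# of the time and gives a false positive 9% of the time. Given a positive
# test, what is the probability of actually having the disease?

p_disease = 0.01
p_pos_given_disease = 0.90
p_pos_given_healthy = 0.09

# Total probability of a positive test (law of total probability).
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(disease | positive).
posterior = p_pos_given_disease * p_disease / p_pos

print(round(posterior, 3))  # 0.092
```

Most people guess something near 90%; the correct answer is about 9%, which is why this kind of problem makes a good quick test.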

Also: If you call a tail a leg, how many legs does a horse have?

Comment author: JenniferRM 16 February 2012 12:13:34AM *  2 points [-]

Be careful how you reward people for mental tasks if you care about the long term cultivation of their mind.

Comment author: David_Gerard 15 February 2012 08:55:14AM *  1 point [-]

If you call a tail a leg, how many legs does a kangaroo have? If you call an arm a leg, how many legs does a human have? There's a whole sequence on the trouble with putting too much store in the meanings assigned to words.

Comment author: Manfred 15 February 2012 09:17:58AM *  6 points [-]

Surely if it was a simple matter of computation or logic, there would be a consensus.

Optimist, eh? :D

Comment author: Luke_A_Somers 15 February 2012 04:53:35PM 0 points [-]

I'd settle for a well-thought-out answer, even if it's not the one I agree with.

Comment author: JonathanLivengood 16 February 2012 06:50:09AM 2 points [-]

Searle's Chinese Room is a great (awful) case to test how well people think. The argument can be attacked (successfully) in so many different ways that it is a good marker of both the ability to analyze an argument and the ability to think creatively. Even better if, after your interlocutor kills the argument one way, you ask him or her to kill it another, different way. (Then repeat as desired.)

Comment author: skepsci 16 February 2012 09:43:14AM 1 point [-]

What do you mean by "great (awful)"? Do you mean that the thought experiment itself is an awful argument against AI, but describing the argument is a good way to test how people think?

Comment author: JonathanLivengood 16 February 2012 05:52:05PM 2 points [-]

Yes, that's exactly what I mean. The argument itself is terrible. But it invites so many reasonable challenges that it is still very useful as a test of clear thinking. So, awful argument; great test case.

Comment author: skepsci 16 February 2012 10:09:18AM 1 point [-]

On a related note, I remember the day my PhD advisor (a computability theorist!) revealed that he believed the argument against AI from Gödel's incompleteness theorem. It was not reassuring.

Comment author: TimS 23 February 2012 07:55:27PM 0 points [-]

Smarter than human AI, or artificial human level general intelligence?

Comment author: skepsci 24 February 2012 09:40:27AM 0 points [-]

The latter.

Comment author: Dmytry 16 February 2012 09:59:58AM *  0 points [-]

Ya.

Picture a room larger than the Library of Congress that takes a million years to answer the simplest question, and the argument entirely dissolves. Imagine the nonsense the way Searle wants you to (a small room that talks fast enough), take the possibility of such a thing as a postulate, and you'll create for yourself a logically inconsistent system* in which you can prove anything, including the impossibility of AI.

*Postulating that, say, a good ol' ZX Spectrum can run a human-mind-equivalent intelligence in real time on 128 kilobytes of RAM is ultimately postulating a mathematical impossibility, and you should in principle be able to get all the way to 1=2 from there.

Comment author: JonathanLivengood 16 February 2012 06:07:45PM 1 point [-]

I'm not sure I understand the Library of Congress bit, but the footnote is exactly right. Even so, that is only one way of resisting Searle's argument. The point for me is that we can measure cleverness to some tolerance by how many ways one finds to fault the argument. For example:

a. The architecture is completely wrong. People don't work by simple look-up tables.

b. Failure of imagination. We are asked to imagine something that passes the Turing test. Anyone convinced by the argument is probably not imagining that premiss vividly enough.

c. The argument depends on a fallacy of division/composition. Searle argues that the system does not understand Chinese since none of its parts understand Chinese. But some humans understand Chinese, and it is implausible that any individual human cell understands Chinese. So, the argument is logically flawed.

d. In order to have an interactive conversation, the room needs to have something like a memory or history. Understanding isn't just about translation but about connecting language to other parts of life.

e. Similarly to (d), the room is not embodied in any interesting way. The room has no perceptual apparatus and no motor functions. Understanding is partly about connecting language to the world. Intelligence is partly about successful navigation in the world. Connect the room to a robot body and then present the case again.

...

Further challenges could be given, I think. But you get the idea.

Comment author: Dmytry 16 February 2012 06:51:00PM 0 points [-]

I meant that the room has to store many terabytes of information, very well organized too (the state dump of a Chinese-speaking person). It's a very big room, library-sized, and an enormous amount of paper gets processed, over an enormous timespan, before it says anything.

The argument relies on imagining a room that couldn't possibly have understood anything; imagine the room 'to scale' and the timing to scale, and the assertion that the room couldn't possibly have understood anything loses ground.

There's another argument like the Chinese room, about a giant archive of answers to all possible questions. It works by severely under-imagining the size of the archive, too.

Comment author: JonathanLivengood 16 February 2012 08:33:04PM 0 points [-]

There's another argument like the Chinese room, about a giant archive of answers to all possible questions. It works by severely under-imagining the size of the archive, too.

Agreed.

Comment author: Manfred 15 February 2012 09:32:29AM *  2 points [-]

Brief discussion of free will / determinism, followed by "What observations make you think you have free will?"

If the question is novel, this seems like a fairly straightforward (and open-ended) test of question-answering.

Comment author: J_Taylor 15 February 2012 07:08:44AM 3 points [-]

What in the world is "skill at philosophy"?

I've also been presenting them with contrarian opinions and asking them to evaluate them, and I have a higher opinion of them if they avoid just icking away from the subject.

You have a higher opinion of people who make socially foolish decisions?

Comment author: [deleted] 15 February 2012 07:32:06AM 3 points [-]

I bestow a higher likelihood of long-term closeness on persons who "avoid just icking away from the subject." Their ability to do so in a manner that suggests awareness of social niceties is a bonus.

I sorta think you can't possibly disagree with this, or you wouldn't be here.

Comment author: J_Taylor 18 February 2012 02:34:11AM 1 point [-]

I bestow a higher likelihood of long-term closeness on persons who "avoid just icking away from the subject."

Oh, I apologize. I entirely misread what you were doing, I think.

I sorta think you can't possibly disagree with this, or you wouldn't be here.

Um... kind of? I guess it depends on what sort of contrarian opinions you were sharing and what sort of setting you were doing it in.


The latter part assumed you were mainly replying to the second question I asked. I apologize for the bluntness of those questions, also. However, I would like to clarify my first question slightly.

When I see the phrase "skill at philosophy" it makes me think of professional philosophers. You probably are not trying to test for the kinds of skills which are found in professional philosophers, because most of these skills cannot be tested through informal questioning. I now realize that you were trying to test for, I think, the ability to think logically about philosophical topics and openness to unpopular ideas. Sorry for the misinterpretation.

Comment author: DuncanS 15 February 2012 07:33:37PM 1 point [-]

What in the world is "skill at philosophy"?

On the other hand, I suspect that it is possible to rank people according to their skill at philosophy, and come up with an ordering that's reasonably widely agreed, as long as the points are not too close. Just for fun, here's a few to rank...

So I guess there is such a thing.

Comment author: Risto_Saarelma 16 February 2012 08:42:40AM 2 points [-]

Beyond the obvious signaling opportunity of saying that creationists are the worst people ever, I'm not having an easy time figuring out which way the ranking should go between a celebrity who appears to be totally apathetic towards philosophy and a creationist apologist who is enthusiastically doing very bad philosophy.

I also wonder how much agreement there would be if we tried to establish the ranking between Richard Dawkins and Jerry Fodor.

Comment author: J_Taylor 18 February 2012 02:41:18AM 0 points [-]

I do not really agree with Fodor on most issues, but Jerry Fodor(2010) is very different from Jerry Fodor(1978).

Comment author: DanielLC 15 February 2012 05:35:59AM 1 point [-]

By "hard problem" do you mean harder than "If a tree falls in a forest does it make a sound?" or as hard as the hard problem of consciousness?

Would a Star Trek style teleporter teleport you or result in a new person (in a universe where you can be made of different atoms)? What if it creates the duplicate without destroying the original? Is there any action you can take that preserves identity?

Trolley problem. For that matter, utilitarianism vs. deontological ethics.

Copenhagen vs. Many Worlds. Many Worlds vs. Timeless. Those require an understanding of quantum physics, though.

Comment author: Solvent 15 February 2012 08:58:14PM 1 point [-]

Those are good ideas. I've been using the trolley problem.