In response to comment by [deleted] on Gun Control: How would we know?
Comment author: Jabberslythe 21 December 2012 02:01:36AM 12 points [-]

Canada actually has fewer guns per capita. The US is definitely topping that list.

Comment author: JonathanLivengood 21 December 2012 08:04:40AM 2 points [-]

A more interesting number for the gun control debate is the percentage of households with guns. That number in the U.S. has been declining (PDF), but it is still very high in comparison with other developed nations.

However, exact comparisons of gun ownership rates internationally are tricky. The data is often sparse or non-uniform in the way it is collected. The most consistent comparisons I could find -- and I'd love to see more recent data -- were from the 1989 and 1992 International Crime Surveys. The numbers are reported in this paper on gun ownership, homicide, and suicide (PDF). These data are old, but in 1989, about 48% of U.S. households had a firearm of some kind, compared with 29% of Canadian households. The numbers for handguns specifically, though, were very different: in 1989, only 5% of Canadian households had a handgun, compared with 28% of U.S. households.

Comment author: jsalvatier 20 December 2012 07:40:57PM 0 points [-]

Ah, okay, that makes sense then.

Part of why I'm confused is that none of the courses you proposed are focused on psychology (heuristics and biases being the standard recommendation). Any reason for that?

Comment author: JonathanLivengood 20 December 2012 08:08:32PM 1 point [-]

That's a good point. Looks like an oversight on my part. I was probably overly focused on the formal side that aims to describe normatively correct reasoning. (Even doing that, I missed some things, e.g. decision theory.) I hope to write up a more detailed, concrete, and positive proposal in the next couple of days. I will include at least one -- and probably two -- courses that look at failures of good reasoning in that recommendation.

Comment author: jsalvatier 19 December 2012 08:48:28PM 0 points [-]

Hmm, is that best accomplished by trying to reappropriate the word 'logic'? Mathematicians, philosophers, etc. seem to have a pretty firm idea about what they mean by 'logic', and going against that is probably hard. Trying to get a heuristics-and-biases or statistics course into the logic curriculum seems like it would get a lot of pushback. Can the word 'logic' itself be that valuable? Why not pick a new word?

Comment author: JonathanLivengood 19 December 2012 11:32:01PM 0 points [-]

I don't want to have a dispute about words. When I talk about the logic curriculum in my department, I have in mind the broader term. The entry-level course in logic already has some probability/statistics content. There isn't a sub-program in logic, like a minor or anything, that has a structural component for anyone to fight about. I would like to see more courses dedicated to probability and induction from a philosophical perspective. But if I get that, then I'm not going to fight about the word "logic." I'd be happy to take a more generic label, like CMU's "logic, computation, and methodology."

Comment author: jsalvatier 17 December 2012 11:05:58AM 0 points [-]

If you agree here, I'm curious why you're focusing on reforming the logic curriculum. Why not focus on shifting resources from teaching logic to teaching the standard things recommended here (probability theory, heuristics and biases, psychology, etc.)?

Comment author: JonathanLivengood 17 December 2012 05:52:02PM 0 points [-]

Because I see those things as part of logic. As I see it, logic as typically taught in mathematics and philosophy departments from 1950 on dropped at least half of what logic is supposed to be about. People like Church taught philosophers to think that logic is about having a formal, deductive calculus, not about the norms of reasoning. I think that's a mistake. So, in reforming the logic curriculum, I think one goal should be to restore something that has been lost: interest in norms of reasoning across the board.

Comment author: RichardKennaway 13 December 2012 08:40:14AM 2 points [-]

> For the question about human attributions, I would expect an evolutionary story: the world has causal structure, and organisms that correctly represent that structure are fitter than those that do not; we were lucky in that somewhere in our evolutionary history, we acquired capacities to observe and/or infer causal relations, just as we are lucky to be able to see colors, smell baking bread, and so on.

This is not an explanation: it is simply saying "evolution did it". An explanation should exhibit the mechanism whereby the concept is acquired.

> It's more like Hume's story: imagine Adam, fully formed with excellent intellectual faculties but with neither experience nor a concept of causation. How could such a person come to have a correct concept of causation?

That is one way of presenting the thought experiment.

> Since we are now imagining a creature that has different faculties than an ordinary human

Another way of presenting the thought experiment is to ask how a baby arrives at the concept. Then we are not imagining a creature that has different faculties than an ordinary human.

Another way is to imagine a robot that we are building. How can the robot make causal inferences? Again, "we design it that way" is no more of an answer than "God made us that way" or "evolution made us that way". Consider the question in the spirit of Jaynes' use of a robot in presenting probability theory. His robot is concerned with making probabilistic inferences but knows nothing of causes; this robot is concerned with inferring causes. How would we design it that way? Pearl's works presuppose an existing knowledge of causation, but do not tell us how to first acquire it.

> I want to know what resources we are giving this imaginary Adam. Adam has no concept of causation and no ability to perceive causal relations directly. Can he perceive spatial relations directly? Temporal relations? Does he represent his own goals? The goals of others? ...

That is part of the question. What resources does it need, to proceed from ignorance of causation to knowledge of causation?

Comment author: JonathanLivengood 14 December 2012 11:10:07PM *  0 points [-]

I definitely agree that evolutionary stories can become non-explanatory just-so stories. The point of my remark was not to give the mechanism in detail, though, but just to distinguish the following two ways of acquiring causal concepts:

(1) Blind luck plus selection based on fitness of some sort.
(2) Reasoning from other concepts, goals, and experience.

I do not think that humans or proto-humans ever reasoned their way to causal cognition. Rather, we have causal concepts as part of our evolutionary heritage. Some reasons to think this is right include: the fact that causal perception (pdf) and causal agency attributions emerge very early in children; the fact that other mammal species, like rats (pdf), have simple causal concepts related to interventions; and the fact that some forms of causal cognition emerge very, very early even among more distant species, like chickens.

Since causal concepts arise so early in humans and are present in other species, there is current controversy (right in line with the thesis in your OP) as to whether causal concepts are innate. That is one reason why I prefer the Adam thought experiment to babies: it is unclear whether babies already have the causal concepts or have to learn them.

EDIT: Oops, left out a paper and screwed up some formatting. Some day, I really will master markdown language.

Comment author: RichardKennaway 12 December 2012 11:45:07PM 0 points [-]

I think these are all aspects of the same thing: how might an intelligent entity arrive at correct knowledge about causes, starting from a lack of even the concept of a cause?

Comment author: JonathanLivengood 13 December 2012 01:25:43AM 1 point [-]

That seems like a very different question than, say, how humans actually came by their tendency to attribute causation. For the question about human attributions, I would expect an evolutionary story: the world has causal structure, and organisms that correctly represent that structure are fitter than those that do not; we were lucky in that somewhere in our evolutionary history, we acquired capacities to observe and/or infer causal relations, just as we are lucky to be able to see colors, smell baking bread, and so on.

What you seem to be after is very different. It's more like Hume's story: imagine Adam, fully formed with excellent intellectual faculties but with neither experience nor a concept of causation. How could such a person come to have a correct concept of causation?

Since we are now imagining a creature that has different faculties than an ordinary human (or at least, that seems likely, given how automatic causal perception in launching cases is and how humans seem to think about their own agency), I want to know what resources we are giving this imaginary Adam. Adam has no concept of causation and no ability to perceive causal relations directly. Can he perceive spatial relations directly? Temporal relations? Does he represent his own goals? The goals of others? ...

Comment author: RichardKennaway 12 December 2012 09:04:20AM 0 points [-]

> Richard, do you think Pearlian causality is mathematics or something else?

It's applied mathematics.

That is, you can erect the entire thing as pure mathematics, as you can with, say, probability and statistics, or rational mechanics. The motivation is to apply it to the real world, and the language may sound like it's talking about the real world, but that's just a way of thinking about the pure mathematics. Then to apply it to the real world, you need to step beyond mathematics and say what real-world phenomena you are going to map the mathematical concepts to.

Pearl is insistent that the concept of causality is primitive and not reducible to statistics, but I haven't ever read him philosophising about "what causes really are". He just takes them as primitive and understood: do(X=x) means "set the value of X to x", although that is clearly an unsafe instruction to give an AGI. You would have to at least amplify it with something like "without having any influence on any other variable except via the causal arrows you are willing to allow might exist".
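The gap between conditioning and intervening is the whole point of the do-operator, and it can be shown in a few lines of simulation. The following is an illustrative toy model of my own devising (not from Pearl's text): a hidden confounder U drives both X and Y, with no causal arrow from X to Y, so observing X=1 and setting X=1 yield very different distributions for Y.

```python
import random

# Hypothetical toy structural causal model (for illustration only):
#   U is a hidden confounder; X := U and Y := U. X and Y are perfectly
#   correlated, but there is no causal arrow X -> Y, so an intervention
#   do(X=1) leaves Y's distribution untouched.

random.seed(0)

def sample(do_x=None):
    u = random.random() < 0.5         # hidden common cause
    x = u if do_x is None else do_x   # do(X=x) severs the U -> X arrow
    y = u                             # Y listens only to U
    return x, y

N = 100_000
# Observational: P(Y=1 | X=1). Seeing X=1 reveals U=1, so Y=1 always.
obs = [y for x, y in (sample() for _ in range(N)) if x]
p_obs = sum(obs) / len(obs)
# Interventional: P(Y=1 | do(X=1)). Forcing X tells us nothing about U.
do1 = [y for _, y in (sample(do_x=True) for _ in range(N))]
p_do = sum(do1) / len(do1)
print(p_obs, p_do)   # p_obs = 1.0, p_do is close to 0.5
```

The "set the value" reading of do(X=x) corresponds exactly to replacing X's structural equation while leaving everything else alone, which is why the forced samples ignore U when generating X but not when generating Y.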

There appears to be some dispute on this issue. I'd be interested to know his answer to the conundrum posed by Scott Aaronson, or for that matter the similar one I posed here. (I am not satisfied by any of the answers in either place.)

Comment author: JonathanLivengood 12 December 2012 10:23:16PM 0 points [-]

When you ask (in your koan) how the process of attributing causation gets started, what exactly are you asking about? Are you asking how humans actually came by their tendency to attribute causation? Are you asking how an AI might do so? Are you asking about how causal attributions are ultimately justified? Or what?

Comment author: JonathanLivengood 11 December 2012 06:56:21PM *  6 points [-]

What do you think about debates over which axioms or rules of inference to endorse? I'm thinking here of disputes between classical mathematicians and various constructivist mathematicians, which sometimes show up in which proofs are counted as legitimate.

I am tempted to back up a level and say that there is little or no dispute about conditional claims: if you give me these axioms and these rules of inference, then these are the provable claims. The constructivist might say, "Yes, that's a perfectly good non-constructive proof, but only a constructive proof is worth having!" But then, in a lot of moral philosophy, you have the same sort of agreement. Given a normative moral theory and the relevant empirical facts, moral philosophers are unlikely to disagree about what actions are recommended. The controversy is at the level of which moral theory to endorse. At least, that's the way it looks to me.
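The conditional-claim point can be made concrete in a proof assistant. As an illustrative sketch (my example, not from the thread): in Lean 4, double-negation elimination is provable only by invoking a classical axiom, while its converse is provable with no axioms at all, so both camps can agree on exactly which assumptions each proof uses.

```lean
-- Double-negation elimination: classically trivial, constructively
-- unprovable. The proof invokes Classical.byContradiction, a
-- non-constructive principle a constructivist would reject.
theorem dne (p : Prop) (h : ¬¬p) : p :=
  Classical.byContradiction h

-- The converse direction is purely constructive: no axioms needed.
theorem dni (p : Prop) (hp : p) : ¬¬p :=
  fun hnp => hnp hp
```

Both sides can check these proofs and agree on what follows from what; the dispute is only over whether `Classical.byContradiction` is a legitimate rule to assume.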

Comment author: lukeprog 10 December 2012 03:51:19PM *  4 points [-]

> I don't know whether there is a good way to both (a) make the points you want to make about improving philosophy education and (b) make the stronger reading unlikely.

Yes; hopefully I can do better in my next post.

> if you couldn't have your whole mega-course, ...what one or two concrete course offerings would you want to see in every philosophy program?

One course I'd want in every philosophy curriculum would be something like "The Science of Changing Your Mind," based on the more epistemically-focused stuff that CFAR is learning how to teach to people. This course offering doesn't exist yet, but if it did then it would be a course which has people drill the particular skills involved in Not Fooling Oneself. You know, teachable rationality skills: be specific, avoid motivated cognition, get curious, etc. — but after we've figured out how to teach these things effectively, and aren't just guessing at which exercises might be effective. (Why this? Because Philosophy Needs to Trust Your Rationality Even Though It Shouldn't.)

Though it doesn't yet exist, if such a course sounds as helpful to you as it does to me, then you could of course try to work with CFAR and other interested parties to try to develop such a course. CFAR is already working with Nobel laureate Saul Perlmutter at Berkeley to develop some kind of course on rationality, though I don't have the details. I know CFAR president Julia Galef is particularly passionate about the relevance of trainable rationality skills to successful philosophical practice.

What about courses that could e.g. be run from existing textbooks? It is difficult to suggest entry-level courses that would be useful. Aaronson's course Philosophy and Theoretical Computer Science could be good, but it seems to require significant background in computability and complexity theory.

One candidate might be a course in probability theory and its implications for philosophy of science — the kind of material covered in the early chapters of Koller & Friedman (2009) and then Howson & Urbach (2005) (or, more briefly, Yudkowsky 2005).

Another candidate would be a course on experimental philosophy, perhaps expanding on Alexander (2012).

Comment author: JonathanLivengood 11 December 2012 04:50:30AM 4 points [-]

> Though it doesn't yet exist, if such a course sounds as helpful to you as it does to me, then you could of course try to work with CFAR and other interested parties to try to develop such a course.

I am interested. Should I contact Julia directly or is there something else I should do in order to get involved?

Also, since you mention Alexander's book, let me make a shameless plug here: Justin Sytsma and I just finished a draft of our own introduction to experimental philosophy, which is under contract with Broadview and should be in print in the next year or so.

Comment author: Luke_A_Somers 10 December 2012 03:49:15PM *  3 points [-]

If you're going to back off on the gating, you need to provide the students with enough guidance about what they will practically need to know that they can make an informed choice. I took a course in baroque music that went very badly. If I had known how much music theory I would need, and how much facility with it, I would not have taken the course.

Comment author: JonathanLivengood 11 December 2012 04:37:34AM 0 points [-]

Good point.
