This is a very good point. If we agree that cognitive biases make our understanding of the world flawed, why should we assume that our moral intuitions aren't equally flawed? That assumption makes sense only if you equate morality with our moral intuitions. That isn't what I mean by the word "moral" at all; as a matter of historical fact, many behaviors I consider completely reprehensible were at one time or another widely considered perfectly acceptable.
I agree that there is good work to be done with math in all of those fields. But there's plenty of good work in most of them that can be done without math too.
there's plenty of good work in most of them that can be done without math too
Yes. Two caveats:
1) The person doing the good work without math should remember to consult someone with the math skills before publishing their results, if they are trying to say something math-like.
For example, math may be unnecessary for inventing a hypothesis, designing an experiment, and collecting the data. But it becomes necessary at the last step, when the experimenter says: "So I did these experiments 10 times: 8 times the results seemed to support my hypothesis, 2 times they...
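To make the caveat concrete, here is a minimal sketch (in Python, using the purely illustrative numbers from the example above) of the kind of calculation that last step requires, assuming for the sake of argument that under the null hypothesis each experiment is equally likely to come out either way:

```python
# How surprising are 8 supportive results out of 10 trials if each
# trial were really a 50/50 coin flip? (Illustrative numbers only.)
from math import comb

n, k = 10, 8  # trials run, trials that seemed to support the hypothesis

# One-sided p-value: probability of k or more "supportive" outcomes
# occurring by chance under the 50/50 null hypothesis.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(at least {k} of {n} by chance) = {p_value:.4f}")  # 0.0547
```

Eight out of ten sounds decisive, but the one-sided p-value is about 0.055, which is exactly the kind of thing the consulted person with math skills would catch before publication.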
The things on your curriculum don't seem like philosophy at all in the contemporary sense of the word. They are certainly very valuable for figuring out the answers to concrete questions within their particular domains. But they are less useful for understanding broader questions about the domains themselves, or about the appropriateness of the questions. Learning formal logic, for example, isn't that much help in understanding what logic is. Likewise, knowing how people make moral decisions is not at all the same as knowing what the moral thing to do would be. I ...
Learning formal logic, for example, isn't that much help in understanding what logic is.
It certainly doesn't hurt! Learning formal logic gives you data with which to test meta-logical theories. Moreover, learning formal logic helps in understanding everything; and logic is one of the things, so, there ya go. Instantiate at will.
Likewise, knowing how people make moral decisions is not at all the same as knowing what the moral thing to do would be.
Sure. But for practical purposes (and yes, there are practical philosophical purposes), you can't be succ...
The things on your curriculum don't seem like philosophy at all in the contemporary sense of the word.
Reforming philosophy and leaving it alone are not the only options. There is also the option of setting up a new cross-disciplinary subject parallel to Cognitive Science.
If your point is that it isn't necessarily useful to try to say in what sense our procedures "correspond," "represent," or "are about" what they serve to model, I completely agree. We don't need to explain why our model works, although some theory may help us to find other useful models.
But then I'm not sure I see what is at stake when you talk about what makes a proof correct. Obviously we can have a valuable discussion about what kinds of demonstration we should find convincing. But ultimately the procedure that guides our behavior either gives satisfactory results or it doesn't; we were either right or wrong to be convinced by an argument.
I thought of a better way of putting what I was trying to say. Communication may be orthogonal to the point of your question, but representation is not. An AI needs to use an internal language to represent the world or the structure of mathematics (this is the crux of Wittgenstein's famous "private language argument") whether or not it ever attempts to communicate. You can't evaluate "syntactic legality" except within a particular language, whose correspondence to the world is not given as a matter of logic (although it may be more or less useful pragmatically).
The mathematical realist concept of "the structure of mathematics"—at least as separate from the physical world—is problematic once you can no longer describe what that structure might be in a non-arbitrary way. But I see your point. I guess my response would be that the concept of "a proof"—which implies that you have demonstrated something beyond the possibility of contradiction—is not what really matters for your purposes. Ultimately, how an AI manipulates its representations of the world and how it internally represents the world ar...
The motivation for the extremely unsatisfying idea that proofs are social is that no language, not even the formal languages of math and logic, is self-interpreting. In order to understand a syllogism about kittens, I have to understand the language you use to express it. You could try to explain to me the rules of your language, but you would have to do that in some language, which I would also have to understand. Unless you assume some a priori linguistic agreement, explaining how to interpret your syllogism requires explaining how to interpret the langua...
From my perspective, when I've explained why a single AI alone in space would benefit instrumentally from checking proofs for syntactic legality, I've explained the point of proofs. Communication is an orthogonal issue, having nothing to do with the structure of mathematics.
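As a toy illustration of what a purely syntactic legality check can look like (my sketch, not a description of any actual system), here is a checker that accepts a proof line only if it is a premise or follows from earlier lines by modus ponens; nothing in it appeals to meaning, communication, or correspondence with the world:

```python
# Toy "syntactic legality" checker: formulas are plain strings, and the
# only inference rule is modus ponens (from A and "A -> B", accept B).
def check_proof(premises: list[str], proof: list[str]) -> bool:
    known = list(premises)
    for line in proof:
        legal = line in known or any(
            f"{a} -> {line}" in known  # some earlier line has the shape "A -> B"
            for a in known
        )
        if not legal:
            return False
        known.append(line)  # legal lines become usable by later steps
    return True

print(check_proof(["p", "p -> q", "q -> r"], ["q", "r"]))  # True
print(check_proof(["p"], ["q"]))  # False: nothing licenses "q"
```

The check consults only the shapes of the strings; whether and how those strings represent anything is a separate question.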
I think that logical positivism is generally self-refuting. It typically makes claims about what is meaningful that would be meaningless by its own standards. It also depends on ideas about what counts as observable or analytically true that are likewise not defensible, again by its own standards. It doesn't change things to formulate it as a methodological imperative. If the methodology of logical positivism is imperative, then on what grounds? Because other stuff seems silly?
I am obviously reading something into lukeprog's post that ma...