All of glaucon's Comments + Replies

glaucon

I think that logical positivism generally is self-refuting. It typically makes claims about what is meaningful that would be meaningless under its own standards. It generally also depends on ideas about what counts as observable or analytically true that are likewise not defensible—again, under its own standards. It doesn't change things to formulate it as a methodological imperative. If the methodology of logical positivism is imperative, then on what grounds? Because other stuff seems silly?

I am obviously reading something into lukeprog's post that ma... (read more)

Rob Bensinger
Let's try to unpack what 'self-refuting' could mean here. Do you mean that logical positivism is inconsistent? If so, how? A meaningless statement is not truth-apt, so it can't yield a contradiction. And you haven't suggested that positivists assert that 'Non-empirical statements are meaningless' is both meaningful and meaningless. What, precisely, is wrong with positivists asserting 'Non-empirical statements are meaningless,' and asserting that the previous sentence is meaningless as well? You're framing it as an internal problem, but the more obvious and compelling problems are all external. (I.e.: their theory of meaning is coherent and intelligible, at the very least from an outsider's perspective; it just isn't remotely plausible.)

Here I agree, except 'under its own standards' isn't doing any important work. Logical positivism's views are not inconsistent; they're just silly and unmotivated. There is no reason for us to adopt its standards in the first place.

Speaking for myself, I think it's very important for us to unpack what we mean by epistemic justification (as opposed to moral and other forms of justification). For instance, it's very difficult to understand 'rationality' without an understanding of the normative dimension of 'knowledge.' But the words 'knowledge' and 'justification' themselves aren't magical. If we need to taboo them away for purposes of rigorous philosophy, then re-introduce them only for pragmatic/rhetorical purposes in persuading laypeople, that's fine.

The traditional philosophical way of framing the question, as 'What is knowledge?', is unhelpful and confusing because it conflates the semantic question 'What do we mean by the word "knowledge"?' with the much deeper and more important questions beneath the surface. Similarly, I think a lot of recent work in the metaphysics of causality unhelpfully conflates conceptual analysis with metaphysical hypothesizing; both are important topics (and important work may be done on either topic u…
glaucon

This is a very good point. If we agree cognitive biases make our understanding of the world flawed, why should we assume that our moral intuitions aren't equally flawed? That assumption makes sense only if you actually equate morality with our moral intuitions. This isn't what I mean by the word "moral" at all—and as a matter of historical fact many behaviors I consider completely reprehensible were at one time or another widely considered to be perfectly acceptable.

glaucon

I agree that there is good work to be done with math in all of those fields. But there's plenty of good work in most of them that can be done without math too.

there's plenty of good work in most of them that can be done without math too

Yes. Two caveats:

1) The person doing the good work without math should remember to consult someone with the math skills before publishing their results, if they are trying to say something math-like.

For example, to invent a hypothesis, design an experiment and collect data, the math may be unnecessary. But it becomes necessary at the last step when the experimenter says: "So I did these experiments 10 times: 8 times the results seemed to support my hypothesis, 2 times they... (read more)
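The last step the comment describes is exactly where the math bites. As a concrete sketch (my own illustration, not part of the original comment), a one-sided binomial test against a 50/50 null shows why "8 out of 10" is weaker evidence than it sounds:

```python
# Sketch: significance of "8 successes in 10 trials" under a fair-coin null.
from math import comb

def binomial_p_value(successes: int, trials: int, p_null: float = 0.5) -> float:
    """One-sided p-value: P(X >= successes) for X ~ Binomial(trials, p_null)."""
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null) ** (trials - k)
        for k in range(successes, trials + 1)
    )

p = binomial_p_value(8, 10)
print(f"one-sided p-value for 8/10: {p:.4f}")  # about 0.0547 — not significant at 0.05
```

So an experimenter who eyeballed 8/10 as "clearly supporting my hypothesis" would be wrong at the conventional 0.05 threshold, which is the kind of thing consulting someone with the math skills catches.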

glaucon

The things on your curriculum don't seem like philosophy at all in the contemporary sense of the word. They are certainly very valuable at figuring out the answers to concrete questions within their particular domains. But they are less useful for understanding broader questions about the domains themselves or the appropriateness of the questions. Learning formal logic, for example, isn't that much help in understanding what logic is. Likewise, knowing how people make moral decisions is not at all the same as knowing what the moral thing to do would be. I ... (read more)

orthonormal
Have you taken a math class in formal logic? (The one with models, proofs, soundness and completeness, Gödel's theorem, etc., not the ersatz philosophy-department one that thinks syllogisms are complicated.) I'd be surprised if you had, and still considered it irrelevant to doing philosophy well.

Learning formal logic, for example, isn't that much help in understanding what logic is.

It certainly doesn't hurt! Learning formal logic gives you data with which to test meta-logical theories. Moreover, learning formal logic helps in understanding everything; and logic is one of the things, so, there ya go. Instantiate at will.
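As a concrete illustration of "data with which to test meta-logical theories" (my sketch, not part of the original reply), a brute-force truth table is already enough to separate valid schemas from classic fallacies:

```python
# Sketch: a truth-table validity checker for propositional schemas.
from itertools import product

def is_tautology(formula, num_vars: int) -> bool:
    """Check a boolean function of num_vars arguments against every valuation."""
    return all(formula(*vals) for vals in product([False, True], repeat=num_vars))

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# Law of excluded middle: p or not-p is valid.
print(is_tautology(lambda p: p or not p, 1))                              # True
# Affirming the consequent, ((p -> q) and q) -> p, is NOT valid.
print(is_tautology(lambda p, q: implies(implies(p, q) and q, p), 2))      # False
```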

Likewise, knowing how people make moral decisions is not at all the same as knowing what the moral thing to do would be.

Sure. But for practical purposes (and yes, there are practical philosophical purposes), you can't be succ... (read more)

Peterdjones
Quite. It is a perfectly coherent possibility that the moral instincts given to us by evolution are broken in some way, so that studying morality from the evolutionary perspective doesn't resolve the "what is the right thing to do" question at all. The interesting thing here is that a lot of material on LW is dedicated to an exactly parallel argument about rationality: our rationality is broken and needs to be fixed. How can EY be so open to the one possibility and so oblivious to the other?

The things on your curriculum don't seem like philosophy at all in the contemporary sense of the word.

Reforming philosophy and leaving it alone are not the only options. There is also the option of setting up a new cross-disciplinary subject parallel to Cognitive Science.

glaucon

If your point is that it isn't necessarily useful to try to say in what sense our procedures "correspond," "represent," or "are about" what they serve to model, I completely agree. We don't need to explain why our model works, although some theory may help us to find other useful models.

But then I'm not sure I see what is at stake when you talk about what makes a proof correct. Obviously we can have a valuable discussion about what kinds of demonstration we should find convincing. But ultimately the procedure that guides our behavior either gives satisfactory results or it doesn't; we were either right or wrong to be convinced by an argument.

glaucon

I thought of a better way of putting what I was trying to say. Communication may be orthogonal to the point of your question, but representation is not. An AI needs to use an internal language to represent the world or the structure of mathematics—this is the crux of Wittgenstein's famous "private language argument"—whether or not it ever attempts to communicate. You can't evaluate "syntactic legality" except within a particular language, whose correspondence to the world is not given as a matter of logic (although it may be more or less useful pragmatically).
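To make "syntactic legality is relative to a particular language" concrete, here is a minimal sketch (my own toy grammar, not anything from the thread): a checker that accepts a string only if it is well-formed in one specific propositional language. The same string can be legal in one grammar and illegal in another.

```python
# Sketch: well-formedness checker for a toy propositional language with
# variables p, q, r, negation '~', and fully parenthesized binary '&', '|'.
def well_formed(s: str) -> bool:
    def parse(i: int) -> int:
        """Try to parse one formula starting at index i; return the index
        just past it, or -1 on failure."""
        if i >= len(s):
            return -1
        if s[i] in "pqr":
            return i + 1
        if s[i] == "~":
            return parse(i + 1)
        if s[i] == "(":
            j = parse(i + 1)
            if j == -1 or j >= len(s) or s[j] not in "&|":
                return -1
            k = parse(j + 1)
            if k == -1 or k >= len(s) or s[k] != ")":
                return -1
            return k + 1
        return -1

    return parse(0) == len(s)

print(well_formed("(p&~q)"))  # True
print(well_formed("p&q"))     # False: legal-looking, but THIS grammar requires parentheses
```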

Eliezer Yudkowsky
See my reply to Chappell here and the enclosing thread: http://lesswrong.com/lw/f1u/causal_reference/7phu
glaucon

The mathematical realist concept of "the structure of mathematics"—at least as separate from the physical world—is problematic once you can no longer describe what that structure might be in a non-arbitrary way. But I see your point. I guess my response would be that the concept of "a proof"—which implies that you have demonstrated something beyond the possibility of contradiction—is not what really matters for your purposes. Ultimately, how an AI manipulates its representations of the world and how it internally represents the world ar... (read more)

glaucon

The motivation for the extremely unsatisfying idea that proofs are social is that no language—not even the formal languages of math and logic—is self-interpreting. In order to understand a syllogism about kittens, I have to understand the language you use to express it. You could try to explain to me the rules of your language, but you would have to do that in some language, which I would also have to understand. Unless you assume some a priori linguistic agreement, explaining how to interpret your syllogism requires explaining how to interpret the langua... (read more)

From my perspective, when I've explained why a single AI alone in space would benefit instrumentally from checking proofs for syntactic legality, I've explained the point of proofs. Communication is an orthogonal issue, having nothing to do with the structure of mathematics.