Eliezer_Yudkowsky comments on The Useful Idea of Truth - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
1) I don't see that this really engages the criticism. I take it you deny that the subjects of truth and reference are important to you. On this, two thoughts:
a) This doesn't affect the point about the reliability of blogging versus research. It may affect the significance of the irrationality, but the point remains. You may hold that the value to you of the creative process of explicating your own thoughts is sufficiently high that it trumps the value of coming to optimally informed beliefs - that the cost-benefit analysis favours blogging. I am sceptical of this, but would be interested to hear the case.
b) It seems just false that you don't care about these subjects. You've written repeatedly on them, and seem to be aiming for an internally coherent epistemology and semantics.
2) My claim was that your lack of references is evidence that you don't accord importance to experts on truth and meaning, not that there are specific things you should be referencing. That said, if your claim is ultimately just the observation that truth is useful as a device for so-called semantic ascent, you might mention Quine (see the relevant section of Word and Object or the discussion in Pursuit of Truth) or the opening pages of Paul Horwich's book Truth, to give just two examples.
3) My own view is that AI should have nothing to do with truth, meaning, belief or rationality - that AI theory should be elaborated entirely in terms of pattern matching and generation, and that philosophy (and likewise decision theory) should be close to irrelevant to it. You seem to think you need to do some philosophy (else why these posts?), but not too much (you don't have to decide whether the sorts of things properly called 'true' are sentences, abstract propositions or neural states, or all or none of the above). Where the line lies and why is not clear to me.
I'm saying, "Show me something in particular that I should've looked at, and explain why it matters; I do not respond to non-specific claims that I should've paid more homage to whatever."
As far as I can see, your point is something like:
"Your reasoning implies I should read some specific thing; there is no such thing; therefore your reasoning is mistaken." (or, "unless you can produce such a thing...")
Is this right? In any case, I don't see that the conditional is correct. I can only give examples of works which would help. Here are three more. Your second point seeks (as I understand it) a theory of meaning which would imply that your 'Elaine is a post-utopian' is meaningless, but that 'The photon continues to exist...' is both meaningful and true. I get the impression you think that an adequate answer could be articulated in a few paragraphs. To get a sense of some of the challenges you might face (i.e., of what the project of contriving a theory of meaning entails), consider looking at Stephen Schiffer's excellent Remnants of Meaning and The Things We Mean, or Scott Soames's What Is Meaning?
I think it's more like
"Your reasoning implies I should have read some specific idea, but so far you haven't given me any such idea and why it should matter, only general references to books and authors without pointing to any specific idea in them"
Part of the talking-past-each-other may come from the fact that by "thing", Eliezer seems to mean "specific concept", and you seem to mean "book".
There also seems to be some disagreement as to what warrants references - for Eliezer it seems to be "I got idea X from Y", for you it's closer to "Y also has idea X".