Eliezer_Yudkowsky comments on The Useful Idea of Truth - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
(The 'Mainstream Status' comment is intended to provide a quick overview of what the status of the post's ideas is within contemporary academia, at least so far as the poster knows. Anyone claiming that a particular paper anticipates the post should try to describe the exact relevant idea as presented in the paper, ideally with a quote or excerpt, especially if the paper is locked behind a paywall. Do not represent large complicated ideas as standard if only a part is accepted; do not represent a complicated idea as precedented if only a part is described. With those caveats, all relevant papers and citations are much solicited! Hopefully comment-collections like these can serve as a standard link between LW presentations and academic ones.)
The correspondence theory of truth is the first position listed in the Stanford Encyclopedia of Philosophy's entry on truth, which is my usual criterion for saying that something is a solved problem in philosophy. A clear-cut, simple visual illustration inspired by the Sally-Anne experimental paradigm is not something I have previously seen associated with it, so the explanation in this post is, I hope, an improvement over what's standard.
Alfred Tarski is a famous mathematician whose theory of truth is widely known.
The notion of possible worlds is very standard and popular in philosophy; some philosophers even ascribe much more realism to them than I would (since I regard them as imaginary constructs, not thingies that can potentially explain real events as opposed to epistemic puzzles).
I haven't particularly run across any philosophy explicitly making the connection from the correspondence theory of truth to "There are causal processes producing map-territory correspondences" to "You have to look at things in order to draw accurate maps of them, and this is a general rule with no exception for special interest groups who want more forgiving treatment for their assertions". I would not be surprised to find that it exists, especially for the second clause.
Added: The term "post-utopian" was intended to be a made-up word that had no existing standardized meaning in literature, though it's simple enough that somebody has probably used it somewhere. It operates as a stand-in for more complicated postmodern literary terms that sound significant but mean nothing. If you think there are none of those, Alan Sokal would like to have a word with you. (Beating up on postmodernism is also pretty mainstream among Traditional Rationalists.)
You might also be interested in checking out what Mohandas Gandhi had to say about "the meaning of truth", just in case you were wondering what things are like in the rest of the world outside the halls of philosophy departments.
DevilWorm and pragmatist point to the "reliabilism" school of philosophy (http://en.wikipedia.org/wiki/Reliabilism & http://plato.stanford.edu/entries/reliabilism). Clicking on either link reveals arguments concerned mainly with that old dispute over whether the word "knowledge" should be used to refer to "justified true belief". Going on the wording, I'm not even sure whether they're considering how photons from the Sun are involved in correlating your visual cortex with your shoelaces. But it does increase the probability of a precedent - does anyone have something more specific? (A lot of the terminology I've seen so far is tremendously vague, and open to many interpretations...)
Incidentally, there might be an even higher probability of finding some explicit precedent in a good modern AI book somewhere?
It might be too obvious to be worth mentioning. If you're actually building (narrow) AI devices like self-driving cars, then of course your car has to have a way of sensing things round about it if it's going to build a map of its surroundings.
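To make that point concrete, here is a minimal toy sketch of the idea. It is not taken from any AI textbook or real system; the grid, the sensor range, and every name in it are invented for illustration. The agent's map only comes to match the territory at the cells its sensor has actually reached; everything it never looks at stays unknown.

```python
# Toy illustration (invented for this example): a cell of the agent's map
# only becomes accurate after a causal sensor reading of the matching
# territory cell.

TERRITORY = "..#...#..."  # the actual world: '#' = obstacle, '.' = free space


def sense(position, sensor_range=2):
    """Return readings for just the cells the sensor can actually reach."""
    lo = max(0, position - sensor_range)
    hi = min(len(TERRITORY), position + sensor_range + 1)
    return {i: TERRITORY[i] for i in range(lo, hi)}


def build_map(path):
    """Start with an entirely unknown map ('?') and update it only from
    sensor readings gathered along the path."""
    belief = ["?"] * len(TERRITORY)
    for position in path:
        for cell, reading in sense(position).items():
            belief[cell] = reading  # correspondence created by looking
    return "".join(belief)


print("territory:", TERRITORY)
print("agent map:", build_map(path=[1, 2, 3]))
# Cells the agent never sensed stay '?', however confident it might
# otherwise feel about them.
```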
This fact should be turned into an SMBC cartoon.
That's what I was thinking. Maybe in something like Knowledge Representation and Reasoning.
AI books tend to make that assumption pretty explicitly. For those of a more philosophical bent, some might say something like "The world pushes back", but it's not like anyone doing engineering is in the business of questioning whether the external world exists.
Epistemology and the Psychology of Human Judgment (badger's summary) seems relevant, as one of the things they do is attack reliabilism for its uselessness. I don't recall any direct precedents, but it's been a while since I read it.
Bishop & Trout call their approach "strategic reliabilism." A short summary is here. It's far more Yudkowskian than normal reliabilism. LWers may also enjoy their paper The Pathologies of Standard Analytic Epistemology.
That was a pretty cool paper. I don't think I've ever seen SPRs (statistical prediction rules) in a philosophy paper before.
For the curious, I interviewed Michael Bishop a couple years ago.