RobinZ comments on The Useful Idea of Truth - Less Wrong

77 Post author: Eliezer_Yudkowsky 02 October 2012 06:16PM




Comment author: Eliezer_Yudkowsky 02 October 2012 05:26:28AM 5 points [-]

Koan answers here for:

What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?

Comment author: RobinZ 02 October 2012 02:47:52PM 9 points [-]

Before reading other answers, I would guess that a statement is meaningful if it is either implied or refuted by a useful model of the universe - the more useful the model, the more meaningful the statement.

Comment author: RobinZ 02 October 2012 02:59:31PM 0 points [-]

Looking at Furslid's answer, I discovered that my definition is somewhat ambiguous: a statement may be implied or refuted by many different kinds of models, some nearly useless and some anything but, and my definition offers no guidance on which model's usefulness reflects the statement's meaningfulness.

Plus, I'm not entirely sure how it works with regard to logical contradictions.

Comment author: [deleted] 02 October 2012 08:45:10PM *  1 point [-]

Where Recursive Justification Hits Bottom and its comment thread should be interesting to you.

In the end, we have to rely on the logical theory of probability (as well as standard logical laws, such as the law of noncontradiction). There is no better choice.

Using Bayes' theorem (beginning with priors set by Occam's Razor) tells you how useful your model is.
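The idea above can be sketched numerically. Here is a minimal Python illustration (the coin-flip data, the candidate models, and the specific prior penalty are all toy stand-ins of my own, not anything from the thread): two models are compared by posterior odds, with the simpler model favored a priori in rough Occam's-Razor fashion.

```python
import math

# Toy data: 10 coin flips, 8 heads observed.
# Model A: fair coin (no free parameters).
# Model B: biased coin with bias 0.8 (one tuned parameter).
flips, heads = 10, 8

def log_likelihood(p):
    """Log-probability of the observed flips given heads-probability p."""
    return heads * math.log(p) + (flips - heads) * math.log(1 - p)

# Occam-style prior: the simpler model gets more prior mass.
# (The 2:1 split is an arbitrary illustrative penalty, not a canonical rule.)
log_prior_a = math.log(2 / 3)
log_prior_b = math.log(1 / 3)

log_post_a = log_prior_a + log_likelihood(0.5)
log_post_b = log_prior_b + log_likelihood(0.8)

# Posterior odds: how strongly the data support the complex model
# after the simplicity prior has had its say.
odds_b_over_a = math.exp(log_post_b - log_post_a)
print(f"posterior odds (biased : fair) = {odds_b_over_a:.2f}")
```

With these toy numbers the data overcome the simplicity prior, which is the sense in which Bayes' theorem "tells you how useful your model is": a model earns its complexity only by predicting the data better.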

Comment author: RobinZ 03 October 2012 02:40:44AM 4 points [-]

I think I was unclear. What I was considering was along the following lines:

Take the example from the article. Let us stipulate that the professor's use of the terms "post-utopian" and "colonial alienation" is, for all practical purposes, entirely uninformative about the authors and works so described.

Any worthwhile model of the professor's grading criteria will include the professor's list of "post-utopian" works. These models will not be very useful, however.

Any sufficiently-detailed model of the entire universe, on the other hand, will include the professor, and therefore the professor's list - but will be immensely useful thanks to the other details it includes.

Which model should we refer to when considering the statement's meaningfulness, then?

What occurred to me just now, as I wrote out the example, is the idea of simplicity. If you penalize models that add complexity without adding practical value, the professor's list will rapidly be cut from almost any model more general than "what answer will receive a good grade on this professor's tests?"
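The pruning rule in the paragraph above can be made concrete with a toy scoring function in the spirit of minimum description length (every number and name here is illustrative, invented for the sketch): a model's score is its predictive misfit plus a charge per stored detail, so a detail that does not reduce misfit only makes the model worse.

```python
# Toy MDL-style scoring: lower is better.
def score(errors, n_details, penalty=1.0):
    """Predictive misfit plus a complexity charge per stored detail."""
    return errors + penalty * n_details

# A general model of the world that omits the professor's list:
general_without_list = score(errors=5, n_details=100)

# The same model with the professor's list bolted on. By stipulation
# the list predicts nothing of practical value, so misfit doesn't drop:
general_with_list = score(errors=5, n_details=120)

# A narrow grade-prediction model, where the list *does* pay rent
# by cutting prediction errors:
grade_predictor = score(errors=2, n_details=25)

# The complexity penalty cuts the list from the general model...
assert general_without_list < general_with_list
# ...but keeps it in the model whose job is predicting the grades.
assert grade_predictor < general_without_list
```

The design choice doing the work is that the penalty applies uniformly: the professor's list survives only inside the model where it buys predictive accuracy, which matches the pruning behavior described above.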

Comment author: [deleted] 02 October 2012 08:33:16PM *  0 points [-]

This is incontrovertibly the best answer given so far. My answer was that a proposition is meaningful iff there exists an oracle machine that takes the proposition and the universe as input, outputs 0 if the proposition is true, and outputs 1 if it is false. However, this begs the question, because an oracle machine is defined in terms of a "black box".
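The proposed criterion can be written down as a type signature (hypothetical Python stand-ins of my own; a "universe" here is just a lookup table of truth values, which is precisely the black box the answer concedes it cannot define):

```python
from typing import Callable

# Hypothetical stand-ins: a universe is a dict of facts, and a
# proposition is a key into it.
Universe = dict
Proposition = str

# The criterion: a proposition is meaningful iff some oracle of this
# shape exists for it. Following the comment's convention,
# 0 means true and 1 means false.
Oracle = Callable[[Proposition, Universe], int]

def toy_oracle(prop: Proposition, universe: Universe) -> int:
    """A toy oracle that simply reads the answer off the universe."""
    return 0 if universe.get(prop, False) else 1

universe = {"snow is white": True}
print(toy_oracle("snow is white", universe))
```

The circularity is visible in the code: `toy_oracle` only works because the universe is already given as a table of truth values, so the hard part of the definition has been smuggled into the input.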