John_Maxwell_IV comments on Empirical claims, preference claims, and attitude claims - Less Wrong Discussion

5 Post author: John_Maxwell_IV 15 November 2012 07:41PM

Comment author: John_Maxwell_IV 23 March 2014 03:16:03AM 0 points [-]

But all assertions have implicatures, even paradigmatically empirical ones. And all assertions convey at least as much information about the beliefs and values of the asserter as they do about the thing asserted.

I disagree. The claim "Justin Bieber sucks" conveys information about the preferences of the speaker to a greater degree than "Windows sucks".

It only follows that all propositions of the form <x sucks>, where "sucks" is used in the Madonna way and not the Windows way, are false propositions.

Sure, you can call them false, but they're an interesting subset of false propositions: they're false not because they make incorrect statements about the world, but because they don't correspond to real-world properties. And it may be useful to hack your brain to think of such a proposition as "true" for self-efficacy purposes.

(6) Similarly, it can't mean that "Madonna sucks" is an incorrigible belief. New data could convince me that Madonna doesn't suck after all — that she no longer sucks (because her new CD is excellent), or that she never sucked in the first place (because I mistook someone else's music for hers, or because my music-evaluating faculties were impaired when I first listened to her).

You would be being kind of silly, though, because, as you say, "Madonna sucks" corresponds to no real-world property. From a purely pragmatic perspective, you experience no loss regardless of the truth value you assign to statements that have dangling pointers to things that aren't real-world properties. So you might as well choose whatever truth value best helps your brain get things done.