
MarsColony_in10years comments on Making Beliefs Pay Rent (in Anticipated Experiences) - Less Wrong

110 Post author: Eliezer_Yudkowsky 28 July 2007 10:59PM





Comment author: MarsColony_in10years 19 February 2015 07:06:44PM 1 point

The LessWrong FAQ says that there is value in replying to old content, so I'm commenting in hopes that it is useful to someone in the future, and just for the sake of organizing my thoughts.

I would have phrased this differently than Yudkowsky, but I think I understand the concept he was getting at when he gave this example:

Or suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a "post-utopian". What does this mean you should expect from his books? Nothing. The belief, if you can call it that, doesn't connect to sensory experience at all.

His point is that this is just semantics. It makes no difference to the world whether we label something "post-utopian" or "aegffsdfa eereraksrfa" or anything else. The words you read in the book will be the same. The reason I don't like this example is that, if I actually knew some literary jargon, I might get some real, verifiable information that actually would lead me to expect a specific kind of sensory experience. It's just that the classification scheme is arbitrary, and so is my belief that one classification scheme is "correct".

The label is just a label, so arguing about classification schemes is just semantics. Using this definition, your belief that the Crusades took place would affect what sorts of things you would expect to read, and what sorts of archeological finds you would expect if you went looking for them. However, if you believe that the Crusades marked the beginning of the High Middle Ages, that would just be semantics. We could say that the Middle Ages started at the sacking of Rome, or we could make a label like "Dark Ages" to describe the intermediary period. What we call it and how we classify it makes no difference to the actual reality of history. It's just semantics.

Comment author: TheAncientGeek 23 February 2015 12:38:23PM 3 points

Semantic labels are part of the structure of an explicit model. For instance, the Chinese use the same word for both "rat" and "mouse". A model with a single ratmouse vertex will behave differently from a model with separate rat and mouse vertices. The structure and function of a model affect what it predicts, what its users can notice, and how they behave. Agents do not passively receive a stream of predetermined experiences; they interact with the world, and the experiences they can expect depend on the structure and function of their models...
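[Editor's note: the vertex point above can be sketched as a toy lookup model. All names and numbers here are illustrative assumptions, not anything from the comment; the idea is only that a model whose label scheme merges two categories must give them one set of predictions, while a finer scheme can predict differently.]

```python
# Toy illustration: how label granularity changes what a model predicts.
# One model lumps rats and mice under a single "ratmouse" vertex; the
# other keeps separate vertices, so it can make distinct predictions.

coarse_model = {
    "ratmouse": {"typical_length_cm": 15},  # one vertex for both animals
}

fine_model = {
    "mouse": {"typical_length_cm": 8},      # separate vertices allow
    "rat": {"typical_length_cm": 23},       # distinct predictions
}

def predict_length(model, animal, label_of):
    """Predict body length by looking up the animal's label in the model."""
    return model[label_of[animal]]["typical_length_cm"]

# Each model classifies the same animals under its own label scheme.
coarse_labels = {"house mouse": "ratmouse", "brown rat": "ratmouse"}
fine_labels = {"house mouse": "mouse", "brown rat": "rat"}

# The coarse model is forced to predict the same thing for both animals;
# the fine model is not.
print(predict_length(coarse_model, "house mouse", coarse_labels))  # 15
print(predict_length(fine_model, "house mouse", fine_labels))      # 8
```

The two models describe the same animals; only the label scheme differs, yet they license different anticipations.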

...and more besides. Models contain evaluative weightings as well as neutral structure. For instance, in the English-speaking world, mice have the connotation of being cute, rats of being vermin. The professor might not be failing to specify an empirically confirmable concept when describing the writer as a post-utopian: she might rather be succeeding in tweaking her students' evaluative model. She might be aiming at making a social or political point.

There is a long history of the political influence of language, ranging from the Greek rhetoricians to Orwell's essays. A STEM type might consider it pointless to focus on such issues rather than on what can be proved objectively. A humanities type might equally consider it pointless to focus on objective, empirical claims with no social or political upshot. Neither complaint is really about meaningfulness or semantics, in the sense of the meaningfulness of the words; rather, both are about the subjectively evaluated pointfulness of an activity.

By a convoluted meta-level irony, the way the term "semantics" is often used is itself a way of funneling the reader towards a conclusion. We have seen that there are circumstances where a semantic change would make a difference: where it makes a structural/functional change, and where it makes an evaluative/connotational difference. Since these circumstances don't always apply, there are circumstances where a semantic change really is trivial, really "just semantics". For instance, if the word "cat" were replaced by the word "zeb" in a connotationally neutral way, that would be semantics of a pointless kind that doesn't change anything. But that situation is atypical. Although the standard rhetoric about what is "just semantics" suggests the opposite, most rewordings make a difference. Indeed, it is likely that people object to rewordings because they do make a difference, not because they don't.

Consider:

A: So you're pro-abortion?
B: I'm pro-choice.
A: That's just semantics.

A has spotted that B's rewording has strengthened his argument, by introducing a phrasing with a positive connotation, and so she objects to it... using the common apprehension that rewordings are just semantics, and don't change anything!

Comment author: MarsColony_in10years 24 February 2015 10:11:09PM 0 points

Thanks for broaching that topic. I considered pointing out that my "aegffsdfa eereraksrfa" example might be more difficult to pronounce than "post-utopian", and so actually would have an impact on the world in general. On reflection, I decided to assert that it "makes no difference", since that would spare a lot of confusion. It's a good first-order approximation. When introducing a topic, it's important to take the Bohr-model view of the world before trying to explain quarks and leptons.

The entanglement of semantic language with our interpretation of reality clouds things. Scientific language is precise, but often dry and hard to understand. However, by de-coupling the two, we can study the underlying reality without those (or perhaps with only minimal) distorting effects from our language. That's what we are doing when we talk about Map and Territory here on LW. We get a better map from this, but if we also compare the collective maps of societies to the best maps of reality, we can look for systematic differences. Some of these are cognitive biases, which we tend to concentrate on here on LW. However, there are also many other interesting or useful things we can learn about ourselves as mapmakers. For example, the Bouba/Kiki effect might help us choose more intuitive vocabulary as we build up an ever more extensive set of jargon.

Just studying the way languages evolve can be informative, whether it's done rigorously with computational linguistics or informally by an author or artist. The mere existence of a formal scientific understanding of reality allows a poet or philosopher, even one familiar only with the answers and not the underlying explanations, to look at some facet of human nature and ask "isn't it odd when people...". A great deal of social commentary is built from that one question.