If you've spent any time with foreigners learning your language, you may have been in conversations like this:

Mei: I'm a bit confused ... what's the difference between "even though" and "although"?

Albert: Um, I think they're mostly equivalent, but "even though" is a bit more emphatic.

Barry: Are you sure? I remember something about one being for positives, and the other for negatives. For example, let's see, these sentences sound a bit weird: "He refused to give me the slightest clue, although I begged on my knees", and "Although his car broke down on the first mile, he still won the rally".

People can't automatically state the rules underlying their language, even though they follow them perfectly in their daily speech. I was made especially aware of this when teaching French to Chinese students: I frequently had to revise my explanations, or just say "sorry, I don't know what the rule is for this case, you'll just have to memorize it". Speaking the language and stating its rules are learned separately.

Morality is similar: we feel what's wrong and what's right, but may not be able to formulate the underlying rules. And when we do, we're likely to get it wrong the first time. For example you might say:

It has been suggested that animals have less subjective experience than people. For example, it would be possible to have an animal that counts as half a human for the purposes of morality.

But unlike grammar, people don't always agree on right and wrong: if Alfred unintentionally harms Barry, Barry is more likely to think that what Alfred did was morally wrong, even if both started off with similar moral intuitions. So if you come up with an explanation and insist it's the definition of morality, you can't be "proven wrong" nearly as easily as with grammar. You may even insist your explanation is true, and adjust your behavior accordingly, as some religious fanatics seem to do ("what is moral is what God said" being a quite common rule people come up with to explain morality).

So: beware of your own explanations. Morality is a complex topic; you're even more likely to shoot yourself in the foot than with grammar, and even less likely to realize that you're wrong.

(edit) Related posts by Eliezer: Fake Justification, Fake Selfishness, Fake Morality.


To extend the metaphor: it is a well-known observation among linguists and amateur grammarians that the more you think about edge cases, the worse your intuition becomes.

I wonder if there is a similar hazard from spending too much time thinking about weird hypothetical situations.

"Hard cases make for bad law."

I'm not sure that there is such an effect, but I would be willing to bet in favor of it. Not standing next to someone who obsesses over the Trolley Problem when I'm waiting for a train, that sort of thing.

Actually, this is a point that I've wondered about for a while. Trolley problems rely on the Least Convenient Possible World constraint to force your decision. While this is great for helping you to investigate your intuitions, it's terrible for saving people from trolleys.

Maybe sometimes you need the guy who can make the cold, utilitarian call and choose who lives and who dies. But more often than not you want the guy who will actually try to think of a way to save everybody. I'd hate for someone to die on the tracks because the guy who could have saved him/her thought that derailing the trolley was cheating.

TLDR: Spending too much time on artificially constrained problems seems like it could optimize you away from being able to think in real-world situations.

"If the real world was maximally inconvenient, we would all be dead by now."

This idea has practical applications for anyone who designs, tests, and/or releases for use complex systems, such as vehicles, computing systems (hardware or software), buildings, and so on. The first time you're part of a team that does this, you spend a lot of the last days before release obsessing about the worst-cases that could assail your system, and then you test for them. This is all well and good the first time.

It's when you work on designing your next system that you have to watch out, because you can be so focused on edge cases that you fail to design a system that just basically works well without costing too much.

Hence, the counterbalancing principle of design: KISS (Keep It Simple, Stupid!)

I don't think spending time on artificially constrained problems will optimize you away from thinking the right way in real-world situations, just that it only develops one of the skills that is useful.

If you're preparing to row across the Atlantic Ocean, it's probably a bad idea to spend all your energy improving your upper-body strength; you should also learn a lot about weather, about nutrition and physiology, etc.

"This is an example of what I call "lifeboat questions"--ethical formulations such as "What should a man do if he and another man are in a lifeboat that can hold only one?" First, every code of ethics must be based on a metaphysics--on a view of the world in which man lives. But man does not live in a lifeboat--in a world in which he must kill innocent men to survive."

-Ayn Rand

This is my impression of UDT.

Which implies that you treat decision theory like grammar and morality rather than a reducible abstract theory.

Actually, your metaphor is more apt than you give it credit for. Native speakers can't be 'proven wrong' in their use of the language as 1) no language has a formal grammar and 2) to the extent that there are rules they are extracted from the way native speakers use the language. Something like morality, then.

Come to think of it, this can be used to construct a pretty intuitive response to those that claim that 'without god there is no objective morality and therefore society will collapse'. There is no formal grammar for English, and yet we're able to communicate pretty well.

[another stray thought] ...so then CEV would be like trying to extract a fully formal grammar for a given language, only harder.

Native speakers can't be 'proven wrong' in their use of the language

Right, but they can be proven wrong in the explanations they give about their use of language (except for rare pathological sentences, speakers of the same language agree which sentences "feel wrong"). Disproving an explanation about one's morality is much harder.

(I don't know if I'm disagreeing with you here)


[another stray thought] ...so then CEV would be like trying to extract a fully formal grammar for a given language, only harder.

Not really, CEV doesn't depend on everyone's extrapolated volitions converging to total moral agreement.

(I tried to come up with a better CEV analogy, but I'm not sure if that's doable without stretching the metaphor far beyond any real explanatory usefulness; the ideas of volition and extrapolation don't actually have any obvious linguistic analogues, as far as I can tell. The platonic computation of morality could be compared to universal grammar or something, but it's not a strong analogy (for the purposes we're interested in) because universal grammar doesn't have the recursive/reflective property that morality does (we can ask ourselves "Should I make this change to the algorithm I use to answer 'should' questions?", and although I suppose we could also ask ourselves if we should change our linguistic algorithms, that wouldn't loop back on itself like morality does, so the problem of "extrapolating" it wouldn't be as difficult or as interesting).)

You're of course right, which is why I said CEV is even harder. Extracting a formal grammar only has to achieve coherence, and yet mainstream linguistics has all but given up. State-of-the-art machine translation tools use statistical inference instead of vast rulesets, and I shudder to think what that would mean for CEV.

State of the art machine translation tools use statistical inference instead of vast rulesets and I shudder to think what that would mean for CEV.

It would mean using Bayesian (probabilistic) logic instead of Aristotelian logic.

My intuition says that CEV is unlikely to consist of vast rulesets. As you yourself stated, if you take a look at the most successful learning algorithms today, they all use some form of statistical approach (sometimes Bayes, sometimes something else).
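For what it's worth, here's a minimal sketch of what "statistical inference instead of vast rulesets" means in the simplest possible case. The corpus and scoring function are made up for illustration (real MT systems are vastly more sophisticated): a bigram model rates how "natural" a word sequence is purely from counts, with no explicit grammar rules anywhere.

```python
# Toy sketch, not a real NLP system: a bigram model "learns" which word
# sequences feel acceptable purely from co-occurrence counts.
from collections import Counter

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat ate the fish",
]

unigrams = Counter()
bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

def score(sentence):
    """Average conditional bigram probability; higher = more 'natural'."""
    words = sentence.split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(bigrams[p] / unigrams[p[0]]
               for p in pairs if unigrams[p[0]]) / len(pairs)
```

The model has never been told anything about word order, yet `score("the cat sat")` comes out higher than `score("cat the sat")` - it has absorbed a regularity of English without anyone writing it down as a rule, which is roughly the sense in which statistical systems sidestep "vast rulesets".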

EDIT: For some reason "statistical approach" leaves a bad taste in some people's mouths. If you are one of these people, I'd be happy to know why so that I can either update my beliefs or figure out how to explain why the statistical approach is good.

I've always chalked it up to the fact that you can get decent results with a model that doesn't even pretend to try to correspond to reality and which is thus not robust to things like counterfactuals. Great results on exactly the domain you programmed for is just asking someone to accidentally apply it to a different domain...

I don't know if this is the actual reason.

Thanks. I think the reason I don't find this compelling is that I see statistical methods being applied to increasingly general problems; I also see the same general ideas being applied to solve problems in many different domains (after, of course, specializing the ideas to the domain in question). It seems to me that if we continue on this path, the limit is being able to solve fully general problems. But this seems closer to an intuition than something I can actually convince anyone of.

It feels like statistical methods are just giving up on getting it 'absolutely right' in favour of getting it 'good enough', and for morality, that just doesn't seem satisfactory. Maybe I'm underestimating statistical systems? I'd love to be corrected.


Native speakers can't be 'proven wrong' in their use of the language

I'd say they can, within limits - they can recognize that certain ways of saying things "feel wrong", and you'll rarely find speakers of the same dialect disagreeing over whether a sentence "feels wrong" (though there are probably some border cases).

My impression is that people tend to be exposed to grammar early on in school, in the form of a lot of arbitrary-seeming rules, which do not necessarily correspond with the colloquial spoken language. In English class in elementary and high school, I was taught never to split an infinitive (maybe I should say, "to never split an infinitive") and that the verb "to be" takes the nominative -- "that is I" rather than "that's me." Later, I learned that serious academic grammar scholars tend not to spend their time issuing or enforcing random rules, but rather mostly observe and analyze how people use grammar -- regional and temporal shifts in the way the language is used. In that sense, language is value-neutral. Neither French nor English is "better" than the other in a general sense; French is not just degenerate Latin; Shakespeare and Chaucer and the author of Beowulf all use the grammar of English appropriate to their times. Valley-girl English and Ebonics and West Virginia dialect are all equally valid and internally consistent, according to this approach.

Can this same analysis be applied to moral codes? If it can, even in principle, then we have some problems. As I understand it, "morality" is all about values. I think EY has considered this issue seriously, and has alluded to it in Three Worlds Collide.

My impression is that people tend to be exposed to grammar early on in school, in the form of a lot of arbitrary-seeming rules, which do not necessarily correspond with the colloquial spoken language

And indeed, they're not supposed to. "Grammar" in the sense of school consists of rules for signaling high status via speaking and writing. (The level of "arbitrariness" is what you'd expect given this.) Nothing to do with "grammar" in the sense of theoretical linguistics.

In that sense, language is value-neutral.

Linguists, however, are too hasty to jump to this conclusion in their attempt to explain that "evaluating" different language varieties is not their subject as linguists. There may be legitimate arguments (aesthetic, utilitarian, etc.) for why some forms of language are "better" than others; it's just that such arguments are strictly irrelevant from the point of view of theoretical linguistics (though not sociolinguistics, etc).

rules for signaling high status via speaking and writing.

And clarity.

Nothing to do with "grammar" in the sense of theoretical linguistics.

In English, Irish and French classes I learned about parts of speech, regular and irregular verbs, cases and declensions, moods and tenses and conjugations, gender and agreement etc etc. These are not really theoretical linguistics, but they are prerequisites for it.

Valley-girl English and Ebonics and West Virginia dialect are all equally valid and internally consistent, according to this approach.

There is also value in maintaining a common standard. Try speaking Ebonics outside black ghettos and besides the status hit, people will have a hard time understanding you.

Is there any dialect that is readily understandable to everyone who speaks English?

Why not the Received Pronunciation that's promoted (and used to be required) on the BBC? Everybody's heard it, even in the U.S. (where I am). A Midwest U.S. dialect might also work, but I can't judge that as well.

Can this same analysis be applied to moral codes? If it can, even in principle, then we have some problems. As I understand it, "morality" is all about values. I think EY has considered this issue seriously, and has alluded to it in Three Worlds Collide.

You should look at the metaethics sequence.

I'm curious: Why has the parent comment received two downvotes?

Three now.

Mine was because I would like to see fewer comments here that simply suggest "Read the sequences" without suggesting why.

Mine was because I would like to see fewer comments here that simply suggest "Read the sequences" without suggesting why.

I can understand that that could sometimes be a problem. But doesn't Eugine's quotation make it clear that he was pointing to where "EY has considered [morality] seriously"?

I guess I was surprised because I'm inclined to view this kind of cross referencing as valuable. For example, it makes it easier for lurkers to follow a chain of links back to substantive explanations in the sequences. But I take your point that "Read the sequences" is way too broad to be useful, and "read the metaethics sequence" is still very broad.

But, in this case, Costanza had possibly implied an interest in Eliezer's views on morality. And, until Costanza expresses a more particular interest, the best that you can do is to direct him/her to the entire metaethics sequence.

Yes. I accept your analysis. My downvote has now been backed out.

Now you have me curious as to why the other two downvotes happened. But not curious enough to request an explanation. Karma micromanagement postmortems are not the most productive use of our time. I'm going to make an early New Year's resolution and avoid them from now on.

I interpreted that sentence as saying, essentially, "Yeah, and I think Eliezer agrees with me in general, and here's a link to another source."

...Costanza? Care to clarify your intentions?

Also, in any case, linking ver to a ~43,000 word sequence is almost certainly less efficient than simply asking if s/he was curious about Eliezer's views on morality, and, if so, what specific questions s/he had.

Sorry I haven't participated in this sub-thread...I've been too busy reading the sequences.

Just kidding. Personally, I'm pretty new to Less Wrong and I've read some, but far from most of the material that may be relevant to this and other subjects. In the meantime, I don't know what I don't know, so a reference to the sequences is not offensive. I assume everyone who has commented so far has been acting in good faith.


I think it made sense in context. The comment it was replying to said "I think EY has considered this issue seriously, and has alluded to it in Three Worlds Collide", and the reply linked to his full analysis of it. It didn't sound like an unexplained "Read the sequences!" to me.

(Though "You should look at..." may not be the best way to say something like that.)

Agreed, what we learn as "grammar" in school doesn't correspond to the real rules of the language we use - either because the "rules" taught are "wrong" (like "not splitting infinitives"), or because they're teaching a different language from the one you learned at home. In fact, I suspect this last one is the main reason schools prominently feature teaching something that people learn naturally: it's part of an effort from central governments to impose a common language on the whole nation, instead of the local dialects that existed before widespread schooling. Teaching grammar rules explicitly makes sense if you're teaching a second language (which was the case for Welsh kids learning English, or Basque kids learning French).

Here's a linguist writing about something similar:

The fact is that almost everything most educated Americans believe about English grammar is wrong. In part this is because of misconceptions concerning the facts. In part it is because hopeless descriptive classifications and antiquated theoretical assumptions doom all discussion to failure. Amazingly, almost nothing has changed in over a hundred years.

... though his conclusion seems to be "teach the correct rules of grammar at school", whereas mine is more "what's the point? People figure out the rules independently of what factoids they're forced to memorize in school".

"what's the point? People figure out the rules independently of what factoids they're forced to memorize in school".

If there is a point in teaching biology, why not teach grammar, in a descriptive way? That language has some structure which can be analysed is something most people would not spontaneously realise, and even if they did, they would not have the common terminology. Without learning about grammar of my native language it would be far harder for me to understand the explanations of grammar of foreign languages.

Agreed, making it easier to learn foreign languages is an important function. As is learning to reason about abstract rule systems in general. And preventing the language from splitting into dialects as they do when left to themselves (though I suspect modern media makes this much less of a risk than it was 200 years ago).

I think what I'm mostly railing against is lost purposes - we teach things to kids without why being at the front of our minds. Which means that when the needs change (for example, once local dialects are pretty much eliminated in your country, as seems to be the case with France and Russia), you don't adjust the strategy accordingly. And you don't look as hard for alternate ways of reaching that goal.

For example, if the main advantage of grammar classes is that they make it easier to learn foreign languages, it might be useful to integrate that aspect in grammar lessons; once the basics are covered, instead of teaching more complex concepts (like subjunctives in French), explain how those basics are different in other languages the kids are already learning (this would make sense for France - the grammar is complicated and foreign languages are introduced early because of the EU; it may not make as much sense for the US).

(Note that I don't have a very good grasp of the teaching of English grammar; I'm mostly working from my experience with French. I was mostly schooled in France and spoke English at home, so I don't know how much grammar is inflicted on English-speakers at school.)

Teaching grammar is probably not the same thing as teaching the standard variant of the language. People can be told what an infinitive is without being commanded not to split it. On the other hand, they can learn to speak like the British queen simply by observing the queen, without being taught how to tell apart subject and object.

The purpose of the former is to promote understanding of how language works, and maybe to help people to structure their sentences and speak more clearly (if needed) or make foreign language learning easier.

As for the latter, the purpose might be dialect elimination, but I am not sure. I don't know how much exactly the local dialects are eliminated in France or Russia, but in the Czech Republic, where I come from, the local dialects are not eliminated. There is complete mutual intelligibility between all dialects (and almost complete mutual intelligibility with Slovak language and its dialects), but the difference between how most people really speak and the standard language taught in schools is substantial. I have read that such (or even greater) diglossia exists also within Finnish, all variants of Arabic, Greek or Welsh.

Dialect elimination was an important reason probably in France, where language unification was an openly admitted political goal and "patois" have had very low status, but I suspect not everywhere. (France still has problems with recognising minority languages and has probably the most restrictive language policy in Europe.) Since I have not been schooled in France, I have never been told in school that dialects are bad or that I should always speak the literary standard. The standard was intended mostly for writing and official communication (e.g. TV news), and in private communication people are expected to use whatever dialect they speak natively.

I think that importance of teaching a standard language is probably overrated, but still it does probably make language changes slower, which can be good since future generations will need translations of today's texts later (maybe not an important issue since we will have good automatic translators soon). People also tend to treat the standard form of whatever language they speak as a terminal value, which is harder to argue about.

The learning foreign languages point is a good one. Knowing the descriptive principles seems far more useful for that purpose than for anything to do with actual usage of the primary language itself. (Second most important use: knowing which words are acceptable when playing Scrabble!)

it's part of an effort from central governments to impose a common language to the whole nation, instead of the local dialects that existed before widespread schooling.

This is not necessarily a bad thing. How useful a language is to you depends on how many other people speak it.

I fully agree!

Western governments seem somewhat reluctant to talk about how they crushed local dialects in the name of "education" (for understandable political reasons - it doesn't sound very good when presented like that, and gives arguments to local separatists); maybe it would be better if they just 'fessed up and said "Oh OK, we admit those "grammar lessons" in school were just a pretext to impose linguistic uniformity; now that we have it, we can drop those pointless lessons".

What would happen if the government dropped grammar lessons? The parents who could afford to would arrange for special grammar tutors for their children. The version taught by those tutors would then become a signal of high status, and other parents would demand that their children be taught it as well.

Would they really? I'm not a parent, but I at least like to think I'd spend extra money teaching my kids useful things that are also status signals, like economics or calculus or writing (real writing, not "don't split infinitives"). Basically anything you could easily get tutoring for is a better use of time and money than grammar education.

And are the kids going to give everyone they meat a lecture on calculus?

Also, the rules probably wouldn't include "don't split infinitives". Using that as your mental example is skewing your intuition.

Notice that on most internet forums posts with bad spelling and grammar are taken less seriously. This is because readers see that they signal low quality content.

And are the kids going to give everyone they meat a lecture on calculus?

I'm wondering if this was deliberate, to illustrate your point.

If so, bravo, it worked really well.

Actually, it was a typo, but now that you mentioned it, it does help, so I'll leave it up.

Would you still say that if you lived in an area where the local, everyday language was of exceptionally low status - e.g. Ebonics, Brummie, or Neapolitan?

I don't think the conclusion is "teach the correct rules of grammar at school". The people at Language Log are thoroughly descriptivist as I suspect is most of modern linguistics.

That's not a contradiction. "Teach the correct rules of grammar at school" doesn't mean "teach people how to speak properly according to some prescriptive standard", it means "teach people (something approximating) the actual syntax of the English language so they can discuss it sensibly and understand something of what is actually happening when they put sentences together". (As opposed to the currently commonly taught "traditional grammar" which attempts to describe English but just gets it wrong.)

I agree they're descriptivist; by "the correct rules" I meant "the rules that govern the way people actually speak" (in which case "teaching the rules" would be more "showing people what the rules are" and not "instructing people which rules they should follow", the same way a biology class on digestion doesn't tell people how to digest.)

But then I haven't done an in-depth research on the education policy advocated on Language Log.

I started reading Pinker's Words and Rules a few months ago (I didn't actually do more than start then). He makes this point - apparently there are two distinct language systems running in parallel - one that simply regurgitates words (which accounts for irregulars, and the fact that very common words are more likely to be irregular) and a rule system that is more flexible but that has higher processing overhead. (Thanks for reminding me, I really need to get back and finish it.)

I'm currently reading Pinker's The Stuff of Thought; it's pretty interesting so far - he compares different theories on linguistic determinism and how they fit in with various observations, he talks about which concepts are likely to be somewhat inborn, etc.

Note that when we learn correct grammar, we sometimes also learn about important real-world concepts. For example, one must understand the distinction between countable vs. uncountable quantities before one can understand when it is appropriate to use the word "fewer" instead of the word "less".

Note also that although the distinction between count-nouns and mass-nouns is a real one, a) it does not always match up with some physical difference between a countable and uncountable quantity, and b) the people who insist that "less" should only ever be used with mass-nouns are talking nonsense. "Less" with count-nouns is found all over the place in the speech and writing of competent language users.

For a) it would be good to see an example. For b), "should only ever be used" of course sounds prescriptivist; nevertheless, the distinction exists, even if it's often blurred, and countable v uncountable is the right (and only) way to explain it.

One common linguistic treadmill is count noun → plural → mass noun. The clearest example in English is ‘agenda’ (originally the plural of ‘agendum’, an obsolete word in place of which we now use ‘agenda item’), which has now generated its own plural. Other examples: ‘algae’ (still the plural of ‘alga’ to biologists but a mass noun to layfolk) and ‘virus’ (which went backwards when we learnt more about the subject; originally a mass noun, now a count noun with a plural).

Any specific usage can be justified on its own terms, but the whole process is the old philosophical problem of distinguishing a set of things from the things themselves.

Interesting food for thought. I think you have a small typo though. You write:

"if Alfred unintentionally Barry"

Is there a verb missing?

Yes, fixed, thanks.


But unlike grammar, people don't always agree on right and wrong : if Alfred unintentionally Barry, Barry is more likely to think that what Alfred did was morally wrong, even if both started off with similar moral intuitions.

Er...is that supposed to be a joke?

Nope, a typo - yes, yes, I know, I should reread what I write. The problem is that the result of rereading is frequently "hmm I could improve this sentence by deleting this and replacing it by that", and in this case there was a casualty.

More likely, a typo.


Probably. But it certainly was a well-timed typo given the content of the paragraph.