According to Johan Mårtensson of Lund University, learning a new language quickly causes parts of your brain to grow and increases its activity:
This finding came from scientists at Lund University, who examined young recruits with a talent for acquiring languages. The recruits were able to speak Arabic, Russian, or Dari fluently after just 13 months of learning, having had no prior knowledge of the languages.
After analyzing the results, the scientists saw no difference in the brain structure of the control group. However, in the language group, certain parts of the brain had grown, including the hippocampus, responsible for learning new information, and three areas in the cerebral cortex.
And there is more:
One particular study from 2011 provided evidence that Alzheimer's was delayed 5 years for bilingual patients, compared to monolingual patients.
The researchers aren't entirely sure why speaking in a less familiar tongue makes people more "rational", in the sense of not being affected by framing effects or loss aversion. But they think it may have to do with creating psychological distance, encouraging systematic rather than automatic thinking, and with reducing the emotional impact of decisions. This would certainly fit with past research that's shown the emotional impact of swear words, expressions of love and adverts is diminished when they're presented in a less familiar language.
Paywalled article (can someone with access throw a PDF up on dropbox or something?): http://pss.sagepub.com/content/early/2012/04/18/0956797611432178
Almost all instances of the word "true" can be eliminated from the sentences in which they appear by applying Tarski's formula. For example, if you say, "I believe the sky is blue, and that's true!" then this can be rephrased as the statement, "I believe the sky is blue, and the sky is blue." For every "The sentence 'X' is true" you can just say X and convey the same information about what you believe - just talk about the territory the map allegedly corresponds to, instead of talking about the map.
When can't you eliminate the word "true"? When you're generalizing over map-territory correspondences, e.g., "True theories are more likely to make correct experimental predictions." There's no way to take the word 'true' out of that sentence because it's talking about a feature of map-territory correspondences in general.
Similarly, you can eliminate the word 'rational' from almost any sentence in which it appears. "It's rational to believe the sky is blue", "It's true that the sky is blue", and "The sky is blue", all convey exactly the same information about what color you think the sky is - no more, no less.
When can't you eliminate the word "rational" from a sentence?
When you're generalizing over cognitive algorithms for producing map-territory correspondences (epistemic rationality) or steering the future where you want it to go (instrumental rationality). So while you can eliminate the word 'rational' from "It's rational to believe the sky is blue", you can't eliminate the concept 'rational' from the sentence "It's epistemically rational to increase belief in hypotheses that make successful experimental predictions." You can Taboo the word, of course, but then the sentence just becomes, "To increase map-territory correspondences, follow the cognitive algorithm of increasing belief in hypotheses that make successful experimental predictions." You can eliminate the word, but you can't eliminate the concept without changing the meaning of the sentence, because the primary subject of discussion is, in fact, general cognitive algorithms with the property of producing map-territory correspondences.
The word 'rational' should never be used on any occasion except when it is necessary, i.e., when we are discussing cognitive algorithms as algorithms.
If you want to talk about how to buy a great car by applying rationality, but you're primarily talking about the car rather than considering the question of which cognitive algorithms are best, then title your post Optimal Car-Buying, not Rational Car-Buying.
Thank you for observing all safety precautions.
The short version is that if the language you speak requires different verbs for the present and the future, it causes you to think about the future differently. Depending on the magnitude of the effect, this has important implications for construal level theory. If your language allows you to think about the future in Near mode, it may allow you to think about it more rationally.
Previous discussion on one of Keith Chen's papers here.
The language you speak may affect how you approach your finances, according to a working paper by economist Keith Chen (seen via posts by Frances Woolley at the Worthwhile Canadian Initiative and Economy Lab). It appears that languages that require more explicit future tense are associated with lower savings. A few interesting quotes from a quick glance:
...[I]n the World Values Survey a language’s FTR [Future-Time Reference] is almost entirely uncorrelated with its speakers’ stated values towards savings (corr = -0.07). This suggests that the language effects I identify operate through a channel which is independent of conscious attitudes towards savings. [emphasis mine]
Something else that I wasn't previously aware of:
Loewenstein (1988) finds a temporal reference-point effect: people demand much more compensation to delay receiving a good by one year (from today to a year from now) than they are willing to pay to move up consumption of that same good (from a year from now to today).
[I'd put this in an open thread, but those don’t seem to happen these days, and while this is a quote it isn't a Rationality Quote.]
You know, one of the really weird things about us human beings […] is that we have somehow created for ourselves languages that are just a bit too flexible and expressive for our brains to handle. We have managed to build languages in which arbitrarily deep nesting of negation and quantification is possible, when we ourselves have major difficulties handling the semantics of anything beyond about depth 1 or 2. That is so weird. But that's how we are: semantic over-achievers, trying to use languages that are quite a bit beyond our intellectual powers.
This seems like it might lead to something interesting to say about the design of minds and the usefulness of generalization/abstraction, or perhaps just a good sound bite.
Hello Less Wrongers.
I'm still relatively new to the LW community, but I would like to share with you a few comments and ideas for making LW a better place for non-native English speakers.
There are two classes of people among non-native English speakers (of course, the boundaries are fuzzy):
1. People who, like me, are relatively fluent in English, but who don't have the same fluency as natives do.
2. People who don't speak much English at all.
The problems are of course different for 1. and 2., but I can see ways to improve things for both categories.
Moderately fluent English speakers
Being a member of 1., here are my feelings after a few months of lurking and then trying to participate a bit in LW, from my own French point of view:
- LW is still quite US-centric in many ways. That's not much of a problem, at least for me, since I'm used to dealing with US citizens on IRC and other Internet places, but it is still something to keep in mind. The political questions in Yvain's census/survey are a clear example of that (please, Yvain, don't take that personally; overall you're doing a great and useful thing with that survey, so thank you), but the issue is much more general.
- Writing an article on LW is not easy for non-native English speakers. I tried twice, and twice I got many remarks about my English skills. I don't take them badly; thanks to those who took the time to point out my mistakes and explain them so I can improve. But still, it feels like it's harder to participate.
I don't have any magical solution for 1., except asking everyone to try to be more careful when stating things that are culture-dependent; it's part of the more general problem of expecting short inferential distances.
For 2., I'm wondering if it would be possible to have some LW members volunteer to review articles written by non-native English speakers and improve the English quality before the article is published to LW in general. Do you think the idea is good overall? Would any of you volunteer to do that? If so, it would be nice to include a paragraph about it, or at least a link to a page explaining the process (how to submit an article to that team, ...), on the Welcome to Less Wrong page.
I don't think non-English speakers (or people with only basic English skills) can reasonably participate on LW itself, of course. But there are still ways to help them become stronger; I'm thinking of translation.
Right now I'm helping Adrien with the French translation of HP:MoR. There are also attempts to translate some parts of the Sequences into other languages. Mirroring the idea of having native English speakers correct the English of non-natives, we non-natives can help by participating in the various translation efforts. But that raises several questions:
- What are the legal issues around translating HP:MoR and the Sequences? Since Eliezer links to the translations of HP:MoR from his fanfiction.net page, I guess he approves of them. But what of the Sequences? It would be nice to have some official stance from him and the other Sequence authors on how they feel about translations. I'm not a fan of the copyright system in general, but I would still consider it an utter lack of respect to translate someone's work against their will.
- How can the team be coordinated, and how can newcomers to Less Wrong learn that efforts are underway and need help? Once again, I think it would be nice to have a page (maybe on the wiki?) listing the ongoing efforts, who is participating in which, and how to contact them, with a link to it from the Welcome to Less Wrong page.
Any opinions on those suggestions? Any volunteers for joining some of the teams? Anyone from "the staff" who could answer about the legal issues, and about including those pointers on the "Welcome to Less Wrong" page?
Rationality requires intelligence, and the kind of intelligence that we use (for communication, progress, FAI, etc.) runs on language.
It seems that the place we should start is optimizing language for intelligence and rationality. One of SIAI's proposals includes using Lojban to interface between humans and an FAI. And of course, I should hope the programming language used to build an FAI would be "rational". But it would seem to me that the human-generated priors, correct epistemic rationality, decision theory, metaethics, etc. all depend on using a language that maps sufficiently rigorously to our territory.
Are "naturally evolved" languages such as English sufficient, with EY-style taboos and neologisms? Or are they sick to the core?
Please forgive and point me towards previous discussion or sequences about this topic.
Allow me to propose a thought experiment. Suppose you, and you alone, were to make first contact with an alien species. Since your survival and the survival of the entire human race may depend on the extraterrestrials recognizing you as a member of a rational species, how would you convey your knowledge of mathematics, logic, and the scientific method to them using only your personal knowledge and whatever tools you might reasonably have on your person on an average day?
When I thought of this question, the two methods that immediately came to mind were the Pythagorean theorem and prime number sequences. For instance, I could draw a rough right triangle and label one side with three dots, the other with four, and the hypotenuse with five. However, I realized that these are fairly primitive mathematics; after all, the ancient Greeks knew of them, yet had no concept of the scientific method. Would these likely be sufficient, and if not, what would be? Could you make a rough sketch of the first few atoms of the periodic table, or other such universal phenomena, in a way that would be generally recognizable? Could you convey a proof of rationality in a manner understandable even to aliens who cannot hear human vocalizations, or who see in a completely different part of the EM spectrum? Is it even possible in principle to express rationality without a common linguistic grounding?
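The two candidate signals are easy to make concrete. Here is a minimal sketch of the prime "beacon" and the 3-4-5 triangle check, purely for illustration (this is obviously not a serious first-contact protocol):

```python
def primes(n):
    """Return the first n prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        # candidate is prime if no smaller prime divides it
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# A beacon: pulses grouped by primes, e.g. 2, 3, 5, 7, 11, ...
print(primes(5))  # [2, 3, 5, 7, 11]

# The dotted right triangle from the post is one integer
# instance of the Pythagorean relation a^2 + b^2 = c^2:
assert 3**2 + 4**2 == 5**2
```

The appeal of both is that they are physical-substrate-independent: any counting species should be able to notice that prime-grouped pulses follow no periodic pattern, and that the dot counts on the triangle's sides satisfy a fixed relation.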
In other words, what is the most rational thought you could convey without the benefit of common language, culture, psychology, or biology, and how would you do it?
Bonus point: Could you convey Bayes' theorem to said ET?
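On the bonus point: Bayes' theorem is at least simple enough to demonstrate numerically, as a relationship among counts or proportions rather than words. A minimal sketch, with made-up illustrative numbers (the prior and likelihoods are my assumptions, not anything from the post):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
p_h = 0.01              # prior P(H): hypothesis starts unlikely
p_e_given_h = 0.9       # likelihood P(E|H)
p_e_given_not_h = 0.05  # false-positive rate P(E|~H)

# Total probability of the evidence, summed over both hypotheses
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior: how much the evidence should shift belief
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_h_given_e, 4))  # 0.1538
```

Conveying the underlying idea without language seems harder than conveying the arithmetic: one could perhaps draw nested regions of dots (H within the whole space, E overlapping it) so that the ratio of counts exhibits the theorem directly.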
One of the shiniest ideas I picked up from LW is inferential distance. I say "shiny" because the term, so far as I'm aware, has no clear mathematical or pragmatic definition and no substantive use in peer-reviewed science, but it was novel to me and appeared to make a lot of stuff about the world suddenly make sense. In my head it is marked as "super neat... but possibly a convenient falsehood". I ran across something yesterday that struck me as beautifully succinct and helpful toward resolving the epistemic status of the concept of "inferential distance".
Based on the community's continuing interests in diet and religion, I'd like to point out this blog post by Michael Eades, coauthor of Protein Power, wherein he suggests that one scientist's biblical literalism leads him toward a low-fat approach to nutrition over a low-carb philosophy, essentially by throwing out a bunch of evidence on the matter:
Why, you might ask, is this scientist so obdurate in the face of all the evidence that’s out there? Perhaps because much of the evidence isn’t in accord with his religious beliefs. I try never to mention a person’s religious faith, but when it impacts his scientific thinking it at least needs to be made known. Unless he’s changed his thinking recently, Dr. Eckel apparently is one of the few academic scientists who are literal interpreters of the bible. I assume this because Dr. Eckel serves on the technical advisory board of the Institute for Creation Research, an organization that believes that not only is the earth a few thousand years old, but that the entire universe is only a few thousand years old. And they believe that man was basically hand-formed by God on the sixth day of creation. And Dr. Eckel’s own writings on the subject appear to confirm his beliefs.
Of all the evidence that exists, I think the evolutionary/natural selection data and the anthropological data are the most compelling because they provide the largest amount of evidence over the longest time. To Dr. Eckel, however, these data aren’t applicable because in his worldview prehistoric man didn’t exist and therefore wasn’t available to be molded by the forces of natural selection. I haven’t a clue as to what he thinks the fossil remains of early humans really were or where they came from. Perhaps he believes – as I once had it explained to me by a religious fundamentalist – these fossilized remains of dinosaurs, extinct ancient birds and mammals and prehistoric man were carefully buried by the devil to snare the unwary and the unbeliever. If this is the case, I guess I’ll have to consider myself snared.
In Dr. Eckel’s view, man was created post-agriculturally. In fact, in his view, there was never a pre-agricultural era, so how could man have failed to adapt to agriculture?
While there's a clear persuasive agenda here and I won't present a full analysis of the situation, Eades also mentions biasing use of language earlier in the article. In particular, beware applause lights and confirmation bias when evaluating it.
When I came to read this passage "…floating and flailing weightlessly…" I said the word "weightlessly" as "walterkly". It took quite a bit of effort to be fully sure that this was a mistake; and more effort and repeating to grasp what exactly this nonsense word was, to establish its sound – I had to construct it phoneme by phoneme – clearly enough to write it down. And it seems that the reading eye, darting backwards and forwards, was plucking letters from the whole vicinity, and mixing them up, having lost its usual ability to sort them. What the whole thing emphasises, of course, is how what we call self-command is really a matter of having reliable automatic mechanisms, unthinking habits or instincts.
I hope this is a good place for this - comments/suggestions welcome - offers of collaboration more than welcome!
I envisage a kind of structured wiki, centred around the creation of propositions, which can be linked to allow communities of interest to rapidly come to fairly sophisticated levels of mutual understanding; the aim being to foster the development of strong groups with confidence in shared, conscious positions. This should allow significant confidence in collaboration.
Some aspects, in no particular order;
- Propositions are made by users, and are editable by users - as in a wiki
- Each proposition could be templated - the inspiration for the template being the form adopted by Christopher Alexander et al. in 'A Pattern Language', namely;
- TITLE (referenced)(confidence level)
- context - including links to other propositions within whose sphere this one might operate
- STATEMENT OF PROBLEM/PURPOSE OF PROPOSITION
- CONCLUSION - couched in parametric/generic/process based terms
- links to other propositions for which this proposition is the context
- Some mechanism for users to make public their degree of acceptance of each proposition
- Some mechanism for construction by individuals/groups of networks of propositions specific to particular users/groups (in other words, the context links referred to above might be different for different users/groups). These networks can work like Pattern Languages that address particular fields / ethical approaches / political or philosophical positions / projects
- Some mechanism for assignment by users/groups of tiered structure to proposition networks (to allow for distinctions to be made between fundamental, large scale propositions and more detailed, peripheral ones)
- Some mechanism for individual users to form associations with other users/established groups who are subscribing to the same propositions
- Some mechanism for community voting/karma to promote individuals to assume stewardship of groups
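The template and linking mechanisms above map naturally onto a small data model. Here is a minimal sketch of what one proposition record might look like; every name here is hypothetical, an illustration of the proposed structure rather than any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    """One templated proposition, loosely following the Pattern Language form."""
    title: str
    problem: str                                     # statement of problem / purpose
    conclusion: str                                  # couched in parametric/generic terms
    context: list = field(default_factory=list)      # titles of propositions this operates within
    children: list = field(default_factory=list)     # propositions for which this is the context
    acceptance: dict = field(default_factory=dict)   # user -> publicly stated acceptance (0..1)

    def mean_acceptance(self):
        """Aggregate the public degree-of-acceptance votes, if any exist."""
        if not self.acceptance:
            return None
        return sum(self.acceptance.values()) / len(self.acceptance)

p = Proposition(
    title="Other people are real, just like me",
    problem="Found a rational morality on shared human motivations.",
    conclusion="Assume other humans have roughly the same needs as oneself.",
)
p.acceptance["alice"] = 1.0
p.acceptance["bob"] = 0.5
print(p.mean_acceptance())  # 0.75
```

Group networks and tiered structures would then be separate layers referencing propositions by title, so that the same proposition can sit at different depths in different groups' languages.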
Enough of these for now. Some imagined interactions might be more helpful;
- I stumble across the site (as I stumbled across LessWrong), and browse proposition titles. I come across one called 'Other people are real, just like me'. It contains some version of the argument for accepting that other humans are to be assumed to have roughly the same motivations, needs, and so on, as me, and the suggestion that this is a useful building block for a rational morality. I decide to subscribe, fairly strongly. I am offered a tailored selection of related propositions, as identified by the groups that have included this proposition in their networks (without identification of said groups, I rather think). I investigate these, and at some point, the system feels that my developing profile is beginning to match that of some group or groups, and offers me the chance to look at their 'mission statement' pages. I decide to come back another day and look at other propositions included in these groups' networks, before going any further. I decline to have my profile made public, so that the groups don't contact me.
- I come across some half-baked, but interesting proposition. As a registered user, but not the originator of the proposition, I have some choices; I can comment on the proposition, hoping to engage in dialogue with the proposer that could be fruitful, or I can 'clone' (or 'fork') the proposition, and seek to improve it myself. Ultimately, the interest of other users will determine the influence and relevance of the proposition.
- I am a fundamentalist Christian (!). I come across the site, and am appalled at its secular, materialist tone. I make a new proposition: 'The Bible is revealed truth, in all its glory' (or some such twaddle. Of course, I omit to specify which edition, and don't even consider the option of a language other than English - but hey, what do you expect?). Within days, I have assembled a wonderful active group of woolly-minded people happily discussing the capacity of Noah's Ark, or whatever. The point here is that the platform is just that - a platform. Human community is a Good Thing.
- I am pushed upward by the group I am part of to some sort of moderator role. The system shows various other groups who agree more or less strongly with most of the propositions our group deems fundamental. I contact my opposite number in one of those, and we together make a new proposition which we believe could be a vehicle for discussions that could lead to a merger.
- I wish to write a business plan that is not a pile of dead tree gathering dust 6 weeks after it was presented to the board. I attempt to set out the aims of the business as fundamental propositions, and advertise this network to my colleagues, who suggest refinements. On this basis, we work up a description of the important policies and 'business rules' which define the enterprise. These remain accessible and editable, so that they can evolve along with the business.
- I am considering an open-source project. I set out the fundamental aims and characteristics of the tool I am proposing, and link them together. The system allows me to set myself up as a group. I sit back and wait for others to comment. Based on these comments, the propositions are refined, others added, relationships built with potential collaborators. At some point, we form a group, and the project gets under way. Throughout its life, the propositions are continually refined and added to. The propositions are a useful form of marketing, and save us a great deal of bother talking to people who want to know what/why/how.
Enough... Point 6 is almost recursive.
There is more discursive (and older) material here.
Thanks for reading, and please do comment.