The value of ambiguous speech
This was going to be a reply in a discussion between ChristianKl and MattG in another thread about conlangs, but their discussion seemed significant enough, independent of the original topic, to deserve a thread of its own. If I'm doing this correctly (this sentence is an after-the-fact update), you should be able to find the original comments that inspired this thread here: http://lesswrong.com/r/discussion/lw/n0h/linguistic_mechanisms_for_less_wrong_cognition/cxb2
Is a lack of ambiguity necessary for clear thinking? Are there times when it's better to be ambiguous? This came up in the context of how far a conlang should discourage ambiguity as a means of encouraging cognitive correctness in its users. It seems to me that something is being taken for granted here: that ambiguity is necessarily an impediment to clear thinking. And I certainly agree that it can be. But if detail or specificity is the opposite of ambiguity, then surely maximal detail or specificity is undesirable when the extra information isn't relevant, so a conlang would benefit from not requiring its users to minimize ambiguity.
Moving away from the concept of conlangs, this opens up some interesting (at least to me) questions. Exactly what does "ambiguity" mean? Is there, for each speech act, an optimal level of ambiguity, and how much can be gained by achieving it? Are there reasons why a certain, minimal degree of ambiguity might be desirable beyond avoiding irrelevant information?
Linguistic mechanisms for less wrong cognition
I'm working on a conlang (constructed language) and would like some input from the Less Wrong community. One of the goals is to investigate the old Sapir-Whorf hypothesis regarding language affecting cognition. Does anyone here have any ideas regarding linguistic mechanisms that would encourage more rational thinking, apart from those present in the oft-discussed conlangs E-Prime, Loglan, and its offshoot Lojban? Or perhaps mechanisms that are used in one of those conlangs, but buried too deeply for a person such as myself, with only superficial knowledge of them, to have recognized? Any input is welcomed, from other conlangs to crazy ideas.
Values at compile time
A putative new idea for AI control; index here.
This is a simple extension of the model-as-definition and the intelligence module ideas. General structure of these extensions: even an unfriendly AI, in the course of being unfriendly, will need to calculate certain estimates that would be of great positive value if we could but see them, shorn from the rest of the AI's infrastructure.
It's almost trivially simple. Have the AI construct a module that models humans and models human understanding (including natural language understanding). This is the kind of thing that any AI would want to do, whatever its goals were.
Then transfer that module (using corrigibility) into another AI, and use it as part of the definition of the new AI's motivation. The new AI will then use this module to follow instructions humans give it in natural language.
Too easy?...
This approach essentially solves the whole friendly AI problem, loading it onto the AI in a way that avoids the whole "defining goals (or meta-goals, or meta-meta-goals) in machine code" or the "grounding everything in code" problems. As such it is extremely seductive, and will sound better, and easier, than it likely is.
I expect this approach to fail. For it to have any chance of success, we need to be sure that both model-as-definition and the intelligence module idea are rigorously defined. Then we need a good understanding of the various ways the approach might fail before we can even begin to talk about how it might succeed.
The first issue that springs to mind is what happens when multiple definitions fit the AI's model of human intentions and understanding. We might want the AI to try to accomplish all the things it is asked to do, according to all the definitions. Therefore, similarly to this post, we want to phrase the instructions carefully so that a "bad instantiation" simply means the AI does something pointless, rather than something negative. E.g. "Give humans something nice" seems much safer than "give humans what they really want".
And then of course there are those orders where humans really don't understand what they themselves want...
I'd want a lot more issues like that discussed and solved, before I'd recommend using this approach to getting a safe FAI.
Talking to yourself: A useful thinking tool that seems understudied and underdiscussed
I have returned from a particularly fruitful Google search, with unexpected results.
My question was simple. I was pretty sure that talking to myself aloud makes me temporarily better at solving problems that need a lot of working memory. It is a thinking tool that I find to be of great value, and that I imagine would be of interest to anyone who'd like to optimize their problem solving. I just wanted to collect some evidence on that, make sure I'm not deluding myself, and possibly learn how to enhance the effect.
This might just be lousy Googling on my part, but the evidence is surprisingly unclear and disorganized. There are at least three separate Wiki pages for it. They don't link to each other. Instead they present the distinct models of three separate fields: autocommunication in communication studies, semiotics and other cultural studies; intrapersonal communication ("self-talk" redirects here) in anthropology and (older) psychology; and private speech in developmental psychology. The first is useless for my purpose, the second mentions "may increase concentration and retention" with no source, and the third confirms my suspicion that this behavior boosts memory, motivation and creativity, but it only talks about children.
Google Scholar yields lots of sports-related results for "self-talk" because it can apparently improve the performance of athletes and if there's something that obviously needs the optimization power of psychology departments, it is competitive sports. For "intrapersonal communication" it has papers indicating it helps in language acquisition and in dealing with social anxiety. Both are dwarfed by the results for "private speech", which again focus on children. There's very little on "autocommunication" and what is there has nothing to do with the functioning of individual minds.
So there's a bunch of converging pieces of evidence supporting the usefulness of this behavior, but they're from several separate fields that don't seem to have noticed each other very much. How often do you find that?
Let me quickly list a few ways that I find it plausible to imagine talking to yourself could enhance rational thought.
- It taps the phonological loop, a distinct part of working memory that might otherwise sit idle in non-auditory tasks. More memory is always better, right?
- Auditory information is retained more easily, so making thoughts auditory helps remember them later.
- It lets you commit to thoughts, and build upon them, in a way that is more powerful (and slower) than unspoken thought while less powerful (but quicker) than action. (I don't have a good online source for this one, but Inside Jokes should convince you, and has lots of new cognitive science to boot.)
- System 1 does seem to understand language, especially if it does not use complex grammar - so this might be a useful way for results of System 2 reasoning to be propagated. Compare affirmations. Anecdotally, whenever I'm starting a complex task, I find stating my intent out loud makes a huge difference in how well the various submodules of my mind cooperate.
- It lets separate parts of your mind communicate in a fairly natural fashion, slows each of them down to the speed of your tongue and makes them not interrupt each other so much. (This is being used as a psychotherapy method.) In effect, your mouth becomes a kind of talking stick in their discussion.
All told, if you're talking to yourself you should be more able to solve complex problems than somebody of your IQ who doesn't, although somebody of your IQ with a pen and a piece of paper should still outthink both of you.
Given all that, I'm surprised this doesn't appear to have been discussed on LessWrong. Honesty: Beyond Internal Truth comes close but goes past it. Again, this might be me failing to use a search engine, but I think this is worth more of our attention than it has gotten so far.
I'm now almost certain talking to myself is useful, and I already find hindsight bias trying to convince me I've always been so sure. But I wasn't - I was suspicious, because talking to yourself is an early warning sign of schizophrenia, and is frequent in dementia. But in those cases, it might simply be an autoregulatory response to failing working memory, not a pathogenetic element. After all, its memory-enhancing effect is what the developmental psychologists say the kids use it for. I do expect social stigma, which is why I avoid talking to myself when around uninvolved or unsympathetic people, but my solving of complex problems tends to happen away from those anyway, so that hasn't really been an issue.
So, what do you think? Useful?
The rational way to name rivers
I just read this in the Wikipedia article on the Mattaponi River and it really tickled me. If only all language were so rational!
- The Mat River and the Ta River join in Spotsylvania County to form the Matta River;
- The Po River and the Ni River join in Caroline County to form the Poni River;
- The Matta River and the Poni River join in Caroline County to form the Mattaponi River.
[Link] Learning New Languages Helps The Brain Grow
http://www.lunduniversity.lu.se/o.o.i.s?news_item=5928&id=24890
According to Johan Mårtensson from Lund University, learning a new language quickly makes your brain grow and increases its activity:
This finding came from scientists at Lund University, after examining young recruits with a talent for acquiring languages who were able to speak Arabic, Russian, or Dari fluently after just 13 months of learning, with no prior knowledge of the languages.
After analyzing the results, the scientists saw no difference in the brain structure of the control group. However, in the language group, certain parts of the brain had grown, including the hippocampus, responsible for learning new information, and three areas in the cerebral cortex.
And there is more:
One particular study from 2011 provided evidence that Alzheimer's was delayed 5 years for bilingual patients, compared to monolingual patients.
Framing a problem in a foreign language seems to reduce decision biases
The researchers aren't entirely sure why speaking in a less familiar tongue makes people more "rational", in the sense of not being affected by framing effects or loss aversion. But they think it may have to do with creating psychological distance, encouraging systematic rather than automatic thinking, and with reducing the emotional impact of decisions. This would certainly fit with past research that's shown the emotional impact of swear words, expressions of love and adverts is diminished when they're presented in a less familiar language.
Paywalled article (can someone with access throw a PDF up on dropbox or something?): http://pss.sagepub.com/content/early/2012/04/18/0956797611432178
Blog summary: http://bps-research-digest.blogspot.co.uk/2012/06/we-think-more-rationally-in-foreign.html
Only say 'rational' when you can't eliminate the word
Almost all instances of the word "true" can be eliminated from the sentences in which they appear by applying Tarski's formula. For example, if you say, "I believe the sky is blue, and that's true!" then this can be rephrased as the statement, "I believe the sky is blue, and the sky is blue." For every "The sentence 'X' is true" you can just say X and convey the same information about what you believe - just talk about the territory the map allegedly corresponds to, instead of talking about the map.
When can't you eliminate the word "true"? When you're generalizing over map-territory correspondences, e.g., "True theories are more likely to make correct experimental predictions." There's no way to take the word 'true' out of that sentence because it's talking about a feature of map-territory correspondences in general.
Similarly, you can eliminate the word 'rational' from almost any sentence in which it appears. "It's rational to believe the sky is blue", "It's true that the sky is blue", and "The sky is blue" all convey exactly the same information about what color you think the sky is - no more, no less.
When can't you eliminate the word "rational" from a sentence?
When you're generalizing over cognitive algorithms for producing map-territory correspondences (epistemic rationality) or steering the future where you want it to go (instrumental rationality). So while you can eliminate the word 'rational' from "It's rational to believe the sky is blue", you can't eliminate the concept 'rational' from the sentence "It's epistemically rational to increase belief in hypotheses that make successful experimental predictions." You can Taboo the word, of course, but then the sentence just becomes, "To increase map-territory correspondences, follow the cognitive algorithm of increasing belief in hypotheses that make successful experimental predictions." You can eliminate the word, but you can't eliminate the concept without changing the meaning of the sentence, because the primary subject of discussion is, in fact, general cognitive algorithms with the property of producing map-territory correspondences.
The word 'rational' should never be used on any occasion except when it is necessary, i.e., when we are discussing cognitive algorithms as algorithms.
If you want to talk about how to buy a great car by applying rationality, but you're primarily talking about the car rather than considering the question of which cognitive algorithms are best, then title your post Optimal Car-Buying, not Rational Car-Buying.
Thank you for observing all safety precautions.
[link] TEDxYale - Keith Chen - The Impact of Language on Economic Behavior
http://www.youtube.com/watch?v=CiobJhogNnA
The short version is that if the language you speak requires different verb forms for the present and the future, it causes you to think about the future differently. Depending on the magnitude of the effect, this has important implications for construal level theory. If your language allows you to think about the future in Near mode, it may allow you to think about it more rationally.
Previous discussion on one of Keith Chen's papers here.
Sapir-Whorf , Savings, and Discount Rates [Link]
The language you speak may affect how you approach your finances, according to a working paper by economist Keith Chen (seen via posts by Frances Woolley at the Worthwhile Canadian Initiative and Economy Lab). It appears that languages that require more explicit future tense are associated with lower savings. A few interesting quotes from a quick glance:
...[I]n the World Values Survey a language’s FTR [Future-Time Reference] is almost entirely uncorrelated with its speakers’ stated values towards savings (corr = -0.07). This suggests that the language effects I identify operate through a channel which is independent of conscious attitudes towards savings. [emphasis mine]
Something else that I wasn't previously aware of:
Loewenstein (1988) finds a temporal reference-point effect: people demand much more compensation to delay receiving a good by one year (from today to a year from now) than they are willing to pay to move up consumption of that same good (from a year from now to today).
Tidbit: “Semantic over-achievers”
[I'd put this in an open thread, but those don’t seem to happen these days, and while this is a quote it isn't a Rationality Quote.]
You know, one of the really weird things about us human beings […] is that we have somehow created for ourselves languages that are just a bit too flexible and expressive for our brains to handle. We have managed to build languages in which arbitrarily deep nesting of negation and quantification is possible, when we ourselves have major difficulties handling the semantics of anything beyond about depth 1 or 2. That is so weird. But that's how we are: semantic over-achievers, trying to use languages that are quite a bit beyond our intellectual powers.
— Geoffrey K. Pullum, Language Log, “Never fails: semantic over-achievers”, December 1, 2011
This seems like it might lead to something interesting to say about the design of minds and the usefulness of generalization/abstraction, or perhaps just a good sound bite.
Less Wrong and non-native English speakers
Hello Less Wrongers.
I'm still relatively new to the LW community, but I would like to share with you a few comments and ideas for making LW a better place for non-native English speakers.
There are two classes of people among non-native English speakers (of course, the boundaries are fuzzy):
1. People who, like me, are relatively fluent in English, but who don't have the same fluency as natives do.
2. People who don't speak much English at all.
The problems are of course different between 1. and 2., but I can see ways to improve things for both categories.
Moderately fluent English speakers
Being a member of 1., here are my feelings after a few months of lurking and then trying to participate a bit in LW, from my own French pov :
- LW is still quite US-centric in many ways. That's not much of a problem, at least for me, since I'm used to dealing with US citizens on IRC and other Internet places, but it's still something to keep in mind. The political questions in Yvain's census/survey are a clear example of that (please, Yvain, don't take this personally - overall you're doing a great and useful thing with that survey), but the issue is much more general.
- Writing an article on LW is not easy for non-native English speakers. I tried twice, and twice I got many remarks about my English skills. I don't take them badly, thanks for those who took the time to point to my mistakes and explain them so I can improve, but still, it feels like it's harder to participate.
I don't have any magical solution for 1., except for everyone to try to be more careful when stating things which are culture-dependent, but it's part of the more general problem of expecting short inferential distances.
For 2., I'm wondering if it would be possible to have some LW members volunteer to review articles written by non-native English speakers, and improve the English quality, before the article is published to LW in general. Do you think the idea is good overall? Would any of you volunteer to do that? If so, it would be nice to include a paragraph about it, or at least a link to a page explaining the process (how to submit an article to that team, ...), on the Welcome to Less Wrong page.
Non-English speakers
I don't think non-English speakers (or people with only basic English skills) can reasonably participate on LW itself, of course. But there are still ways to help them become stronger; I'm thinking of translation.
Right now I'm helping Adrien with the French translation of HP:MoR. There are also attempts to translate some parts of the Sequences into other languages. Mirroring the idea of having native English speakers help correct the English of non-natives, we non-natives can help by participating in the various translation efforts. But that gives rise to several questions:
- What are the legal issues around translating HP:MoR and the Sequences? Since Eliezer links to the translations of HP:MoR from his fanfiction.net page, I guess he approves of them. But what of the Sequences? It would be nice to have some official stance from him and the other people writing in the Sequences on how they feel about translations. I'm not a fan of the copyright system in general, but I would still consider it an utter lack of respect to translate someone's work against their will.
- How can the team be coordinated, and how can newcomers to Less Wrong learn that efforts are underway and need help? Once again, I think it would be nice to have some page (maybe on the wiki?) listing the ongoing efforts, who is participating in which, and how to contact them, and to link to it from the Welcome to Less Wrong page.
Any opinion on those suggestions? Any volunteers for joining some of the teams? Anyone from "the staff" who could answer about the legal issues, and about the possibility of including those pointers on the "Welcome to Less Wrong" page?
Language, intelligence, rationality
Rationality requires intelligence, and the kind of intelligence that we use (for communication, progress, FAI, etc.) runs on language.
It seems that the place we should start is optimizing language for intelligence and rationality. One of SIAI's proposals includes using Lojban to interface between humans and an FAI. And of course, I should hope the programming language used to build a FAI would be "rational". But it would seem to me that the human-generated priors, correct epistemic rationality, decision theory, metaethics, etc. all depend on using a language that sufficiently rigorously maps to our territory.
Are "naturally evolved" languages such as English sufficient, with EY-style taboos and neologisms? Or are they sick to the core?
Please forgive and point me towards previous discussion or sequences about this topic.
Convincing ET of our rationality
Allow me to propose a thought experiment. Suppose you, and you alone, were to make first contact with an alien species. Since your survival and the survival of the entire human race may depend on the extraterrestrials recognizing you as a member of a rational species, how would you convey your knowledge of mathematics, logic, and the scientific method to them using only your personal knowledge and whatever tools you might reasonably have on your person on an average day?
When I thought of this question, the two methods that immediately came to mind were the Pythagorean theorem and prime number sequences. For instance, I could draw a rough right triangle and label one side with three dots, the other with four, and the hypotenuse with five. However, I realized that these are fairly primitive maths. After all, the ancient Greeks knew of them, and yet had no concept of the scientific method. Would these likely be sufficient, and if not, what would be? Could you make a rough sketch of the first few atoms of the periodic table, or other such universal phenomena, in a way that would be generally recognizable? Could you convey a proof of rationality in a manner comprehensible even to aliens who cannot hear human vocalizations, or who see in a completely different part of the EM spectrum? Is it even in principle possible to express rationality without a common linguistic grounding?
In other words, what is the most rational thought you could convey without the benefit of common language, culture, psychology, or biology, and how would you do it?
Bonus point: Could you convey Bayes' theorem to said ET?
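For what it's worth, the "primitive maths" above already give you signals that are hard to mistake for natural noise. Here is a toy sketch - my own illustration, not anything from the literature on interstellar messaging - of rendering the first primes as groups of pulses, together with the dot-labelled 3-4-5 triangle restated as a counting claim an observer could verify:

```python
# Toy illustration: encode the first few primes as groups of pulses,
# separated by pauses. A sequence of prime-length groups is very
# unlikely to arise from any natural process.

def primes(n):
    """Return the first n primes by simple trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
        candidate += 1
    return found

def pulse_message(n):
    """Render each prime as a run of '*' pulses, groups separated by pauses."""
    return " / ".join("*" * p for p in primes(n))

print(pulse_message(5))   # ** / *** / ***** / ******* / ***********

# The dot-labelled right triangle, as a verifiable counting claim:
assert 3**2 + 4**2 == 5**2
```

The same pulse-group idea would carry over to any physical channel the aliens can sense - light flashes, taps, arranged pebbles - which is exactly why primes are the classic choice here.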
Starting point for calculating inferential distance?
One of the shiniest ideas I picked up from LW is inferential distance. I say "shiny" because the term, so far as I'm aware, has no clear mathematical or pragmatic definition and no substantive use in peer-reviewed science, but it was novel to me and appeared to make a lot of stuff about the world suddenly make sense. In my head it is marked as "super neat... but possibly a convenient falsehood". I ran across something yesterday that struck me as beautifully succinct and helpful toward resolving the epistemic status of the concept of "inferential distance".
[LINK] Creationism = High Carb? Or, The Devil Does Atkins
Based on the community's continuing interests in diet and religion, I'd like to point out this blog post by the coauthor of Protein Power, Michael Eades, wherein he suggests that biblical literalism tends toward a low-fat approach to nutrition over a low-carb philosophy, by essentially throwing out a bunch of evidence on the matter:
Why, you might ask, is this scientist so obdurate in the face of all the evidence that’s out there? Perhaps because much of the evidence isn’t in accord with his religious beliefs. I try never to mention a person’s religious faith, but when it impacts his scientific thinking it at least needs to be made known. Unless he’s changed his thinking recently, Dr. Eckel apparently is one of the few academic scientists who are literal interpreters of the bible. I assume this because Dr. Eckel serves on the technical advisory board of the Institute for Creation Research, an organization that believes that not only is the earth only a few thousand years old, but that the entire universe is only a few thousand years old. And they believe that man was basically hand formed by God on the sixth day of creation. And Dr. Eckel’s own writings on the subject appear to confirm his beliefs.
[.....]
Of all the evidence that exists, I think the evolutionary/natural selection data and the anthropological data are the most compelling because they provide the largest amount of evidence over the longest time. To Dr. Eckel, however, these data aren’t applicable because in his worldview prehistoric man didn’t exist and therefore wasn’t available to be molded by the forces of natural selection. I haven’t a clue as to what he thinks the fossil remains of early humans really were or where they came from. Perhaps he believes – as I once had it explained to me by a religious fundamentalist – these fossilized remains of dinosaurs, extinct ancient birds and mammals and prehistoric man were carefully buried by the devil to snare the unwary and the unbeliever. If this is the case, I guess I’ll have to consider myself snared.
In Dr. Eckel’s view, man was created post-agriculturally. In fact, in his view, there was never a pre-agricultural era, so how could man have failed to adapt to agriculture?
While there's a clear persuasive agenda here, and I won't attempt a full analysis of the situation, Eades also discusses biasing uses of language earlier in the article. In particular, beware applause lights and confirmation bias when evaluating it.
A writer describes gradually losing language
A writer's memoir of a brain tumor slowly destroying his ability to use language
When I came to read this passage "…floating and flailing weightlessly.…" I said the word "weightlessly" as "walterkly". It took quite a bit of effort to be fully sure that this was a mistake; and more effort and repeating to grasp what exactly this nonsense word was, to establish its sound – I had to construct it phoneme by phoneme – clearly enough to write it down. And it seems that the reading eye, darting backwards and forwards, was plucking letters from the whole vicinity, and mixing them up, having lost its usual ability to sort them.

What the whole thing emphasises, of course, is how what we call self-command is really a matter of having reliable automatic mechanisms, unthinking habits or instincts.
Proposal for a structured agreement tool
I hope this is a good place for this - comments/suggestions welcome - offers of collaboration more than welcome!
I envisage a kind of structured wiki, centred around the creation of propositions, which can be linked to allow communities of interest to rapidly come to fairly sophisticated levels of mutual understanding; the aim being to foster the development of strong groups with confidence in shared, conscious positions. This should allow significant confidence in collaboration.
Some aspects, in no particular order:
1. Propositions are made by users, and are editable by users, as in a wiki.
2. Each proposition could be templated, the inspiration for the template being the form adopted by Christopher Alexander et al. in 'A Pattern Language', namely:
   1. TITLE (referenced) (confidence level)
   2. Picture
   3. Context, including links to other propositions within whose sphere this one might operate
   4. STATEMENT OF PROBLEM / PURPOSE OF PROPOSITION
   5. Discussion
   6. CONCLUSION, couched in parametric/generic/process-based terms
   7. Links to other propositions for which this proposition is the context
3. Some mechanism for users to make public their degree of acceptance of each proposition.
4. Some mechanism for construction by individuals/groups of networks of propositions specific to particular users/groups (in other words, the links referred to in items 3 and 7 of the template above might be different for different users/groups). These networks can work like Pattern Languages that address particular fields / ethical approaches / political or philosophical positions / projects.
5. Some mechanism for assignment by users/groups of tiered structure to proposition networks (to allow for distinctions between fundamental, large-scale propositions and more detailed, peripheral ones).
6. Some mechanism for individual users to form associations with other users/established groups who subscribe to the same propositions.
7. Some mechanism for community voting/karma to promote individuals to stewardship of groups.
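To make the proposal above concrete, here is a minimal sketch of how the core data model might look. The names and fields are my own guesses at one possible implementation, not part of the proposal itself:

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    # Fields loosely following the Pattern Language-style template above
    title: str
    statement: str                 # STATEMENT OF PROBLEM / PURPOSE
    discussion: str = ""
    conclusion: str = ""           # couched in parametric/generic terms
    context: list = field(default_factory=list)   # propositions this operates within
    children: list = field(default_factory=list)  # propositions for which this is context

@dataclass
class Endorsement:
    # A user's public degree of acceptance of a proposition
    user: str
    proposition: Proposition
    degree: float                  # 0.0 = reject .. 1.0 = fully subscribe

def shared_support(endorsements, threshold=0.7):
    """Group users by the propositions they strongly endorse -- the raw
    material for suggesting groups of like-minded users to each other."""
    support = {}
    for e in endorsements:
        if e.degree >= threshold:
            support.setdefault(e.proposition.title, set()).add(e.user)
    return support

# Usage: two users converge on the same founding proposition
p = Proposition(title="Other people are real, just like me",
                statement="A founding block for a rational morality")
es = [Endorsement("alice", p, 0.9), Endorsement("bob", p, 0.8),
      Endorsement("carol", p, 0.2)]
print(shared_support(es))  # alice and bob are grouped together; carol is not
```

The matching and tiering mechanisms in the list above would then operate over these endorsement sets rather than over free text, which is what lets the system suggest groups without anyone writing a matching rule by hand.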
Enough of these for now. Some imagined interactions might be more helpful:
1. I stumble across the site (as I stumbled across LessWrong), and browse proposition titles. I come across one called 'Other people are real, just like me'. It contains some version of the argument for accepting that other humans are to be assumed to have roughly the same motivations, needs, etc. as me, and the suggestion that this is a useful founding block for a rational morality. I decide to subscribe, fairly strongly. I am offered a tailored selection of related propositions, as identified by the groups that have included this proposition in their networks (without identification of said groups, I rather think). I investigate these, and at some point the system decides that my developing profile is beginning to match that of some group or groups, and offers me the chance to look at their 'mission statement' pages. I decide to come back another day and look at other propositions in these groups' networks before going any further. I decline to have my profile made public, so that the groups don't contact me.
2. I come across some half-baked but interesting proposition. As a registered user, but not the originator of the proposition, I have some choices: I can comment on the proposition, hoping to engage in dialogue with the proposer that could be fruitful, or I can 'clone' (or 'fork') the proposition and seek to improve it myself. Ultimately, the interest of other users will determine the influence and relevance of the proposition.
3. I am a fundamentalist Christian (!). I come across the site, and am appalled at its secular, materialist tone. I make a new proposition: 'The Bible is revealed truth, in all its glory' (or some such twaddle. Of course, I omit to specify which edition, and don't even consider the option of a language other than English - but hey, what do you expect?). Within days, I have assembled a wonderfully active group of woolly-minded people happily discussing the capacity of Noah's Ark, or whatever. The point here is that the platform is just that - a platform. Human community is a Good Thing.
4. I am pushed upward by the group I am part of to some sort of moderator role. The system shows various other groups who agree more or less strongly with most of the propositions our group deems fundamental. I contact my opposite number in one of those, and together we make a new proposition which we believe could be a vehicle for discussions that could lead to a merger.
5. I wish to write a business plan that is not a pile of dead tree gathering dust six weeks after it was presented to the board. I attempt to set out the aims of the business as fundamental propositions, and advertise this network to my colleagues, who suggest refinements. On this basis, we work up a description of the important policies and 'business rules' which define the enterprise. These remain accessible and editable, so that they can evolve along with the business.
6. I am considering an open-source project. I set out the fundamental aims and characteristics of the tool I am proposing, and link them together. The system allows me to set myself up as a group. I sit back and wait for others to comment. Based on these comments, the propositions are refined, others are added, and relationships are built with potential collaborators. At some point we form a group, and the project gets under way. Throughout its life, the propositions are continually refined and added to. The propositions are a useful form of marketing, and save us a great deal of bother talking to people who want to know what/why/how.
Enough for now. Point 6 is almost recursive.
There is more discursive (and older) material, here.
Thanks for reading, and please do comment.