
Use and misuse of models: case study

8 Stuart_Armstrong 27 April 2017 02:36PM

Some time ago, I discovered a post comparing basic income and basic job ideas. It sought to analyse the costs of paying everyone a guaranteed income versus providing everyone with a basic job at that income. The author spelt out his assumptions and put together two models with a few components (including some whose values were drawn from various probability distributions). Then he ran a Monte Carlo simulation to get a distribution of costs for either policy.

Normally I should be very much in favour of this approach. It spells out the assumptions, it uses models, it decomposes the problem, it has stochastic uncertainty... Everything seems ideal. To top it off, the author concluded with a challenge aiming at improving reasoning around this subject:

How to Disagree: Write Some Code

This is a common theme in my writing. If you are reading my blog you are likely to be a coder. So shut the fuck up and write some fucking code. (Of course, once the code is written, please post it in the comments or on github.)

I've laid out my reasoning in clear, straightforward, and executable form. Here it is again. My conclusions are simply the logical result of my assumptions plus basic math - if I'm wrong, either Python is computing the wrong answer, I got really unlucky in all 32,768 simulation runs, or one of my assumptions is wrong.

My assumption being wrong is the most likely possibility. Luckily, this is a problem that is solvable via code.

And yet... I found something very unsatisfying. And it took me some time to figure out why. It's not that these models are helpful, or that they're misleading. It's that they're both simultaneously.

To explain, consider the result of the Monte Carlo simulations. Here are the outputs (I added the red lines; we'll get to them soon):

The author concluded from these outputs that a basic job was much more efficient - less costly - than a basic income (roughly 1 trillion versus 3.4 trillion US dollars). He changed a few assumptions to test whether the result held up:

For example, maybe I'm overestimating the work disincentive for Basic Income and grossly underestimating the administrative overhead of the Basic Job. Let's assume both of these are true. Then what?

The author then found similar results, with some slight shifting of the probability masses.

 

The problem: what really determined the result

So what's wrong with this approach? It turns out that most of the variables in the models have little explanatory power. For the top red line, I just multiplied the US population by the basic income. The curve is slightly above it, because it includes such things as administrative costs. The basic job situation was slightly more complicated, as it includes a disabled population that gets the basic income without working, and an estimate for the added value that the jobs would provide. So the bottom red line is (disabled population)x(basic income) + (unemployed population)x(basic income) - (unemployed population)x(median added value of jobs). The distribution is wider than for basic income, as the added value of the jobs is a stochastic variable.

But, anyway, the contributions of the other variables were very minor. So the reduced cost of basic jobs versus basic income is essentially a consequence of the trivial fact that it's more expensive to pay everyone an income than to pay only some people and then put them to work at something of non-zero value.
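To make that concrete, here is a minimal sketch of the two red-line approximations in Python (the language the original post used). The population splits, income level, and job-value distribution below are assumptions invented for illustration; they are not Chris Stucchio's parameters or code.

import numpy as np

# A minimal sketch of the two "red line" approximations above.
# All parameters are assumed, illustrative values - not the original post's numbers.
rng = np.random.default_rng(0)
runs = 32768

adult_population = 227e6  # assumed number of basic income recipients
disabled = 10e6           # assumed recipients exempt from working
job_takers = 60e6         # assumed number of people put on a basic job
basic_income = 15e3       # assumed yearly payment, in dollars

# Basic income: essentially pay everyone; administrative costs only nudge this upwards.
bi_cost = np.full(runs, adult_population * basic_income)

# Basic job: pay the disabled and the job takers, minus the (stochastic) value
# the jobs produce. The spread comes entirely from the job-value draw.
job_value = rng.lognormal(mean=np.log(2e3), sigma=0.5, size=runs)  # assumed distribution
bj_cost = (disabled + job_takers) * basic_income - job_takers * job_value

print(f"basic income ~ {bi_cost.mean() / 1e12:.2f} trillion per year")
print(f"basic job    ~ {bj_cost.mean() / 1e12:.2f} trillion per year")

Under these assumed numbers, essentially all of the gap between the two policies comes from the two multiplications above; the stochastic job-value term only widens the basic job curve.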

 

Trees and forests

So were the complicated extra variables and Monte Carlo runs for nothing? Not completely - they showed that the extra variables were indeed of little importance, and unlikely to change the results much. But nevertheless, the whole approach has one big, glaring flaw: it does not account for the extra value for individuals of having a basic income versus a basic job.

And the challenge - "write some fucking code" - obscures this. The forest of extra variables and the thousands of runs hides the fact that there is a fundamental assumption missing. And pointing this out is enough to change the result, without even needing to write code. Note this doesn't mean the result is wrong: some might even argue that people are better off with a job than with the income (builds pride in one's work, etc...). But that needs to be addressed.

So Chris Stucchio's careful work does show one result - most reasonable assumptions do not change the fact that a basic income is more expensive than a basic job. And to disagree with that, you do indeed need to write some fucking code. But the stronger result - that a basic job is better than a basic income - is not established by this post. A model can be well designed, thorough, filled with good uncertainties, and still miss the mark. You don't always have to enter into the weeds of the model's assumptions in order to criticise it.

Scenario analysis: a parody

4 Stuart_Armstrong 27 April 2017 03:21PM

Based on an idea from Nick Bostrom.

Suit A: "Welcome to our futurology meeting extravaganza, where we are going to do a complete analysis of the future using... drumroll... Scenario analysis!"

All: "All hail mighty scenario analysis!"

Suit A: "So, what are the big risks in the future?"

Suit B: "Global warming? I heard that's bad."

Suit A: "Indeed it is. What else do we have that's bad?"

Suit C: "How about obesity?"

Suit B: "I still think global warming is rather more important, it's getting hot and..."

Suit C: "Well, my grandfather was fat, and he suffered and died because..."

Suit A: "No need to argue, gentlewomen! We'll simply do a scenario analysis with both variables. So here we have the Sweaty Fat quadrant... Let me put it up on the board:"

Suit A: "Now let's give each scenario a thorough analysis!"

Suit D: "Isn't fat an insulant?"

Suit A: "That's the kind of incisive commentary we need!"

...

...

Much later:

Suit C: "So we have an ideal strategy: keep an eye on sweat pants purchase, and adjust our investment accordingly."

Suit D: "What about our social responsibilities?"

Suit A: "Good point."

Suit B: "Well, then we can track the size of suits and ice cream consumption, and adjust health spending and gas subsidies as a function of these."

Suit A: "Well, I think we've done a fabulous job today; really. No-one could have done a better job predicting than us. And it's all thanks to... Scenario analysis!"

All: "All hail!"

 

(very tangentially connected to the problem of models that are over-precise in narrow areas)

Meetup : Slatestar Codex Sao Paulo

1 leohmarruda 27 April 2017 12:48PM

Discussion article for the meetup : Slatestar Codex Sao Paulo

WHEN: 06 May 2017 02:00:00PM (-0300)

WHERE: Rua Antonio Carlos 452, São Paulo Brazil

Monthly rationalist meetup, newcomers are welcome! We will talk about Slatestar Codex, have various discussions, and play board games at the end.


Introducing the Instrumental Rationality Sequence

15 lifelonglearner 26 April 2017 09:53PM

What is this project?

I am going to be writing a new sequence of articles on instrumental rationality. The end goal is to have a compiled ebook of all the essays, so the articles themselves are intended to be chapters in the finalized book. There will also be pictures.


I intend for the majority of the articles to be backed by somewhat rigorous research, similar in quality to Planning 101 (with perhaps somewhat fewer citations). Broadly speaking, the plan is to introduce a topic, summarize the research on it, give some models and mechanisms, and finish off with some techniques to leverage the models.


The rest of the sequence will be interspersed with general essays on dealing with these concepts, similar to In Defense of the Obvious. Lastly, there will be a few experimental essays on my attempt to synthesize existing models into useful-but-likely-wrong models of my own, like Attractor Theory.


I will likely also recycle / cannibalize some of my older writings for this new project, but I obviously won’t post the repeated material here again as new stuff.



What topics will I cover?

Here is a broad overview of the three main topics I hope to go over:


(Ordering is not set.)


Overconfidence in Planning: I’ll be stealing stuff from Planning 101 and rewriting a bit for clarity, so not much will be changed. I’ll likely add more on the actual models of how overconfidence creeps into our plans.


Motivation: I’ll try to go over procrastination, akrasia, and behavioral economics (hyperbolic discounting, decision instability, precommitment, etc.)


Habituation: This will try to cover what habits are, conditioning, incentives, and ways to take the above areas and habituate them, i.e. actually putting instrumental rationality techniques into practice.


Other areas I may want to cover:

Assorted Object-Level Things: The Boring Advice Repository has a whole bunch of assorted ways to improve life that I think might be useful to reiterate in some fashion.


Aversions and Ugh Fields: I don’t know too much about these things from a domain knowledge perspective, but it’s my impression that being able to debug these sorts of internal sticky situations is a very powerful skill. If I were to write this section, I’d try to focus on Focusing and some assorted S1/S2 communication things. And maybe also epistemics.


Ultimately, the point here isn’t to offer polished rationality techniques people can immediately apply, but rather to give people an overview of the relevant fields with enough techniques that they get the hang of what it means to start making their own rationality.



Why am I doing this?

Niche Role: On LessWrong, there currently doesn’t appear to be a good in-depth series on instrumental rationality. Rationality: From AI to Zombies seems very strong for giving people a worldview that enables things like deeper analysis, but it leans very much into the epistemic side of things.


It’s my opinion that, aside from perhaps Nate Soares’s series on Replacing Guilt (which I would be somewhat hesitant to recommend to everyone), there is no in-depth repository/sequence that ties together these ideas of motivation, planning, procrastination, etc.

 

Granted, there have been many excellent posts here on several areas, but they've been fairly directed. Luke's stuff on beating procrastination, for example, is fantastic. I'm aiming for a broader overview that hits the current models and research on different things.


I think this means that creating this sequence could add a lot of value, especially to people trying to create their own techniques.


Open-Sourcing Rationality: It’s clear that work is being done on furthering rationality by groups like Leverage and CFAR. However, for various reasons, the work they do is not always available to the public. I’d like to give people who are interested but unable to work directly with these organizations something they can use to jump-start their own investigations.


I’d like this to become a similar Schelling Point that we could direct people to if they want to get started.


I don’t mean to imply that what I’ll produce is of the same caliber, but I do think it makes sense to have some sort of pipeline to get rationalists up to speed with the areas that (in my mind) tie into figuring out instrumental rationality. When I first began looking into this field, there was a lot of information scattered in many places.

 

I’d like to create something cohesive that people can point to when newcomers want to get started with instrumental rationality that similarly gives them a high level overview of the many tools at their disposal.


Revitalizing LessWrong: It’s my impression that independent essays on instrumental rationality have slowed over the years. (But also, as I mentioned above, this doesn’t mean stuff hasn’t happened. CFAR’s been hard at work iterating their own techniques, for example.) As LW 2.0 is being talked about, this seems like an opportune time to provide some new content and help with our reorientation towards LW becoming once again a discussion hub for rationality.



Where does LW fit in?

Crowd-sourcing Content: I fully expect that many other people will have fantastic ideas that they want to contribute. I think that’s a good idea. Given some basic things like formatting / roughly consistent writing style throughout, I think it’d be great if other potential writers see this post as an invitation to start thinking about things they’d like to write / research about instrumental rationality.


Feedback: I’ll be doing all this writing on a public Google Doc with posts that feature chapters once they’re done, so hopefully there’s ample room to improve and take in constructive criticism. Feedback on LW is often high-quality, and I expect that to definitely improve what I will be writing.


Other Help: I probably can’t comb through every single research paper out there, so if you see relevant information I missed or want to help with the research process, let me know! Likewise, if you think there are other cool ways you can contribute, feel free to either send me a PM or leave a comment below.



Why am I the best person to do this?

I’m probably not the best person to be doing this project, obviously.


But, as a student, I have a lot of time on my hands, and time appears to be a major limiting reactant in this whole process.

 

Additionally, I’ve been somewhat involved with CFAR, so I have some mental models about their flavor of instrumental rationality; I hope this translates into meaning I'm writing about stuff that isn't just a direct rehash of their workshop content.


Lastly, I’m very excited about this project, so you can expect me to put in about 10,000 words (~40 pages) before I take some minor breaks to reset. My short-term goals (for the next month) will be on note-taking and finding research for habits, specifically, and outlining more of the sequence.

Background Reading: The Real Hufflepuff Sequence Was The Posts We Made Along The Way

12 Raemon 26 April 2017 06:15PM

This is the fourth post of the Project Hufflepuff sequence. Previous posts:


Epistemic Status: Tries to get away with making nuanced points about social reality by using cute graphics of geometric objects. All models are wrong. Some models are useful. 

Traditionally, when nerds try to understand social systems and fix the obvious problems in them, they end up looking something like this:

Social dynamics is hard to understand with your system 2 (i.e. deliberative/logical) brain. There are a lot of subtle nuances going on, and typically, nerds tend to see the obvious stuff, maybe go one or two levels deeper than the obvious stuff, and miss that it's in fact 4+ levels deep and happening in realtime, faster than you can deliberate. Human brains are pretty good (most of the time) at responding to the nuances intuitively. But in the rationality community, we've self-selected for a lot of people who:

  1. Don't really trust things that they can't understand fully with their system 2 brain. 
  2. Tend not to be as naturally skilled at intuitive mainstream social styles. 
  3. Are trying to accomplish things that mainstream social interactions aren't designed to accomplish (i.e. thinking deeply and clearly on a regular basis).
This post is an overview of essays that rationalist-types have written over the past several years, which I think add up to a "secret sequence" exploring why social dynamics are hard, and why they are important to get right. This may be useful both for understanding some previous attempts by the rationality community to change social dynamics on purpose, and for current endeavors to improve things.

(Note: I occasionally have words in [brackets], where I think original jargon was pointing in a misleading direction and I think it's worth changing)

To start with, a word of caution:

Armchair sociology can be harmful - Ozy's post is pertinent - most essays below fall into the category of "armchair sociology", attempts by nerds to understand and articulate social dynamics that they aren't actually that good at. Several times when an outsider has looked in at rationalist attempts to understand human interaction, they've said "Oh my god, this is the blind leading the blind", and often that seemed to me like a fair assessment.

I think all the essays that follow are useful, and are pointing at something real. But taken individually, they're kinda like the blind men groping at the elephant, each coming away with the distinct impression that an elephant is like a snake, a tree, or a boulder, depending on which aspect they're looking at.

[Fake Edit: Ozy informs me that they were specifically warning against amateur sociology and not psychology. I think the idea still roughly applies]

Part 1. Cultural Assumptions of Trust

Guess [Infer Culture], Ask Culture, and Tell [Reveal] Culture (Malcolm Ocean)

 

Different people have different ways of articulating their needs and asking for help. Different ways of asking require different assumptions of trust. If people are bringing different expectations of trust into an interaction, they may feel that that trust is being violated, which can seem rude, passive aggressive or oppressive.

 

I'm listing this article, instead of numerous others about Ask/Guess/Tell, because I think: a) Malcolm does a good job of explaining how all the cultures work, and b) his presentation of Reveal culture is a good, clearer upgrade of Brienne's Tell culture, and I'm a bit sad it doesn't seem to have made it into the zeitgeist yet.

I also like the suggestion to call Guess Culture "Infer Culture" (implying a bit more about what skills the culture actually emphasizes).

Guess Culture Screens for Trying to Cooperate (Ben Hoffman)

Rationality folk (and, more generally, nerds) tend to prefer explicit communication over implicit, and generally see Guess culture as strictly inferior to Ask culture once you've learned to assert yourself.

But there is something Guess culture does which Ask culture doesn't, which is give you evidence of how much people understand you and are trying to cooperate. Guess culture filters for people who have either invested effort into understanding your culture overall, or who are good at inferring your own wants.

Sharp Culture and Soft Culture (Sam Rosen)

[WARNING: It turned out lots of people thought this meant something different than what I thought it meant. Some people thought it meant soft culture didn't involve giving people feedback or criticism at all. I don't think Soft/Sharp are totally natural clusters in the first place, and the distinction I'm interested in (as it applies to rationality culture) is how you give feedback.

(i.e. "Dude, your art sucks. It has no perspective." vs "oh, cool. Nice colors. For the next drawing, you might try incorporating perspective", as a simplified example)]

Somewhat orthogonal to Infer/Ask/Reveal culture is "Soft" vs "Sharp" culture. Sharp culture tends to have more biting humor, ribbing each other, and criticism. Soft culture tends to value kindness and social harmony more. Sam says that Sharp culture "values honesty more." Robby Bensinger counters in the comments: "My own experience is that sharp culture makes it more OK to be open about certain things (e.g., anger, disgust, power disparities, disagreements), but less OK to be open about other things (e.g., weakness, pain, fear, loneliness, things that are true but not funny or provocative or badass)."

Handshakes, Hi, and What's New: What's Going on With Small Talk?  (Ben Hoffman)

Small talk often sounds nonsensical to literally-minded people, but it serves a fairly important function: giving people a structured path to figure out how much time/sympathy/interest they want to give each other. And even when the answer is "not much", it still is, significantly, nonzero - you regard each other as persons, not faceless strangers.

Personhood [Social Interfaces?]  (Kevin Simler)

This essay gets a lot of mixed reactions, much of which I think has to do with its use of the word "Person." The essay is aimed at explaining how people end up treating each other as persons or nonpersons, without making any kind of judgement about it. This includes noting some things humans tend to do that you might consider horrible.

Like many grand theories, I think it overstates its case and ignores some places where the explanation breaks down, but I think it points at a useful concept which is summarized by this adorable graphic:

The essay uses the word "personhood". In the original context, this was useful: it gets at why cultures develop, and why it matters whether you're able to demonstrate reliability, trust, etc. It helps explain outgroups and xenophobia: outsiders do not share your social norms, so you can't reliably interact with them, and it's easier to think of them as non-people than to figure out how to have positive interactions.

But what I'm most interested in is "how can we use this to make it easier for groups with different norms to interact with each other"? And for that, I think using the word "personhood" makes it way more likely to veer into judging each other for having different preferences and communication styles.

What makes a person is... arbitrary, but not fully arbitrary. 

Rationalist culture tends to attract people who prefer a particular style of “social interface”, often favoring explicit communication and discussing ideas in extreme detail. There's a lot of value to those things, but they have some problems:

a) this social interface does NOT mesh well with the rest of world (this is a problem if you have any goals that involve the rest of the world)

b) this social interface does not uniformly mesh well with all the people who are interested in, and valuable to, the rationality community.

I don't actually think it's possible to develop a set of assumptions that fit everyone's needs. But I do think it's possible to develop better tools for navigating different social contexts. I think it may be possible to tweak sets-of-norms so that they mesh better together, or at least so that when they bump into each other, there's greater awareness of what's happening, and people's default response is "oh, we seem to have different preferences, let's figure out how to navigate that."

Maybe we can end up with something that looks kinda like this:

Against Being Against or For Tell Culture  (Brienne Yudkowsky)

Having said a bunch of things about different cultural interfaces, I think this post by Brienne is really important, and highlights the end goal of all of this.

"Cultures" are a crutch. They are there to help you get your bearings. They're better than nothing. But they are not a substitute for actually having the skills needed to navigate arbitrary social situations as they come up so you can achieve whatever it is you want to achieve. 

To master communication, you can't just be like, "I prefer Tell Culture, which is better than Guess Culture, so my disabilities in Guess Culture are therefore justified." Justified shmustified, you're still missing an arm.

My advice to you - my request of you, even - if you find yourself fueling these debates [about which culture is better], is to (for the love of god) move on. If you've already applied cognitive first aid, you've created an affordance for further advancement. Using even more tourniquets doesn't help.

Part 2. Game Theory, Recursion and Trust

(or, "Social dynamics are really complicated, you are not getting away with the things you think you are getting away with, stop trying to be clever, manipulative, act-utilitarian or naive-consequentialist without actually understanding what is going on")

Grokking Newcomb's Problem and Deserving Trust (Andrew Critch)

Critch argues that it is not just "morally wrong", but an intellectual mistake, to violate someone’s trust (even when you don’t expect any repercussions in the future).

When someone decides whether to trust you (say, giving you a huge opportunity), on the expectation that you’ll refrain from exploiting them, they’ve already run a low-grade simulation of you in their imagination. And the thing is that you don’t know whether you’re in a simulation or not when you make the decision whether to repay them. 

Some people argue “but I can tell that I’m a conscious being, and they aren’t a literal super-intelligent AI, they’re just a human. They can’t possibly be simulating me in this high fidelity. I must be real.” This is true. But their simulation of you is not based on your thoughts, it’s based on your actions. It’s really hard to fake. 

One way to think about it, not expounded on in the article: Yes, if you pause to think about it you can notice that you’re conscious and probably not being simulated in their imagination. But by the time you notice that, it’s too late. People build up models of each other all the time, based on very subtle cues such as how fast you respond to something. Conscious you knows that you’re conscious. But their decision of whether to trust you was based off the half-second it took for unconscious you to reply to questions like “Hey, do you think you can handle Project X while I’m away?”

The best way to convince people you’re trustworthy is to actually be trustworthy.

You May Not Believe In Guess[Infer] Culture But It Believes In You (Scott Alexander)

This is short enough to just include the whole thing:

Consider an "ask culture" where employees consider themselves totally allowed to say "no" without repercussions. The boss would prefer people work unpaid overtime so ey gets more work done without having to pay anything, so ey asks everyone. Most people say no, because they hate unpaid overtime. The only people who agree will be those who really love the company or their job - they end up looking really good. More and more workers realize the value of lying and agreeing to work unpaid overtime so the boss thinks they really love the company. Eventually, the few workers who continue refusing look really bad, like they're the only ones who aren't team players, and they grudgingly accept.

Only now the boss notices that the employees hate their jobs and hate the boss. The boss decides to only ask employees if they will work unpaid overtime when it's absolutely necessary. The ask culture has become a guess culture.

How this applies to friendship is left as an exercise for the reader.

The Social Substrate (Lahwran)

A fairly in-depth look into how common knowledge, signaling, Newcomb-like problems and recursive modeling of each other interact to produce "regular social interaction."

I think there's a lot of interesting stuff here - I'm not sure if it's exactly accurate but it points in directions that seem useful. But I actually think the most important takeaway is the warning at the beginning:

WARNING: An easy instinct, on learning these things, is to try to become more complicated yourself, to deal with the complicated territory. However, my primary conclusion is "simplify, simplify, simplify": try to make fewer decisions that depend on other people's state of mind. You can see more about why and how in the posts in the "Related" section, at the bottom.

When you're trying to make decisions about people, you're reading a lot of subtle cues off them to get a sense of how you feel about them. When you [generic person you, not necessarily you in particular] can tell someone is making complex decisions based on game theory and trying to model all of this explicitly, it a) often comes across as a bit off, and b) even if it doesn't, you still have to invest a lot of cognitive resources figuring out how they are modeling things and whether they are actually doing a good job or missing key insights or subtle cues. The result can be draining, and it can output a general response of "ugh, something about this feels untrustworthy."

Whereas when people are able to cache this knowledge down into a system-1 level, you're able to execute a simpler algorithm that looks more like "just try to be a good trustworthy person", that people can easily read off your facial expression, and which reduces overall cognitive burden.

System 1 and System 2 Morality  (Sophie Grouchy)

There’s some confusion over what “moral” means, because there’s two kinds of morality: 

 - System 1 morality is noticing-in-realtime when people need help, or when you’re being an asshole, and then doing something about it. 

 - System 2 morality is when you have a complex problem and a lot of time to think about it. 

System 1 moralists will pay back Parfit’s Hitchhiker because doing otherwise would be being a jerk. System 2 moralists invent Timeless [Functional?] decision theory. You want a lot of people with System 2 morality in the world, trying to fix complex problems. You want people with System 1 morality in your social circle.

The person who wrote this post eventually left the rationality community, in part due to frustration due to people constantly violating small boundaries that seemed pretty obvious (things in the vein of “if you’re going to be 2 hours late, text me so I don’t have to sit around waiting for you.”)

Final Remarks

I want to reiterate - all models are wrong. Some models are useful. The most important takeaway from this is not that any particular one of these perspectives is true, but that social dynamics has a lot of stuff going on that is more complicated than you're naively imagining, and that this stuff is important enough to put the time into getting right.

[Stub] Extortion and Pascal's wager

2 Stuart_Armstrong 26 April 2017 01:07PM

The premises of Pascal's wager are normally presented as abstract facts about the universe - there happens to (maybe) be a god, who happens to have set up the afterlife for the suffering of unbelievers.

But, assuming we ever manage to distinguish trade from extortion, this seems like a case of classic extortion. So if god follows a timeless decision theory - and what other kind of decision theory would it follow? - the correct answer would seem to be to reject the whole deal out of hand, even if you assume god exists.

Or, in other words, respond to a god that offers you heaven, but ignore one that threatens you with hell.

Actors and scribes, words and deeds

5 Benquo 26 April 2017 05:12AM

[Epistemic status: exploratory exercise in naming and concept-formation.]

Among the kinds of people are the Actors and the Scribes. Actors mainly relate to speech as action that has effects. Scribes mainly relate to speech as a structured arrangement of pointers that have meanings.

I previously described this as a distinction between promise-keeping "Quakers" and impulsive "Actors," but I think this missed a key distinction. There's "telling the truth," and then there's a more specific thing that's more obviously distinct from even Actors who are trying to make honest reports: keeping precisely accurate formal accounts. This leaves out some other types – I'm not exactly sure how it relates to engineers and diplomats, for instance – but I think I have the right names for these two things now.

Summary

Everyone agrees that words have meaning; they convey information from the speaker to the listener or reader. That's all they do. So when I used the phrase “words have meanings” to describe one side of a divide between people who use language to report facts, and people who use language to enact roles, was I strawmanning the other side?

I say no. Many common uses of language, including some perfectly legitimate ones, are not well-described by "words have meanings." For instance, people who try to use promises like magic spells to bind their future behavior don't seem to consider the possibility that others might treat their promises as a factual representation of what the future will be like.

Some uses of language do not simply describe objects or events in the world, but are enactive, designed to evoke particular feelings or cause particular actions. Even when speech can only be understood as a description of part of a model of the world, the context in which a sentence is uttered often implies an active intent, so if we only consider the direct meaning of the text, we will miss the most important thing about the sentence.

Some apparent uses of language’s denotative features may in fact be purely enactive. This is possible because humans initially learn language mimetically, and try to copy usage before understanding what it’s for. Primarily denotative language users are likely to assume that structural inconsistencies in speech are errors, when they’re often simply signs that the speech is primarily intended to be enactive.

Enactive language

Some uses of words are enactive: ways to build or reveal momentum. Others denote the position of things on your world-map.

In the denotative framing, words largely denote concepts that refer to specific classes of objects, events, or attributes in the world, and should be parsed as such. The meaning of a sentence is mainly decomposable into the meanings of its parts and their relations to each other. Words have distinct meanings that can be composed together in structures to communicate complex and nonobvious messages, not just uses and connotations.

In the enactive mode, the function of speech is to produce some action or disposition in your listener, who may be yourself. Ideas are primarily associative, reminding you of the perceptions with which the speech-act is associated. Other uses of language are structural. When you speak in this mode, it’s to describe models - relationships between concepts, which refer to classes of objects in the world.

When I wrote about admonitions as performance-enhancing speech, I gave the example of someone being encouraged by their workout buddies:

Recently, at the gym, I overheard some group of exercise buddies admonishing their buddy on some machine to keep going with each rep. My first thought was, “why are they tormenting their friend? Why can’t they just leave him alone? Exercise is hard enough without trying to parse social interactions at the same time.”

And then I realized - they’re doing it because, for them, it works. It's easier for them to do the workout if someone is telling them, “Keep going! Push it! One more!”

In the same post, I quoted Wittgenstein’s thought experiment of a language where words are only ever used as commands, with a corresponding action, never to refer to an object. Wittgenstein gives the example of a language used for nothing but military orders, and then elaborates on a hypothetical language used strictly for work orders. For instance, a foreman might use the utterance “Slab!” to direct a worker to fetch a slab of rock. I summarized the situation thus:

When I hear “slab”, my mind interprets this by imagining the object. A native speaker of Wittgenstein’s command language, when hearing the utterance “Slab!”, might - merely as the act of interpreting the word - feel a sense of readiness to go fetch a stone slab.

Wittgenstein’s listener might think of the slab itself, but only as a secondary operation in the process of executing the command. Likewise, I might, after thinking of the object, then infer that someone wants me to do something with the slab. But that requires an additional operation: modeling the speaker as an agent and using Gricean implicature to infer their intentions. The word has different cognitive content or implications for me, than for the speaker of Wittgenstein’s command language.

Military drills are also often about disintermediating between a command and action. Soldiers learn that when you receive an order, you just do the thing. This can lead to much more decisive and coordinated action in otherwise confusing situations – a familiar stimulus can lead to a regular response.

When someone gives you driving directions by telling you what you'll observe, and what to do once you make that observation, they're trying to encode a series of observation-action linkages in you.

This sort of linkage can happen to nonverbal animals too. Operant conditioning of animals gets around most animals' difficulty understanding spoken instructions, by associating a standardized reward indicator with the desired action. Often, if you want to train a comparatively complex action like pigeons playing pong, you'll need to train them one step at a time, gradually chaining the steps together, initially rewarding much simpler behaviors that will eventually compose into the desired complex behavior.

Crucially, the communication is never about the composition itself, just the components to be composed. Indeed, it’s not about anything, from the perspective of the animal being trained. This is similar to an old-fashioned army reliant on drill, in which, during battle, soldiers are told the next action they are to take, not told about overall structure of their strategy. They are told to, not told about.

Indeterminacy of translation

It’s conceivable that having what appears to be a language in common does not protect against such differences in interpretation. Quine also points to indeterminacy of translation and thus of explicable meaning with his "gavagai" example. As Wikipedia summarizes it:

Indeterminacy of reference refers to the interpretation of words or phrases in isolation, and Quine's thesis is that no unique interpretation is possible, because a 'radical interpreter' has no way of telling which of many possible meanings the speaker has in mind. Quine uses the example of the word "gavagai" uttered by a native speaker of the unknown language Arunta upon seeing a rabbit. A speaker of English could do what seems natural and translate this as "Lo, a rabbit." But other translations would be compatible with all the evidence he has: "Lo, food"; "Let's go hunting"; "There will be a storm tonight" (these natives may be superstitious); "Lo, a momentary rabbit-stage"; "Lo, an undetached rabbit-part." Some of these might become less likely – that is, become more unwieldy hypotheses – in the light of subsequent observation. Other translations can be ruled out only by querying the natives: An affirmative answer to "Is this the same gavagai as that earlier one?" rules out some possible translations. But these questions can only be asked once the linguist has mastered much of the natives' grammar and abstract vocabulary; that in turn can only be done on the basis of hypotheses derived from simpler, observation-connected bits of language; and those sentences, on their own, admit of multiple interpretations.

Everyone begins life as a tiny immigrant who does not know the local language, and has to make such inferences, or something like them. Thus, many of the difficulties in nailing down exactly what a word is doing in a foreign language have analogues in nailing down exactly what a word is doing for another speaker of one’s own language.

Mimesis, association, and structure

Not only do we all begin life as immigrants, but as immigrants with no native language to which we can analogize our adopted tongue. We learn language through mimesis. For small children, language is perhaps more like Wittgenstein's command language than my reference-language. It's a commonplace observation that children learn the utterance "No!" as an expression of will. In The Ways of Naysaying: No, Not, Nothing, and Nonbeing, Eva Brann provides a charming example:

Children acquire some words, some two-word phrases, and then no. […] They say excited no to everything and guilelessly contradict their naysaying in the action: "Do you want some of my jelly sandwich?" "No." Gets on my lap and takes it away from me. […] It is a documented observation that the particle no occurs very early in children's speech, sometimes in the second year, quite a while before sentences are negated by not.

First we learn language as an assertion of will, a way to command. Then, later, we learn how to use it to describe structural features of world-models. I strongly suspect that this involves some new, not entirely mimetic cognitive machinery kicking in, something qualitatively different: we start to think in terms of pointer-referent and concept-referent relations. In terms of logical structures, where "no" is not simply an assertion of negative affect, but inverts the meaning of whatever follows. Only after this do recursive clauses, conditionals, and negation of negation make any sense at all.

As long as we agree on something like rules of assembly for sentences, mimesis might mask a huge difference in how we think about things. It's instructive to look at how the current President of the United States uses language. He's talking to people who aren't bothering to track the structure of sentences. This makes him sound more "conversational" and, crucially, allows him to emphasize whichever words or phrases he wants, without burying them in a potentially hard-to-parse structure. As Katy Waldman of Slate says:

For some of us, Trump’s language is incendiary garbage. It’s not just that the ideas he wants to communicate are awful but that they come out as Saturnine gibberish or lewd smearing or racist gobbledygook. The man has never met a clause he couldn’t embellish forever and then promptly forget about. He uses adjectives as cudgels. You and I view his word casserole as not just incoherent but representative of the evil at his heart.

But it works. […]

Why? What’s the secret to Trump’s accidental brilliance? A few theories: simple component parts, weaponized unintelligibility, dark innuendo, and power signifiers.

[…] Trump tends to place the most viscerally resonant words at the end of his statements, allowing them to vibrate in our ears. For instance, unfurling his national security vision like a nativist pennant, Trump said:

But, Jimmy, the problem 
I mean, look, I’m for it.
But look, we have people coming into the country
that are looking to do tremendous harm….
Look what happened in Paris.
Look what happened in California,
with, you know, 14 people dead.
Other people are going to die,
they’re badly injured - we have a real problem.

Ironically, because Trump relies so heavily on footnotes, false starts, and flights of association, and because his digressions rarely hook back up with the main thought, the emotional terms take on added power. They become rays of clarity in an incoherent verbal miasma. Think about that: If Trump were a more traditionally talented orator, if he just made more sense, the surface meaning of his phrases would likely overshadow the buried connotations of each individual word. As is, to listen to Trump fit language together is to swim in an eddy of confusion punctuated by sharp stabs of dread. Which happens to be exactly the sensation he wants to evoke in order to make us nervous enough to vote for him.

Of course, Waldman is being condescending and wrong here. This is not word salad, it's high context communication. But high context communication isn't what you use when you are thinking you might persuade someone who doesn't already agree with you, it's just a more efficient exercise in flag-waving. The reason why we don't see a complex structure here is because Trump is not trying to communicate this sort of novel content that structural language is required for. He's just saying "what everyone was already thinking."

But while Waldman picked a poor example, she's not wholly wrong. In some cases, the President of the United States seems to be impressionistically alluding to arguments or events his audience has already heard of – but his effective rhetorical use of insulting epithets like “Little Marco,” “Lying Ted Cruz,” and “Crooked Hillary” fits very clearly into this schema. Instead of asking us to absorb facts about his opponents, incorporate them into coherent world-models, and then follow his argument for how we should judge them for their conduct, he used the simple expedient of putting a name next to a descriptor, repeatedly, to cause us to associate the connotations of those words. We weren't asked to think about anything. These were simply command words, designed to act directly on our feelings about the people he insulted.

We weren't asked to take his statements as factually accurate. It's enough that they're authentic.

This was persuasive to enough voters to make him President of the United States. This is not a straw man. This is real life. This is the world we live in.

You might object that the President of the United States is an unfair example, and that most people of any importance should be expected to be better and clearer thinkers than the leader of the free world. So, let's consider the case of some middling undergraduates taking an economics course.

Robin Hanson reports that he can get students to mimic an economic way of talking, but not to think like an economist:

After eighteen years of being a professor, I’ve graded many student essays. And while I usually try to teach a deep structure of concepts, what the median student actually learns seems to mostly be a set of low order correlations. They know what words to use, which words tend to go together, which combinations tend to have positive associations, and so on. But if you ask an exam question where the deep structure answer differs from answer you’d guess looking at low order correlations, most students usually give the wrong answer.
[...]
Let me call styles of talking (or music, etc.) that rely mostly on low order correlations “babbling”. Babbling isn’t meaningless, but to ignorant audiences it often appears to be based on a deeper understanding than is actually the case. When done well, babbling can be entertaining, comforting, titillating, or exciting. It just isn’t usually a good place to learn deep insight.

This is a straightforward description of thinking that is formal but nonconceptual. Hanson's students have learnt some words, and rules for moving the words around and putting them together, but at no point did they connect the rules for moving around words with regular properties of things that the words point to. The words are the things. When Hanson stops feeding them the right keywords, and asks them questions that require them to understand the underlying structural features of reality that economics is supposed to describe, they come up empty.

Of course, it seems unlikely that many people can't think structurally at all. It seems to me like nearly everyone can think structurally about physical objects in their immediate environment. But it seems like when talking about abstractions, or the future, some people shift to a mental mode where words don't carry the same weight of reference.

Even for those of us who habitually think structurally, it would be surprising if the mimetic component to language ever totally went away. Plenty of times, I've started saying something, only to stop midway through realizing that I'm just repeating something I heard, not reporting on a feature of my model of the world.

Tendencies towards mimesis are hard to resist, and are part of why I think it's so important to push back against falsehoods in any spaces that are meant to be accreting truth. Why even casual, accidental errors should be promptly corrected. Why I need an epistemic environment that's not constantly being polluted by adversarial processes.

And we can’t begin to figure out how to do this until it becomes common knowledge that not everyone is doing the same thing with words, that modeling the world is a legitimate and useful thing to do with them, and that not all communication is designed to be friendly to the people who assume it’s composed of words with meanings.

(Cross-posted on my personal blog.)

Stupid Questions May 2017

7 gilch 25 April 2017 08:28PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

Defining the normal computer control problem

3 whpearson 25 April 2017 11:49PM

There has been a lot of focus on controlling superintelligent artificial intelligence; however, we currently can't even control our un-agenty computers without resorting to formatting and other large-scale interventions.

Solving the normal computer control problem might help us solve the superintelligence control problem, or allow us to work towards safe intelligence augmentation.


I Updated the List of Rationalist Blogs on the Wiki

20 deluks917 25 April 2017 10:26AM

I recently updated the list of rationalist community blogs. The new page is here: https://wiki.lesswrong.com/wiki/List_of_Blogs

Improvements:

-Tons of (active) blogs have been added

-All dead links have been removed

-Blogs which are currently inactive but somewhat likely to be revived have been moved to an inactive section. I included the date of their last post. 

-Blogs which are officially closed or have not been updated in many years are now all in the "Gone but not forgotten" section

Downsides:

-Categorizing the blogs I added was hard; it's unclear how well I did. By some standards, most rationalist blogs should be in "general rationality"

-The blog descriptions could be improved (both for the blog-listings I added and the pre-existing listings)

-I don't know the names of the authors of several blogs I added.

I am posting this here because I think the article is of general interest to rationalists. In addition, the page could use some more polish and attention. I also think it might be interesting to think about improving the LessWrong wiki. Several pages could use an update. However, this update took a considerable amount of time, so I understand why many wiki pages are not up to date. How can we make it easier and more rewarding to work on the wiki?
