Writing Collaboratively

8 richard_reitz 18 June 2016 07:47PM

This is a summary of the customs for collaborative writing that the team behind the fanfiction In Fire Forged arrived at, after a fair amount of time and effort spent figuring things out. The purpose of this piece is to share our results, saving anyone who wants to write collaboratively the cost of experimenting for themselves. Obviously, different writing projects will accomplish different things with different people, and will therefore be best served by different practices. Take this as a first approximation, to be revised by experience.

Google Docs

We tried a bunch of platforms for collaboration, and found Google Docs to best fit our needs.

  1. Create a Google Doc. For multi-installment projects, consider creating a folder and making one doc per installment.
  2. Enable editing. Collaborators are not very helpful if they can't provide feedback.

    Google Docs allows authors to restrict the changes other people can make to "suggestions" and "comments" by switching to "suggesting" mode.



    In general, the author restricts collaborator permissions to comments and suggestions. How to control these permissions is described in the "enable editing" link above.
  3. Distribute link to collaborators.

Once the collaborators have the link, they read through the doc, leaving whatever comments and suggestions occur to them. Google Docs does a good job of facilitating discussion of this feedback; take advantage of it!
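
If you end up setting up a lot of these docs, the sharing step can also be scripted. Here's a minimal sketch, assuming you already have Google Drive API credentials; the "commenter" role is what limits a collaborator to comments and suggestions, and the doc ID and email address below are placeholders.

```python
# Sketch only: share a Google Doc so a collaborator can comment and suggest,
# but not edit directly. Assumes OAuth credentials with a Drive scope.
from googleapiclient.discovery import build

def share_as_commenter(creds, doc_id, email):
    drive = build("drive", "v3", credentials=creds)
    drive.permissions().create(
        fileId=doc_id,
        body={"type": "user", "role": "commenter", "emailAddress": email},
        sendNotificationEmail=True,  # collaborator gets the link by email
    ).execute()

# share_as_commenter(creds, "DOC_ID", "collaborator@example.com")
```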

Micro and Macro

We found it useful to distinguish between what we were saying and how we were saying it. We termed the former "macro" and the latter "micro". This allows authors to say things like "I'm mostly looking for micro suggestions, although I'd be interested in any glaring macro errors (anything untrue or major omissions)." This succinctly communicates that collaborators should mostly restrict themselves to suggesting changes to how the author is communicating, which usually consists of small edits concerning things like technical issues (typos, omitted words, grammar) and smoother communication (word choice, resolving ambiguities, sectioning).

This contrasts with macro suggestions, which (in nonfiction) include things like making sure factual claims are true, checking that all relevant information is included, and offering the perspective of a different field. (In fiction, macro suggestions would include things such as plot, characterization, chapter structure, and consistency of the universe.)

In general, you want to address macro issues before micro issues, since micro improvements are lost to changes on the macro level.

Team Makeup

On the macro level, you want as many people as can bring novel, relevant viewpoints to the writing. Essentially, you're looking to exploit Linus's Law: with enough collaborators, every improvement that could be made will be noticed naturally by at least one of them.

I favor erring on the side of larger teams for a few reasons. The coordination cost of adding a member isn't very high, and improving things on the micro level really benefits from having lots of eyeballs scrutinizing the text: it's entirely plausible that the tenth reader of some passage notices a way to reword it that the first nine missed.

My favorite reason for having more collaborators, however, is that it opens up the possibility of partial editing. One collaborator flags something they notice could be improved, even if they can't think of how. Then, another collaborator, who may not have noticed that something sounded awkward, may figure out how to rewrite it better. (It may sound implausible that someone who can figure out the improvement wouldn't notice something improvable in the first place, but it happened reasonably often.)

Spreading the micro work over a lot of people also helps avoid illusions of transparency. If you only have one or two people revising, it's easy for them to spend so much time with the text that they miss statements that are ambiguous or don't mean what they think they mean, since they're so familiar with what they mean to mean. Spreading out the editing keeps everyone from becoming overfamiliar with the work. It also allows for holding editors in reserve, who give the work one last pass and read it as naively as the target audience will.

Collaborator Benefits

Helping someone else write their piece is the single most effective technique I've used to powerlevel my writing. As SICP puts it:

The ability to visualize the consequences of the actions under consideration is crucial to becoming an expert programmer, just as it is in any synthetic, creative activity. In becoming an expert photographer, for example, one must learn how to look at a scene and know how dark each region will appear on a print for each possible choice of exposure and development conditions. Only then can one reason backward, planning framing, lighting, exposure, and development to obtain the desired effects. So it is with programming...

...and so it is with writing. There's an awkward period when you're first starting to write, where you've read enough that you have some idea of what better and worse writing looks like, but you haven't written enough to visualize the consequences of your writing. The author of In Fire Forged got there by writing and scrapping 140k words. I got there with a fraction of the effort by helping out on a team that allowed me to see the consequences of various actions without needing to write entire pieces. I also got to see and analyze and discuss the feedback from the other collaborators, which taught me things about better writing I didn't already know. Plus, gaining this experience had positive externalities, since the suggestions I made wound up in a final product, instead of going into the trash.

Collaborating also helps you learn about the topic of the piece more effectively than just reading it, via levels of processing. Merely reading about something is fairly shallow, leading to nondurable memory, whereas collaborating on something forces deeper processing, and thus more durable understanding. You can force yourself to process something on a deeper level as you read it to get the same effect, but collaborating, again, produces positive externalities.

(You should be processing deeply anyway. One collaborator on this piece, for instance, puts comments in the margins of pieces she reads. That said, collaborating has positive externalities.)

It's also fun and social; writing collaboratively has led me to meet some of my favorite people and has strengthened many personal relationships. As such, I suggest that, should you come across a piece you take a liking to but can see how to improve, you offer to collaborate with the author. Worst case, they're flattered and turn you down politely.

Two kinds of Expectations, *one* of which is helpful for rational thinking

2 malcolmocean 20 June 2016 04:04PM

Expectation is often used to refer to two totally distinct things: entitlement and anticipation. My basic opinion is that entitlement is a rather counterproductive mental stance to have, while anticipations are really helpful for improving your model of the world.

Here are some quick examples to whet your appetite…

1. Consider a parent who says to their teenager: “I expect you to be home by midnight.” The parent may or may not anticipate the teen being home on time (even after this remark). Instead, they’re staking out a right to be annoyed if the teen isn’t back on time.

Contrast this with someone telling the person they’re meeting for lunch “I expect I’ll be there by 12:10” as a way to let them know that they’re running a little late, so that the recipient of the message knows not to worry that maybe they’re not in the correct meeting spot, or that the other person has forgotten.

2. A slightly more involved example: I have a particular kind of chocolate bar that I buy every week at the grocery store. Or at least I used to, until a few weeks ago when they stopped stocking it. They still stock the Dark version, but not the Extra Dark version I’ve been buying for 3 years. So the last few weeks I’ve been disappointed when I go to look. (Eventually I’ll conclude that it’s gone forever, but for now I remain hopeful.)

There’s a temptation to feel indignant at the absence of this chocolate bar. I had an expectation that it would be there, and it wasn’t! How dare they not stock it? I’m a loyal customer, who shops there every week, and who even tells others about their points card program! I deserve to have my favorite chocolate bar in stock!

…says this voice. This is the voice of entitlement.

The entitlement also wants to not just politely ask a shelf stocker if they have any out back, but to do things like walk up to the customer service desk and demand that they give me a discount on the Dark ones because they’ve been out of the Extra Dark ones for three weeks now. To make a fuss.

Entitlement is the feeling that you have a right to something. That you deserve it. That it’s owed to you.

(Relevant aside: the word “ought” used to be a synonym for “owed”, i.e. the past tense of “to owe”.)

A brief history of entitlement

That’s not what the term “entitlement” used to mean, though. It used to refer not to the feeling but simply to the fact: that you were owed something. Everyone deserved different things, according to their titles: kings and queens an enormous amount, lords and landowners a lesser though still large amount, and so on down the line. In some cases, people at the bottom of the hierarchy may in fact have been considered deserving of scarcity and suffering.

What changed?

Western culture shifted from exalting rule by one (monarchy) or few (oligarchy) or the rich (plutocracy) to being broadly more democratic, meritocratic, and then ultimately relatively egalitarian, in terms of ideals. What this means is that in modern times, it may be the case that being rich or white does in fact grant someone certain privileges, in the sense that they may in fact be less likely to get arrested, or more likely to get promoted…

…but broadly speaking, mainstream culture will no longer agree that they deserve these privileges. They are no longer entitled to them.

More broadly, nobody is really considered to be entitled to much of anything anymore—oh, except for a bunch of very basic, universal rights. The U.S. Bill of Rights lays out the rights the state grants Americans. The U.N. Declaration of Human Rights lays out the rights that U.N. countries grant everyone. In theory, anyway.

And since we no longer think that people deserve special privileges, anyone who acts like they do is called “entitled”. But now we’re talking about the feeling of entitlement, not actually having the right to some benefit.

Also, note that this isn’t just about class anymore: given the meritocratic context and a few other factors, people sometimes find themselves feeling like they deserve something because they worked hard for it. This isn’t a totally unreasonable way to feel, but the world doesn’t automagically reward people who work hard.

This principle is at play when older generations criticize millennials as being entitled, and then the millennials retort “well you said that if we just got a degree, then we’d have decent careers.” What the millennials are saying is that they had an expectation that they’d have prosperity, if they did a thing.

But are they actually feeling entitled to that thing? Are they relating to it in an entitled way? It’s hard to say, and probably depends on the individual. Let’s take an easier example.

Meet James Altucher

In his article How To Break All The Rules And Get Everything You Want, Altucher describes a multipart story in which he breaks some rules to get what he wants.

We arrived at the “Boy Meets Girl” fashion show and the woman with the clipboard said, “You are not on the list.”

WHAT!?

I had been telling my daughter Mollie all week we would go to this show.

Mollie was very excited.

“Don’t worry,” Nathan had told me earlier in the day, “you will be on the list.” I am extremely grateful he got us invited to the show.

Two more times in the article, James has that “WHAT!?” reaction.

This reaction seems to me to be practically the epitome of an entitlement response: outrage. Particularly when he’s like: WHAT!? You let us in even though we weren’t on the list, but we’re at the back!? Note that the feeling of entitlement is usually not so obvious, even internally.

But note also that it’s possible to act entitled, even if you don’t feel entitled. I posit that we might call this something like “entitled to ask” or “entitled to try”.

To illustrate this, let’s take a response to James’ article, When “Life Hacking” Is Really White Privilege, in which Jen Dziura writes:

I have often had encounters with men who take something that’s not theirs, and when they encounter no outright resistance — there’s no loud talking, no playground-style tussle — they assume everything is fine.

It is not fine.

Sometimes, you take the best desk for yourself in the new office. Sometimes, you take credit for someone else’s work or ideas. Sometimes, you’re on a team, and someone from the client company assumes that you — the tallest, whitest member — are in charge, and you do not correct them. Sometimes, it’s just that someone baked cookies to congratulate their team on a job well-done, and you’re not on that team but you wanted a cookie, and no one seemed to mind.

I have been the cookie guy. Probably with literal cookies, although probably a different situation—not that I would know, since I was just paying attention to the cookies.

And if someone had refused me the cookies, I wouldn’t have been like “WHAT!?”. I would have said something polite and moved on. But if someone had suggested I was rude for asking, I might have been a bit indignant: “I was just asking…”

But in order to be “just asking”, I also had to be assuming that the person would feel comfortable saying no if my request didn’t make sense. Assuming that giving me a “no” isn’t a costly action. Which is often not a safe assumption, for a myriad of reasons that are outside the scope of this post. But the effect is that even without having a subjective feeling of entitlement to anything in particular, I can be relating to a situation in an entitled way.

But I’m a Nice Guy!

There’s a concept that’s been around for awhile, known as the Nice Guy phenomenon. The basic notion is of a person (canonically male, though not always) becoming frustrated when their attempts to transform a platonic friendship into a romantic and/or sexual relationship fall through, leading to rejection. Feminist circles have sometimes criticized these men as objectifying women, but as Dan Fincke points out, in many cases the men are trying to relate to them deeply.

Still, Dan writes:

They want to earn love with their moral virtues, with their genuine friendship, and with their woman-honoring priorities that put knowing women as people over trying to just bed them.

Uh oh. Trying to earn love is a recipe for the meritocratic flavour of entitlement. Dan again, a little further down:

So at this point we come to the actual entitlement issue. It’s not that they feel entitled to sex—it’s much deeper and less superficial than that and these men deserve the respect of having that acknowledged. What they really feel entitled to is love.

At any rate, there usually is a sense of entitlement here, and it makes for unpleasant interactions when the guy finally shares his feelings for his friend. He has his hopes all up and expects her to reciprocate. (Here we probably have both kinds of expectation going on—entitlement and anticipation.)

Miri at Brute Reason clarifies that the problem isn’t feeling sad when you’re rejected. That’s natural and can make lots of sense. Same with:

  • Wishing the person would change their mind
  • Thinking that you would’ve made a good partner for this person
  • Thinking that you would’ve made a better partner for this person than whoever they’re interested in
  • Feeling embarrassed that you were rejected
  • Feeling like you don’t want to see them or talk to them anymore

Miri distinguishes these from the feeling “I deserve sex/romance from this person because I was their friend,” and goes on to name some actions which follow from this feeling of entitlement. These include:

  • Pressuring the person to change their mind (which isn’t the same as saying “Well, let me know if you ever change your mind” and then stepping back)
  • Guilt-tripping them for rejecting you (which isn’t the same as being honest about your feelings about the rejection)
  • Becoming cruel to the person to get back at them (i.e. “Whatever, I never liked you anyway, you [gendered slur]”)

I think that what Miri has highlighted here is a really solid application of the two channels model: the idea that you can have multiple interpretations of something at the same time, that can be alike in valence (in this case, both negative/hurting) but different in structure and implication—and potentially leading to different actions.

The difference in action can be stark—”Whatever, I never liked you anyway” vs “I still think you’re cool, even if I feel pretty burned.”—or quite subtle… what, you might ask, is the difference between “guilt-tripping someone for rejecting you”, and “being honest about your feelings about the rejection”?

Without the two channels model, we might say that the former is when you’re entitled, and the latter is when you’re not. But the two channels model suggests that it’s more like, guilt-tripping is what happens when your entitlements own you, instead of you owning them.

So you feel entitled? Okay, accept that. Not in the sense of endorsing it, but in the sense of accepting reality as it is. The reality is that you feel entitled. One way to do this while staying outside of the frame is to say something like “so it seems that a bunch of what I’m feeling right now is entitlement”. Either to yourself, or if it makes sense, to share that with the person you’re talking with.

If the guy in this situation talks honestly about his feelings of rejection and loneliness, that could be experienced as guilt-tripping or as making the person take care of him:

I feel really rejected now. It’s so frustrating, like, I’m so unlovable. Forever alone, right here.

But maybe if he’s able to get outside of just being the feelings, and talk about the overarching structure of what’s going on:

“It seems I’m feeling both a sense of rejection, but also like I’ve been setting myself up to feel entitled to your love and affection… and I guess that doesn’t make sense. I’m feeling frustrated and lonely, and at the same time… wanting to not relate to you from there.”

If I try, I can imagine that that phrasing might sound over-the-top to some people, but it’s actually how me and many of my friends talk… and it allows us to navigate tense situations while remaining on the “same side”. We stay on the same side by putting the feelings in the center where they can be talked about, and being clear that the relating doesn’t need to be run by those feelings. I go into more detail about the value of this kind of language here.

I realize that it might not be possible to talk at this level in a given relationship. First of all, it requires the capacity to think thoughts like that when you’re in an emotional state (hint: practice when you’re calm!) Even more challengingly, it requires a certain kind of trust and shared assumptions in the relationship, which may not be available.

With those shared assumptions, much less verbose expressions can still have that same page feeling. Without them, even the most clear articulation can nonetheless be experienced as an attempt at manipulation.

Without a good segue, we now turn to the final section: expectations, entitlements, anticipations, and desire.

Anticipations and Desire

When I was maybe 15, a friend and I had a principle we used for navigating relationships with our romantic interests. We would go into a situation with “no intentions and no expectations”. One framing of this is that it was to protect against disappointment, but I think it could also be understood as a defense against the whole entitlement debacle: if I had an “expectation” that me and my crush were going to kiss, but she didn’t want to, well… then what? I wouldn’t kiss her without her consent, but… was it okay to even expect that, if I didn’t know what she wanted?

And so we come back to the breakdown I introduced at the start: expectations as including both anticipations and entitlements. I seriously salute my 15-year-old self for managing to avoid the entitlement-related issues (well, at least in the situations when I remembered to use this principle).

The problem was, in turning off expectations, I had shut off not only entitlements but anticipations as well. And anticipations are important!

First of all, denotationally: from an epistemic perspective, you want to be able to predict what’s going to happen. Not just so that you could remember to bring condoms, but also to have a sense of being prepared psychologically for what sort of situation you might be navigating. Projecting what will happen in the future is important.

Then there’s the second, more connotational part of the term “anticipation”, which is the emotional quality: the pleasure of considering a longed-for event. The book Rekindling Desire contains quotations like:

Anticipation is the central ingredient in sexual desire.
[…] sex has a major cognitive component — the most important element for desire is positive anticipation.

What this means is that if you try to avoid having anticipations, you can end up with a reduced sense of desire. Hormones and curiosity being what they were, this wasn’t an issue for my teenage self on a physical level, but even now I notice a subtle effect that I think has the same roots…

I’ve sometimes found it hard to tap into my sense of what it is that I want in relationships or in physically intimate contexts. I know what feels good in the moment—pleasure gradients aren’t hard—but it’s been challenging to cultivate a sense of taste for the kinds of intimacy I want, and I think that a large part of that is the resistance I have for letting myself cultivate desire through anticipation.

An article published just a few days ago (but after I’d drafted this whole post) touches on how this may be a common phenomenon:

“I want more men to get to know their own bodies and desires. […]

“Feminist men often fall into the trap of thinking that the opposite of male sexual entitlement–the opposite of men using other people’s bodies to get themselves off without any concern for that person’s consent or desire–is to focus entirely on their partner’s pleasure and deny any preferences of their own. No. The opposite of male sexual entitlement is two (or more) people working together–playing together, rather–to create the experiences they want.”

So one conclusion I’m making as part of breaking down expectations into entitlements and anticipations is that I can start doing more anticipating of things, as long as I don’t let myself get trapped in having entitlements as well. As long as I don’t hinge my sense of self-worth on having my expectations fulfilled and on never experiencing rejection. As long as I can remember that having no preferences unsatisfied by way of having no preferences isn’t actually satisfying.

“The gap between vision and current reality is also a source of energy. If there were no gap, there would be no need for any action to move towards the vision. We call this gap creative tension.”
— Peter Senge, The Fifth Discipline

The Two Kinds of Expectations + Rationality

I’ve spent a lot of time talking about how this affects interpersonal dynamics, but I want to briefly note that this distinction matters a lot for thinking quality as well:

Having entitlement-based relationships to people or systems is kind of like writing the bottom line before you know what the argument will be. It’s assuming you know what makes sense or know what will work, even though you don’t have all of the information, and then precommitting to be reluctant to change your mind.

Having anticipations, by contrast, is fundamental to making your beliefs pay rent: for your beliefs to be entangled with the real world, they must suggest which events to anticipate—and importantly, which events not to anticipate.

There’s a question, too, of how expectations show up when trying to coordinate a team (or vague network of people with a shared goal). I think a sports analogy is actually valuable here: if we’re on a soccer team, it’s critical that I can expect that if I pass you the ball in a certain way, you’ll be able to kick it directly at the goal. I need to know this so that I know when to do it, because it’s an effective technique when performed well. But if that expectation is about entitlement rather than anticipation, then it will make me less focused on whether my pass made sense in this situation and more focused on whether I can blame you for missing the shot.

My money’s on the team with anticipation, not the one with entitlement.

This article is crossposted from malcolmocean.com.

Why startup founders have mood swings (and why they may have uses)

47 AnnaSalamon 09 December 2015 06:59PM

(This post was collaboratively written together with Duncan Sabien.)

 

Startup founders stereotypically experience some pretty serious mood swings.  One day, their product seems destined to be bigger than Google, and the next, it’s a mess of incoherent, unrealistic nonsense that no one in their right mind would ever pay a dime for.  Many of them spend half of their time full of drive and enthusiasm, and the other half crippled by self-doubt, despair, and guilt.  Often this rollercoaster ride goes on for years before the company either finds its feet or goes under.

Well, sure, you might say.  Running a startup is stressful.  Stress comes with mood swings.  

 

But that’s not really an explanation—it’s like saying stuff falls when you let it go.  There’s something about the “launching a startup” situation that induces these kinds of mood swings in many people, including plenty who would otherwise be entirely stable.

 

continue reading »

A few misconceptions surrounding Roko's basilisk

39 RobbBB 05 October 2015 09:23PM

There's a new LWW page on the Roko's basilisk thought experiment, discussing both Roko's original post and the fallout that came out of Eliezer Yudkowsky banning the topic on Less Wrong discussion threads. The wiki page, I hope, will reduce how much people have to rely on speculation or reconstruction to make sense of the arguments.

While I'm on this topic, I want to highlight points that I see omitted or misunderstood in some online discussions of Roko's basilisk. The first point that people writing about Roko's post often neglect is:

 

  • Roko's arguments were originally posted to Less Wrong, but they weren't generally accepted by other Less Wrong users.

Less Wrong is a community blog, and anyone who has a few karma points can post their own content here. Having your post show up on Less Wrong doesn't require that anyone else endorse it. Roko's basic points were promptly rejected by other commenters on Less Wrong, and as ideas not much seems to have come of them. People who bring up the basilisk on other sites don't seem to be super interested in the specific claims Roko made either; discussions tend to gravitate toward various older ideas that Roko cited (e.g., timeless decision theory (TDT) and coherent extrapolated volition (CEV)) or toward Eliezer's controversial moderation action.

In July 2014, David Auerbach wrote a Slate piece criticizing Less Wrong users and describing them as "freaked out by Roko's Basilisk." Auerbach wrote, "Believing in Roko’s Basilisk may simply be a 'referendum on autism'" — which I take to mean he thinks a significant number of Less Wrong users accept Roko’s reasoning, and they do so because they’re autistic (!). But the Auerbach piece glosses over the question of how many Less Wrong users (if any) in fact believe in Roko’s basilisk. Which seems somewhat relevant to his argument...?

The idea that Roko's thought experiment holds sway over some community or subculture seems to be part of a mythology that’s grown out of attempts to reconstruct the original chain of events; and a big part of the blame for that mythology's existence lies on Less Wrong's moderation policies. Because the discussion topic was banned for several years, Less Wrong users themselves had little opportunity to explain their views or address misconceptions. A stew of rumors and partly-understood forum logs then congealed into the attempts by people on RationalWiki, Slate, etc. to make sense of what had happened.

I gather that the main reason people thought Less Wrong users were "freaked out" about Roko's argument was that Eliezer deleted Roko's post and banned further discussion of the topic. Eliezer has since sketched out his thought process on Reddit:

When Roko posted about the Basilisk, I very foolishly yelled at him, called him an idiot, and then deleted the post. [...] Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why this was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent---of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents [= Eliezer’s early model of indirectly normative agents that reason with ideal aggregated preferences] torturing people who had heard about Roko's idea. [...] What I considered to be obvious common sense was that you did not spread potential information hazards because it would be a crappy thing to do to someone. The problem wasn't Roko's post itself, about CEV, being correct.

This, obviously, was a bad strategy on Eliezer's part. Looking at the options in hindsight: To the extent it seemed plausible that Roko's argument could be modified and repaired, Eliezer shouldn't have used Roko's post as a teaching moment and loudly chastised him on a public discussion thread. To the extent this didn't seem plausible (or ceased to seem plausible after a bit more analysis), continuing to ban the topic was a (demonstrably) ineffective way to communicate the general importance of handling real information hazards with care.

 


On that note, point number two:

  • Roko's argument wasn’t an attempt to get people to donate to Friendly AI (FAI) research. In fact, the opposite is true.

Roko's original argument was not 'the AI agent will torture you if you don't donate, therefore you should help build such an agent'; his argument was 'the AI agent will torture you if you don't donate, therefore we should avoid ever building such an agent.' As Gerard noted in the ensuing discussion thread, threats of torture "would motivate people to form a bloodthirsty pitchfork-wielding mob storming the gates of SIAI [= MIRI] rather than contribute more money." To which Roko replied: "Right, and I am on the side of the mob with pitchforks. I think it would be a good idea to change the current proposed FAI content from CEV to something that can't use negative incentives on x-risk reducers."

Roko saw his own argument as a strike against building the kind of software agent Eliezer had in mind. Other Less Wrong users, meanwhile, rejected Roko's argument both as a reason to oppose AI safety efforts and as a reason to support AI safety efforts.

Roko's argument was fairly dense, and it continued into the discussion thread. I’m guessing that this (in combination with the temptation to round off weird ideas to the nearest religious trope, plus misunderstanding #1 above) is why RationalWiki's version of Roko’s basilisk gets introduced as

a futurist version of Pascal’s wager; an argument used to try and suggest people should subscribe to particular singularitarian ideas, or even donate money to them, by weighing up the prospect of punishment versus reward.

If I'm correctly reconstructing the sequence of events: Sites like RationalWiki report in the passive voice that the basilisk is "an argument used" for this purpose, yet no examples ever get cited of someone actually using Roko’s argument in this way. Via citogenesis, the claim then gets incorporated into other sites' reporting.

(E.g., in Outer Places: "Roko is claiming that we should all be working to appease an omnipotent AI, even though we have no idea if it will ever exist, simply because the consequences of defying it would be so great." Or in Business Insider: "So, the moral of this story: You better help the robots make the world a better place, because if the robots find out you didn’t help make the world a better place, then they’re going to kill you for preventing them from making the world a better place.")

In terms of argument structure, the confusion is equating the conditional statement 'P implies Q' with the argument 'P; therefore Q.' Someone asserting the conditional isn’t necessarily arguing for Q; they may be arguing against P (based on the premise that Q is false), or they may be agnostic between those two possibilities. And misreporting about which argument was made (or who made it) is kind of a big deal in this case: 'Bob used a bad philosophy argument to try to extort money from people' is a much more serious charge than 'Bob owns a blog where someone once posted a bad philosophy argument.'
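
Spelling that out in symbols (my gloss, not anything from the original thread): let P be "this kind of agent gets built" and Q be "people who didn't help get threatened with torture." Asserting the conditional commits you to neither of the argument forms below; Roko ran something like the second, while later reporting attributed the first to him.

```latex
% Two different arguments one can make after asserting the conditional P -> Q.
\[
\text{modus ponens:}\quad (P \to Q),\; P \;\vdash\; Q
\qquad\qquad
\text{modus tollens:}\quad (P \to Q),\; \neg Q \;\vdash\; \neg P
\]
```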

 


Lastly:

  • "Formally speaking, what is correct decision-making?" is an important open question in philosophy and computer science, and formalizing precommitment is an important part of that question.

Moving past Roko's argument itself, a number of discussions of this topic risk misrepresenting the debate's genre. Articles on Slate and RationalWiki strike an informal tone, and that tone can be useful for getting people thinking about interesting science/philosophy debates. On the other hand, if you're going to dismiss a question as unimportant or weird, it's important not to give the impression that working decision theorists are similarly dismissive.

What if your devastating take-down of string theory is intended for consumption by people who have never heard of 'string theory' before? Even if you're sure string theory is hogwash, then, you should be wary of giving the impression that the only people discussing string theory are the commenters on a recreational physics forum. Good reporting by non-professionals, whether or not they take an editorial stance on the topic, should make it obvious that there's academic disagreement about which approach to Newcomblike problems is the right one. The same holds for disagreement about topics like long-term AI risk or machine ethics.

If Roko's original post is of any pedagogical use, it's as an unsuccessful but imaginative stab at drawing out the diverging consequences of our current theories of rationality and goal-directed behavior. Good resources for these issues (both for discussion on Less Wrong and elsewhere) include:

The Roko's basilisk ban isn't in effect anymore, so you're welcome to direct people here (or to the Roko's basilisk wiki page, which also briefly introduces the relevant issues in decision theory) if they ask about it. Particularly low-quality discussions can still get deleted (or politely discouraged), though, at moderators' discretion. If anything here was unclear, you can ask more questions in the comments below.

Experiences in applying "The Biodeterminist's Guide to Parenting"

64 juliawise 17 July 2015 07:19PM

I'm posting this because LessWrong was very influential on how I viewed parenting, particularly the emphasis on helping one's brain work better. In this context, creating and influencing another person's brain is an awesome responsibility.


It turned out to be a lot more anxiety-provoking than I expected. I don't think that's necessarily a bad thing, as the possibility of screwing up someone's brain should make a parent anxious, but it's something to be aware of. I've heard some blithe "Rational parenting could be a very high-impact activity!" statements from childless LWers who may be interested to hear some experiences in actually applying that.


One thing that really scared me about trying to raise a child with the healthiest-possible brain and body was the possibility that I might not love her if she turned out to not be smart. 15 months in, I'm no longer worried. Evolution has been very successful at producing parents and children that love each other despite their flaws, and our family is no exception. Our daughter Lily seems to be doing fine, but if she turns out to have disabilities or other problems, I'm confident that we'll roll with the punches.

 

Cross-posted from The Whole Sky.

 


Before I got pregnant, I read Scott Alexander's (Yvain's) excellent Biodeterminist's Guide to Parenting and was so excited to have this knowledge. I thought how lucky my child would be to have parents who knew and cared about how to protect her from things that would damage her brain.

Real life, of course, got more complicated. It's one thing to intend to avoid neurotoxins, but another to arrive at the grandparents' house and find they've just had ant poison sprayed. What do you do then?


Here are some tradeoffs Jeff and I have made between things that are good for children in one way but bad in another, or things that are good for children but really difficult or expensive.


Germs and parasites


The hygiene hypothesis states that lack of exposure to germs and parasites increases risk of auto-immune disease. Our pediatrician recommended letting Lily play in the dirt for this reason.


While exposure to animal dander and pollution increases asthma risk later in life, it seems that being exposed to these in the first year of life actually protects against asthma. Apparently if you're going to live in a house with roaches, you should do it in the first year or not at all.


Except some stuff in dirt is actually bad for you.


Scott writes:

Parasite-infestedness of an area correlates with national IQ at about r = -0.82. The same is true of US states, with a slightly reduced correlation coefficient of -0.67 (p<0.0001). . . . When an area eliminates parasites (like the US did for malaria and hookworm in the early 1900s) the IQ for the area goes up at about the right time.


Living with cats as a child seems to increase risk of schizophrenia, apparently via toxoplasmosis. But in order to catch toxoplasmosis from a cat, you have to eat its feces during the two weeks after it first becomes infected (which it’s most likely to do by eating birds or rodents carrying the disease). This makes me guess that most kids get it through tasting a handful of cat litter, dirt from the yard, or sand from the sandbox rather than simply through cat ownership. We live with indoor cats who don’t seem to be mousers, so I’m not concerned about them giving anyone toxoplasmosis. If we build Lily a sandbox, we’ll keep it covered when not in use.


The evidence is mixed about whether infections like colds during the first year of life increase or decrease your risk of asthma later. After the newborn period, we defaulted to being pretty casual about germ exposure.


Toxins in buildings


Our experiences with lead. Our experiences with mercury.


In some areas, it’s not that feasible to live in a house with zero lead. We live in Boston, where 87% of the housing was built before lead paint was banned. Even in a new building, we’d need to go far out of town before reaching soil that wasn’t near where a lead-painted building had been.


It is possible to do some renovations without exposing kids to lead. Jeff recently did some demolition of walls with lead paint, very carefully sealed off and cleaned up, while Lily and I spent the day elsewhere. Afterwards her lead level was no higher than it had been.


But Jeff got serious lead poisoning as a toddler while his parents did major renovations on their old house. If I didn’t think I could keep the child away from the dust, I wouldn’t renovate.


Recently a house across the street from us was gutted, with workers throwing debris out the windows and creating big plumes of dust (presumably lead-laden) that blew all down the street. Later I realized I should have called city building inspection services, which would have at least made them carry the debris into the dumpster instead of throwing it from the second story.


Floor varnish releases formaldehyde and other nasties as it cures. We kept Lily out of the house for a few weeks after Jeff redid the floors. We found it worthwhile to pay rent at our previous house in order to not have to live in the new house while this kind of work was happening.

 

Pressure-treated wood was treated with arsenic and chromium until around 2004 in the US. It has a greenish tint, though this may have faded with time. Playing on playsets or decks made of such wood increases children's cancer risk. It should not be used for furniture (I thought this would be obvious, but apparently it wasn't to some of my handyman relatives).


I found it difficult to know how to deal with fresh paint and other fumes in my building at work while I was pregnant. Women of reproductive age have a heightened sense of smell, and many pregnant women have heightened aversion to smells, so you can literally smell things some of your coworkers can’t (or don’t mind). The most critical period of development is during the first trimester, when most women aren’t telling the world they’re pregnant (because it’s also the time when a miscarriage is most likely, and if you do lose the pregnancy you might not want to have to tell the world). During that period, I found it difficult to explain why I was concerned about the fumes from the roofing adhesive being used in our building. I didn’t want to seem like a princess who thought she was too good to work in conditions that everybody else found acceptable. (After I told them I was pregnant, my coworkers were very understanding about such things.)


Food


Recommendations usually focus on what you should eat during pregnancy, but obviously children’s brain development doesn’t stop there. I’ve opted to take precautions with the food Lily and I eat for as long as I’m nursing her.


Claims that pesticide residues are poisoning children scare me, although most scientists seem to think the paper cited is overblown. Other sources say the levels of pesticides in conventionally grown produce are fine. We buy organic produce at home but eat whatever we’re served elsewhere.


I would love to see a study with families randomly selected to receive organic produce for the first 8 years of the kids’ lives, then looking at IQ and hyperactivity. But no one’s going to do that study because of how expensive 8 years of organic produce would be.


The Biodeterminist’s Guide doesn’t mention PCBs in the section on fish, but fish (particularly farmed salmon) are a major source of these pollutants. They don’t seem to be as bad as mercury, but are neurotoxic. Unfortunately their half-life in the body is around 14 years, so if you have even a vague idea of getting pregnant ever in your life you shouldn’t be eating farmed salmon (or Atlantic/farmed salmon, bluefish, wild striped bass, white and Atlantic croaker, blackback or winter flounder, summer flounder, or blue crab).


I had the best intentions of eating lots of the right kind of high-omega-3, low-pollutant fish during and after pregnancy. Unfortunately, fish was the only food I developed an aversion to. Now that Lily is eating food on her own, we tried several sources of omega-3 and found that kippered herring was the only success. Lesson: it’s hard to predict what foods kids will eat, so keep trying.


In terms of hassle, I underestimated how long I would be “eating for two” in the sense that anything I put in my body ends up in my child’s body. Counting pre-pregnancy (because mercury has a half-life of around 50 days in the body, so sushi you eat before getting pregnant could still affect your child), pregnancy, breastfeeding, and presuming a second pregnancy, I’ll probably spend about 5 solid years feeding another person via my body, sometimes two children at once. That’s a long time in which you have to consider the effect of every medication, every cup of coffee, every glass of wine on your child. There are hardly any medications considered completely safe during pregnancy and lactation; most things are in Category C, meaning there’s some evidence from animal trials that they may be bad for human children.
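
As a rough illustration of that half-life point (my own arithmetic, not from the Guide): with a 50-day half-life and simple exponential elimination, here is how much of a mercury dose would still be in the body some months later.

```python
# Sketch: fraction of a mercury dose remaining, assuming a ~50-day half-life.
half_life_days = 50

def fraction_remaining(days):
    return 0.5 ** (days / half_life_days)

print(f"{fraction_remaining(90):.0%} left after 90 days")    # ~29%
print(f"{fraction_remaining(180):.0%} left after 180 days")  # ~8%
```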


Fluoride


Too much fluoride is bad for children’s brains. The CDC recently recommended lowering fluoride levels in municipal water (though apparently because of concerns about tooth discoloration more than neurotoxicity). Around the same time, the American Dental Association began recommending the use of fluoride toothpaste as soon as babies have teeth, rather than waiting until they can rinse and spit.


Cavities are actually a serious problem even in baby teeth, because of the pain and possible infection they cause children. Pulling them messes up the alignment of adult teeth. Drilling on children too young to hold still requires full anesthesia, which is dangerous itself.


But Lily isn’t particularly at risk for cavities. 20% of children get a cavity by age six, and they are disproportionately poor, African-American, and particularly Mexican-American children (presumably because of different diet and less ability to afford dentists). 75% of cavities in children under 5 occur in 8% of the population.


We decided to have Lily brush without toothpaste, avoid juice and other sugary drinks, and see the dentist regularly.


Home pesticides


One of the most commonly applied insecticides makes kids less smart. This isn’t too surprising, given that it kills insects by disabling their nervous system. But it’s not something you can observe on a small scale, so it’s not surprising that the exterminator I talked to brushed off my questions with “I’ve never heard of a problem!”


If you get carpenter ants in your house, you basically have to choose between poisoning them or letting them structurally damage the house. We’ve only seen a few so far, but if the problem progresses, we plan to:

1) remove any rotting wood in the yard where they could be nesting

2) have the perimeter of the building sprayed

3) place gel bait in areas kids can’t access

4) only then spray poison inside the house.


If we have mice we’ll plan to use mechanical traps rather than poison.


Flame retardants


Starting in the 1970s, California required a high degree of flame resistance from furniture. This basically meant that US manufacturers sprayed flame-retardant chemicals on anything made of polyurethane foam, such as sofas, rug pads, nursing pillows, and baby mattresses.

The law recently changed, due to growing acknowledgement that the carcinogenic and neurotoxic chemicals were more dangerous than the fires they were supposed to be preventing. Even firefighters opposed the use of the flame retardants, because when people die in fires it’s usually from smoke inhalation rather than burns, and firefighters don’t want to breathe the smoke from your toxic sofa (which will eventually catch fire even with the flame retardants).


We’ve opted to use furniture from companies that have stopped using flame retardants (like Ikea and others listed here). Apparently futons are okay if they’re stuffed with cotton rather than foam. We also have some pre-1970s furniture that tested clean for flame retardants. You can get foam samples tested for free.


The main way children ingest flame retardants is that the chemicals settle into dust on the floor, and children crawl around in the dust. If you don’t want to get rid of your furniture, frequent damp-mopping would probably help.


The standards for mattresses are so stringent that the chemical sprays aren’t generally used, and instead most mattresses are wrapped in a flame-resistant barrier which apparently isn’t toxic. I contacted the companies that made our mattresses and they’re fine.


Ratings for chemical safety of children’s car seats here.


Thoughts on IQ


A lot of people, when I start talking like this, say things like “Well, I lived in a house with lead paint/played with mercury/etc. and I’m still alive.” And yes, I played with mercury as a child, and Jeff is still one of the smartest people I know even after getting acute lead poisoning as a child.

But I do wonder if my mind would work a little better without the mercury exposure, and if Jeff would have had an easier time in school without the hyperactivity (a symptom of lead exposure). Given the choice between a brain that works a little better and one that works a little worse, who wouldn’t choose the one that works better?


We’ll never know how an individual’s nervous system might have been different with a different childhood. But we can see population-level effects. The Environmental Protection Agency, for example, is fine with calculating the expected benefit of making coal plants stop releasing mercury by looking at the expected gains in terms of children’s IQ and increased earnings.


Scott writes:

A 15 to 20 point rise in IQ, which is a little more than you get from supplementing iodine in an iodine-deficient region, is associated with half the chance of living in poverty, going to prison, or being on welfare, and with only one-fifth the chance of dropping out of high-school (“associated with” does not mean “causes”).


Salkever concludes that for each lost IQ point, males experience a 1.93% decrease in lifetime earnings and females experience a 3.23% decrease. If Lily would earn about what I do, saving her one IQ point would save her $1600 a year or $64000 over her career. (And that’s not counting the other benefits she and others will reap from her having a better-functioning mind!) I use that for perspective when making decisions. $64000 would buy a lot of the posh prenatal vitamins that actually contain iodine, or organic food, or alternate housing while we’re fixing up the new house.
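
For anyone who wants to check that arithmetic: the $1600/year and $64000 figures are roughly what Salkever's 3.23% estimate gives if you assume an annual income of about $50,000 and a 40-year career; those two inputs are my back-solved assumptions, not numbers from Salkever.

```python
# Back-of-the-envelope check of the IQ-point figures above.
pct_per_iq_point = 0.0323   # Salkever's estimate for females
annual_income = 50_000      # assumed (back-solved from the post's numbers)
career_years = 40           # assumed

per_year = annual_income * pct_per_iq_point
per_career = per_year * career_years

print(f"per year:   ${per_year:,.0f}")    # ~$1,615
print(f"per career: ${per_career:,.0f}")  # ~$64,600
```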


Conclusion


There are times when Jeff and I prioritize social relationships over protecting Lily from everything that might harm her physical development. It’s awkward to refuse to go to someone’s house because of the chemicals they use, or to refuse to eat food we’re offered. Social interactions are good for children’s development, and we value those as well as physical safety. And there are times when I’ve had to stop being so careful because I was getting paralyzed by anxiety (literally perched in the rocker with the baby trying not to touch anything after my in-laws scraped lead paint off the outside of the house).


But we also prioritize neurological development more than most parents, and we hope that will have good outcomes for Lily.

Pattern-botching: when you forget you understand

31 malcolmocean 15 June 2015 10:58PM

It’s all too easy to let a false understanding of something replace your actual understanding. Sometimes this is an oversimplification, but it can also take the form of an overcomplication. I have an illuminating story:

Years ago, when I was young and foolish, I found myself in a particular romantic relationship that would later end for epistemic reasons, when I was slightly less young and slightly less foolish. Anyway, this particular girlfriend of mine was very into healthy eating: raw, organic, home-cooked, etc. During her visits my diet would change substantially for a few days. At one point, we got in a tiny fight about something, and in a not-actually-desperate chance to placate her, I semi-jokingly offered: “I’ll go vegetarian!”

“I don’t care,” she said with a sneer.

…and she didn’t. She wasn’t a vegetarian. Duhhh... I knew that. We’d made some ground beef together the day before.

So what was I thinking? Why did I say “I’ll go vegetarian” as an attempt to appeal to her values?

 

(I’ll invite you to take a moment to come up with your own model of why that happened. You don't have to, but it can be helpful for evading hindsight bias of obviousness.)

 

(Got one?)

 

Here's my take: I pattern-matched a bunch of actual preferences she had with a general "healthy-eating" cluster, and then I went and pulled out something random that felt vaguely associated. It's telling, I think, that I don't even explicitly believe that vegetarianism is healthy. But to my pattern-matcher, they go together nicely.

I'm going to call this pattern-botching.† Pattern-botching is when you pattern-match a thing "X", as following a certain model, but then implicit queries to that model return properties that aren't true about X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use.

†Maybe this already has a name, but I've read a lot of stuff and it feels like a distinct concept to me.

Examples of pattern-botching

So, that's pattern-botching, in a nutshell. Now, examples! We'll start with some simple ones.

Calmness and pretending to be a zen master

In my Againstness Training video, past!me tries a bunch of things to calm down. In the pursuit of "calm", I tried things like...

  • dissociating
  • trying to imitate a zen master
  • speaking really quietly and timidly

None of these are the desired state. The desired state is present, authentic, and can project well while speaking assertively.

But that would require actually being in a different state, which to my brain at the time seemed hard. So my brain constructed a pattern around the target state, and said "what's easy and looks vaguely like this?" and generated the list above. Not as a list, of course! That would be too easy. It generated each one individually as a plausible course of action, which I then tried, and which Val then called me out on.

Personality Types

I'm quite gregarious, extraverted, and generally unflappable by noise and social situations. Many people I know describe themselves as HSPs (Highly Sensitive Persons) or as very introverted, or as "not having a lot of spoons". These concepts are related—or perhaps not related, but at least correlated—but they're not the same. And even if these three terms did all mean the same thing, individual people would still vary in their needs and preferences.

Just this past week, I found myself talking with an HSP friend L, and noting that I didn't really know what her needs were. Like I knew that she was easily startled by loud noises and often found them painful, and that she found motion in her periphery distracting. But beyond that... yeah. So I told her this, in the context of a more general conversation about her HSPness, and I said that I'd like to learn more about her needs.

L responded positively, and suggested we talk about it at some point. I said, "Sure," then added, "though it would be helpful for me to know just this one thing: how would you feel about me asking you about a specific need in the middle of an interaction we're having?"

"I would love that!" she said.

"Great! Then I suspect our future interactions will go more smoothly," I responded. I realized what had happened was that I had conflated L's HSPness with... something else. I'm not exactly sure what, but a preference for indirect communication, perhaps? I have another friend, who is also sometimes short on spoons, who I model as finding that kind of question stressful because it would kind of put them on the spot.

I've only just recently been realizing this, so I suspect that I'm still doing a ton of this pattern-botching with people, that I haven't specifically noticed.

Of course, having clusters makes it easier to have heuristics about what people will do, without knowing them too well. A loose cluster is better than nothing. I think the issue is when we do know the person well, but we're still relying on this cluster-based model of them. It's telling that I was not actually surprised when L said that she would like it if I asked about her needs. On some level I kind of already knew it. But my botched pattern was making me doubt what I knew.

False aversions

CFAR teaches a technique called "Aversion Factoring", in which you try to break down the reasons why you don't do something, and then consider each reason. In some cases, the reasons are sound reasons, so you decide not to try to force yourself to do the thing. If not, then you want to make the reasons go away. There are three types of reasons, with different approaches.

One is for when you have a legitimate issue, and you have to redesign your plan to avert that issue. The second is where the thing you're averse to is real but isn't actually bad, and you can kind of ignore it, or maybe use exposure therapy to get yourself more comfortable with it. The third is... when the outcome would be an issue, but it's not actually a necessary outcome of the thing. As in, it's a fear that's vaguely associated with the thing at hand, but the thing you're afraid of isn't real.

All of these share a structural similarity with pattern-botching, but the third one in particular is a great example. The aversion is generated from a property that the thing you're averse to doesn't actually have. Unlike with a miscalibrated aversion (#2 above), it's usually pretty obvious under careful inspection that the fear itself is based on a botched model of the thing you're averse to.

Taking the training wheels off of your model

One other place this structure shows up is in the difference between what something looks like when you're learning it versus what it looks like once you've learned it. Many people learn to ride a bike while actually riding a four-wheeled vehicle: training wheels. I don't think anyone makes the mistake of thinking that the ultimate bike will have training wheels, but in other contexts it's much less obvious.

The remaining three examples look at how pattern-botching shows up in learning contexts, where people implicitly forget that they're only partway there.

Rationality as a way of thinking

CFAR runs 4-day rationality workshops, which currently are evenly split between specific techniques and how to approach things in general. Let's consider what kinds of behaviours spring to mind when someone encounters a problem and asks themselves: "what would be a rational approach to this problem?"

  • someone with a really naïve model, who hasn't actually learned much about applied rationality, might pattern-match "rational" to "hyper-logical", and think "What Would Spock Do?"
  • someone who is somewhat familiar with CFAR and its instructors but who still doesn't know any rationality techniques, might complete the pattern with something that they think of as being archetypal of CFAR-folk: "What Would Anna Salamon Do?"
  • CFAR alumni, especially new ones, might pattern-match "rational" as "using these rationality techniques" and conclude that they need to "goal factor" or "use trigger-action plans"
  • someone who gets rationality would simply apply that particular structure of thinking to their problem

In the case of a bike, we see hundreds of people biking around without training wheels, and so that becomes the obvious example from which we generalize the pattern of "bike". In other learning contexts, though, most people—including, sometimes, the people at the leading edge—are still in the early learning phases, so the training wheels are the rule, not the exception.

So people start thinking that the figurative bikes are supposed to have training wheels.

Incidentally, this can also be the grounds for strawman arguments where detractors of the thing say, "Look at these bikes [with training wheels]! How are you supposed to get anywhere on them?!"

Effective Altruism

We potentially see a similar effect with topics like Effective Altruism. It's a movement that is still in its infancy, which means that nobody has it all figured out. So when trying to answer "How do I be an effective altruist?" our pattern-matchers might pull up a bunch of examples of things that EA-identified people have been commonly observed to do.

  • donating 10% of one's income to a strategically selected charity
  • going to a coding bootcamp and switching careers, in order to Earn to Give
  • starting a new organization to serve an unmet need, or to serve a need more efficiently
  • supporting the Against Malaria Foundation

...and this generated list might be helpful for various things, but be wary of thinking that it represents what Effective Altruism is. It's possible—it's almost inevitable—that we don't actually know what the most effective interventions are yet. We will potentially never actually know, but we can expect that in the future we will generally know more than at present. Which means that the current sampling of good EA behaviours likely does not actually even cluster around the ultimate set of behaviours we might expect.

Creating a new (platform for) culture

At my intentional community in Waterloo, we're building a new culture. But that's actually a by-product: our goal isn't to build this particular culture but to build a platform on which many cultures can be built. It's like how as a company you don't just want to be building the product but rather building the company itself, or "the machine that builds the product," as Foursquare founder Dennis Crowley puts it.

What I started to notice, though, is that we had started to confuse the particular, transitional culture that we have at our house with either (a) the particular target culture that we're aiming for, or (b) the more abstract range of cultures that will be constructable on our platform.

So from a training wheels perspective, we might totally eradicate words like "should". I did this! It was really helpful. But once I had removed the word from my idiolect, it became unhelpful to still be treating it as being a touchy word. Then I heard my mentor use it, and I remembered that the point of removing the word wasn't to not ever use it, but to train my brain to think without a particular structure that "should" represented.

This shows up on much larger scales too. Val from CFAR was talking about a particular kind of fierceness, "hellfire", that he sees as fundamental and important, and he noted that it seemed to be incompatible with the kind of culture my group is building. I initially agreed with him, which was kind of dissonant for my brain, but then I realized that hellfire was only incompatible with our training culture, not the entire set of cultures that could ultimately be built on our platform. That is, engaging with hellfire would potentially interfere with the learning process, but it's not ultimately proscribed by our culture platform.

Conscious cargo-culting

I think it might be helpful to repeat the definition:

Pattern-botching is when you pattern-match a thing "X" as following a certain model, but then implicit queries to that model return properties that aren't true about X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use.

It's kind of like if you were doing a cargo-cult, except you knew how airplanes worked.

(Cross-posted from malcolmocean.com)

16 types of useful predictions

90 Julia_Galef 10 April 2015 03:31AM

How often do you make predictions (either about future events, or about information that you don't yet have)? If you're a regular Less Wrong reader you're probably familiar with the idea that you should make your beliefs pay rent by saying, "Here's what I expect to see if my belief is correct, and here's how confident I am," and that you should then update your beliefs accordingly, depending on how your predictions turn out.

And yet… my impression is that few of us actually make predictions on a regular basis. Certainly, for me, there has always been a gap between how useful I think predictions are, in theory, and how often I make them.

I don't think this is just laziness. I think it's simply not a trivial task to find predictions to make that will help you improve your models of a domain you care about.

At this point I should clarify that there are two main goals predictions can help with:

  1. Improved Calibration (e.g., realizing that I'm only correct about Domain X 70% of the time, not 90% of the time as I had mistakenly thought). 
  2. Improved Accuracy (e.g., going from being correct in Domain X 70% of the time to being correct 90% of the time)

If your goal is just to become better calibrated in general, it doesn't much matter what kinds of predictions you make. So calibration exercises typically grab questions with easily obtainable answers, like "How tall is Mount Everest?" or  "Will Don Draper die before the end of Mad Men?" See, for example, the Credence Game, Prediction Book, and this recent post. And calibration training really does work.
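(As a concrete illustration, here is a minimal Python sketch of what checking your calibration against a log of past predictions can look like. The log entries are invented, and the mapping from qualitative confidence labels to probabilities is just the rough 90%/75%/60% correspondence suggested later in this post; the sketch groups predictions by stated confidence and compares each bucket's hit rate to the probability you act as if it has.)

```python
# Minimal sketch: measuring calibration from a simple prediction log.
# The log entries and the label-to-probability mapping are illustrative.
from collections import defaultdict

# Each entry: (stated confidence, whether the prediction came true)
log = [
    ("very confident", True), ("very confident", True), ("very confident", False),
    ("pretty confident", True), ("pretty confident", False),
    ("weakly confident", False), ("weakly confident", True),
]

# Rough numeric meaning of each qualitative bucket (an assumption)
nominal = {"very confident": 0.90, "pretty confident": 0.75, "weakly confident": 0.60}

hits, totals = defaultdict(int), defaultdict(int)
for confidence, came_true in log:
    totals[confidence] += 1
    hits[confidence] += came_true          # True counts as 1, False as 0

for bucket, p in nominal.items():
    if totals[bucket]:
        rate = hits[bucket] / totals[bucket]
        print(f"{bucket}: {rate:.0%} correct, versus the {p:.0%} you act as if you have")
```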

But even though making predictions about trivia will improve my general calibration skill, it won't help me improve my models of the world. That is, it won't help me become more accurate, at least not in any domains I care about. If I answer a lot of questions about the heights of mountains, I might become more accurate about that topic, but that's not very helpful to me.

So I think the difficulty in prediction-making is this: The set {questions whose answers you can easily look up, or otherwise obtain} is a small subset of all possible questions. And the set {questions whose answers I care about} is also a small subset of all possible questions. And the intersection between those two subsets is much smaller still, and not easily identifiable. As a result, prediction-making tends to seem too effortful, or not fruitful enough to justify the effort it requires.

But the intersection's not empty. It just requires some strategic thought to determine which answerable questions have some bearing on issues you care about, or -- approaching the problem from the opposite direction -- how to take issues you care about and turn them into answerable questions.

I've been making a concerted effort to hunt for members of that intersection. Here are 16 types of predictions that I personally use to improve my judgment on issues I care about. (I'm sure there are plenty more, though, and hope you'll share your own as well.)

  1. Predict how long a task will take you. This one's a given, considering how common and impactful the planning fallacy is. 
    Examples: "How long will it take to write this blog post?" "How long until our company's profitable?"
  2. Predict how you'll feel in an upcoming situation. Affective forecasting – our ability to predict how we'll feel – has some well-known flaws. 
    Examples: "How much will I enjoy this party?" "Will I feel better if I leave the house?" "If I don't get this job, will I still feel bad about it two weeks later?"
  3. Predict your performance on a task or goal. 
    One thing this helps me notice is when I've been trying the same kind of approach repeatedly without success. Even just the act of making the prediction can spark the realization that I need a better game plan.
    Examples: "Will I stick to my workout plan for at least a month?" "How well will this event I'm organizing go?" "How much work will I get done today?" "Can I successfully convince Bob of my opinion on this issue?" 
  4. Predict how your audience will react to a particular social media post (on Facebook, Twitter, Tumblr, a blog, etc.).
    This is a good way to hone your judgment about how to create successful content, as well as your understanding of your friends' (or readers') personalities and worldviews.
    Examples: "Will this video get an unusually high number of likes?" "Will linking to this article spark a fight in the comments?" 
  5. When you try a new activity or technique, predict how much value you'll get out of it.
    I've noticed I tend to be inaccurate in both directions in this domain. There are certain kinds of life hacks I feel sure are going to solve all my problems (and they rarely do). Conversely, I am overly skeptical of activities that are outside my comfort zone, and often end up pleasantly surprised once I try them.
    Examples: "How much will Pomodoros boost my productivity?" "How much will I enjoy swing dancing?"
  6. When you make a purchase, predict how much value you'll get out of it.
    Research on money and happiness shows two main things: (1) as a general rule, money doesn't buy happiness, but also that (2) there are a bunch of exceptions to this rule. So there seems to be lots of potential to improve your prediction skill here, and spend your money more effectively than the average person.
    Examples: "How much will I wear these new shoes?" "How often will I use my club membership?" "In two months, will I think it was worth it to have repainted the kitchen?" "In two months, will I feel that I'm still getting pleasure from my new car?"
  7. Predict how someone will answer a question about themselves.
    I often notice assumptions I've been making about other people, and I like to check those assumptions when I can. Ideally I get interesting feedback both about the object-level question, and about my overall model of the person.
    Examples: "Does it bother you when our meetings run over the scheduled time?" "Did you consider yourself popular in high school?" "Do you think it's okay to lie in order to protect someone's feelings?"
  8. Predict how much progress you can make on a problem in five minutes.
    I often have the impression that a problem is intractable, or that I've already worked on it and have considered all of the obvious solutions. But then when I decide (or when someone prompts me) to sit down and brainstorm for five minutes, I am surprised to come away with a promising new approach to the problem.  
    Example: "I feel like I've tried everything to fix my sleep, and nothing works. If I sit down now and spend five minutes thinking, will I be able to generate at least one new idea that's promising enough to try?"
  9. Predict whether the data in your memory supports your impression.
    Memory is awfully fallible, and I have been surprised at how often I am unable to generate specific examples to support a confident impression of mine (or how often the specific examples I generate actually contradict my impression).
    Examples: "I have the impression that people who leave academia tend to be glad they did. If I try to list a bunch of the people I know who left academia, and how happy they are, what will the approximate ratio of happy/unhappy people be?"
    "It feels like Bob never takes my advice. If I sit down and try to think of examples of Bob taking my advice, how many will I be able to come up with?" 
  10. Pick one expert source and predict how they will answer a question.
    This is a quick shortcut to testing a claim or settling a dispute.
    Examples: "Will Cochrane Medical support the claim that Vitamin D promotes hair growth?" "Will Bob, who has run several companies like ours, agree that our starting salary is too low?" 
  11. When you meet someone new, take note of your first impressions of him. Predict how likely it is that, once you've gotten to know him better, you will consider your first impressions of him to have been accurate.
    A variant of this one, suggested to me by CFAR alum Lauren Lee, is to make predictions about someone before you meet him, based on what you know about him ahead of time.
    Examples: "All I know about this guy I'm about to meet is that he's a banker; I'm moderately confident that he'll seem cocky." "Based on the one conversation I've had with Lisa, she seems really insightful – I predict that I'll still have that impression of her once I know her better."
  12. Predict how your Facebook friends will respond to a poll.
    Examples: I often post social etiquette questions on Facebook. For example, I recently did a poll asking, "If a conversation is going awkwardly, does it make things better or worse for the other person to comment on the awkwardness?" I confidently predicted most people would say "worse," and I was wrong.
  13. Predict how well you understand someone's position by trying to paraphrase it back to him.
    The illusion of transparency is pernicious.
    Examples: "You said you think running a workshop next month is a bad idea; I'm guessing you think that's because we don't have enough time to advertise, is that correct?"
    "I know you think eating meat is morally unproblematic; is that because you think that animals don't suffer?"
  14. When you have a disagreement with someone, predict how likely it is that a neutral third party will side with you after the issue is explained to her.
    For best results, don't reveal which of you is on which side when you're explaining the issue to your arbiter.
    Example: "So, at work today, Bob and I disagreed about whether it's appropriate for interns to attend hiring meetings; what do you think?"
  15. Predict whether a surprising piece of news will turn out to be true.
    This is a good way to hone your bullshit detector and improve your overall "common sense" models of the world.
    Examples: "This headline says some scientists uploaded a worm's brain -- after I read the article, will the headline seem like an accurate representation of what really happened?"
    "This viral video purports to show strangers being prompted to kiss; will it turn out to have been staged?"
  16. Predict whether a quick online search will turn up any credible sources supporting a particular claim.
    Example: "Bob says that watches always stop working shortly after he puts them on – if I spend a few minutes searching online, will I be able to find any credible sources saying that this is a real phenomenon?"

I have one additional, general thought on how to get the most out of predictions:

Rationalists tend to focus on the importance of objective metrics. And as you may have noticed, a lot of the examples I listed above fail that criterion. For example, "Predict whether a fight will break out in the comments? Well, there's no objective way to say whether something officially counts as a 'fight' or not…" Or, "Predict whether I'll be able to find credible sources supporting X? Well, who's to say what a credible source is, and what counts as 'supporting' X?"

And indeed, objective metrics are preferable, all else equal. But all else isn't equal. Subjective metrics are much easier to generate, and they're far from useless. Most of the time it will be clear enough, once you see the results, whether your prediction basically came true or not -- even if you haven't pinned down a precise, objectively measurable success criterion ahead of time. Usually the result will be a common sense "yes," or a common sense "no." And sometimes it'll be "um...sort of?", but that can be an interestingly surprising result too, if you had strongly predicted the results would point clearly one way or the other. 

Along similar lines, I usually don't assign numerical probabilities to my predictions. I just take note of where my confidence falls on a qualitative "very confident," "pretty confident," "weakly confident" scale (which might correspond to something like 90%/75%/60% probabilities, if I had to put numbers on it).

There's probably some additional value you can extract by writing down quantitative confidence levels, and by devising objective metrics that are impossible to game, rather than just relying on your subjective impressions. But in most cases I don't think that additional value is worth the cost you incur from turning predictions into an onerous task. In other words, don't let the perfect be the enemy of the good. Or in other other words: the biggest problem with your predictions right now is that they don't exist.

Dissolving philosophy

2 [deleted] 26 May 2015 10:45AM

Summary: a large chunk of the history of Western philosophy is about finding out what kinds of less-conscious algorithms the human mind uses to arrive at certain intuitions.

In Plato's Republic, Socrates runs around Athens talking with people, trying to find an answer to the question: "What is justice?" Two and a half thousand years later we still don't have a truly definitive answer. We could spend another thousand years or two pondering it, but I suspect it would be better to reformulate the question in a more answerable way. So let's look at what Socrates is trying to do here, what his method is, and what his actual question is!

It is not an empirical, scientific question that can be answered by observing something whose existence is independent of the human mind. Rather, the question is about a feature of the human mind, not a feature of the external reality out there.

However, Socrates is not simply conducting an opinion survey. He is not content simply to find that 74% of Athenians think justice means obeying laws. Socrates also argues against definitions of justice he considers _wrong_.


So, apparently, justice in this question relates to something that does not exist outside the human mind, but we can still have wrong opinions about it.

The method Socrates is employing is the following. He assumes that when people see an actual action, they can intuitively judge it just or unjust, and that this judgement will be seen as _correct_. Well, not always, but at least when they are dispassionate and have no vested interest. So according to Socrates, any definition of justice can be tested by thought experiments that are sufficiently dispassionate and disinterested for the audience that they will actually use their Justice Sensors to form a judgement about them, and not, say, their passions (like anger or greed) or their interests.

What Socrates is doing here, then, is asking people to make an algorithm that predicts which acts a dispassionate and disinterested observer will find just or unjust.

Example: "I think justice is paying debts." "Okay dude, but what if you borrowed a sword from a friend and now you see he is really mad at people and wants to go on a murderous rampage. Would it be just / righteous / correct to pay the debt and return the sword now?" "Uh, no."

This means: "I propose this algorithm." "This algorithm predicts you would find hypothetical situation X  just. Would you?" "Uh, no."
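(To make that structure concrete, here is a toy Python sketch; the predicate and the hypothetical case are invented for illustration, not anything from Plato. A proposed "algorithm" for justice is tested against a case, and the mismatch with the intuitive judgement refutes the definition.)

```python
# Toy sketch: refuting a proposed definition of justice with a counterexample.
# The definition and the case below are invented for illustration.

def is_just_candidate(action):
    """Proposed algorithm: 'justice is paying your debts.'"""
    return action.get("pays_a_debt", False)

# Counterexample: returning a borrowed sword to a friend bent on a rampage.
case = {
    "description": "return the borrowed sword to a friend who intends violence",
    "pays_a_debt": True,
    "intuitively_just": False,   # the dispassionate observer's verdict
}

if is_just_candidate(case) != case["intuitively_just"]:
    print("Counterexample found; the proposed definition is refuted:")
    print("  " + case["description"])
```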

The big question: is he looking for any algorithm that _happens_ to predict human intuitions of justice, or looking for the algorithm the human brain _actually_ uses? Well, they probably did not know much about algorithms back then (they considered the brain an organ for cooling the blood), but from our own angle, since we know the brain uses algorithms, any algorithm that predicts really well what another algorithm does is more or less the same algorithm.

So, "What is justice?" roughly means this: "What algorithm does our brain use when we intuitively consider something just or unjust?"

I am not claiming you can reduce all of philosophy to this, but apparently a significant chunk of Western philosophy ("footnotes to Plato") you can. 

If we see philosophy this way, we can also see better how it overlaps with science and yet why it is distinct from it. The basic ideas are the same: propose hypotheses, test them with (thought) experiments. The difference is that science is focused on looking outward, at the observable reality outside the mind. When science wants to learn about the brain, it invariably treats it as an external object, manipulating and observing it as such: for example, looking at which areas light up in an fMRI scan.

Philosophy is, apparently, a form of cognitive science: a way of learning about the brain that looks inward rather than outward. Here the experimenter observes his own brain from the inside, and generally tries to consciously notice the subconscious algorithms his brain works with.


This is also why philosophy can feel so "truthy" on the gut level. You can have these kinds of "I knew it! I knew it all along, dammit, just did not connect the dots!" types of euphoric eureka experiences (or: "how could I have been so stupid" types of experiences) far more often in philosophy or math than in the empirical sciences such as biology, because here you study how your own brain works, and you study it from the inside. It is about one part of your brain learning how the other part works. (OK, physics is empirical enough and yet it happens. But the point is, it does not really happen in the empirical parts of physics, like measuring the mass of a particle. It happens in the mathematical parts of physics.)

Who are your favorite "hidden rationalists"?

18 aarongertler 11 January 2015 06:26AM

Quick summary: "Hidden rationalists" are what I call authors who espouse rationalist principles, and probably think of themselves as rational people, but don't always write on "traditional" Less Wrong-ish topics and probably haven't heard of Less Wrong.

I've noticed that a lot of my rationalist friends seem to read the same ten blogs, and while it's great to have a core set of favorite authors, it's also nice to stretch out a bit and see how everyday rationalists are doing cool stuff in their own fields of expertise. I've found many people who push my rationalist buttons in fields of interest to me (journalism, fitness, etc.), and I'm sure other LWers have their own people in their own fields.

So I'm setting up this post as a place to link to/summarize the work of your favorite hidden rationalists. Be liberal with your suggestions!

Another way to phrase this: Who are the people/sources who give you the same feelings you get when you read your favorite LW posts, but who many of us probably haven't heard of?

 

Here's my list, to kick things off:

 

  • Peter Sandman, professional risk communication consultant. Often writes alongside Jody Lanard. Specialties: Effective communication, dealing with irrational people in a kind and efficient way, carefully weighing risks and benefits. My favorite recent post of his deals with empathy for Ebola victims and is a major, Slate Star Codex-esque tour de force. His "guestbook comments" page is better than his collection of web articles, but both are quite good.
  • Doug McGuff, MD, fitness guru and author of the exercise book with the highest citation-to-page ratio of any I've seen. His big thing is "superslow training", where you perform short and extremely intense workouts (video here). I've been moving in this direction for about 18 months now, and I've been able to cut my workout time approximately in half without losing strength. May not work for everyone, but reminds me of Leverage Research's sleep experiments; if it happens to work for you, you gain a heck of a lot of time. I also love the way he emphasizes the utility of strength training for all ages/genders -- very different from what you'd see on a lot of weightlifting sites.
  • Philosophers' Mail. A website maintained by applied philosophers at the School of Life, which reminds me of a hippy-dippy European version of CFAR (in a good way). Not much science, but a lot of clever musings on the ways that philosophy can help us live, and some excellent summaries of philosophers who are hard to read in the original. (Their piece on Vermeer is a personal favorite, as is this essay on Simon Cowell.) This recently stopped posting new material, but the School of Life now collects similar work through The Book of Life.

Finally, I'll mention something many more people are probably aware of: I Am A, where people with interesting lives and experiences answer questions about those things. Few sites are better for broadening one's horizons; lots of concentrated honesty. Plus, the chance to update on beliefs you didn't even know you had.



Once more: Who are the people/sources who give you the same feeling you get when you read your favorite LW posts, but who many of us probably haven't heard of?

 

[Link] Neural networks trained on expert Go games have just made a major leap

15 ESRogs 02 January 2015 03:48PM

From the arXiv:

Move Evaluation in Go Using Deep Convolutional Neural Networks

Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver

The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function. In this paper we investigate whether deep convolutional networks can be used to directly represent and learn this knowledge. We train a large 12-layer convolutional neural network by supervised learning from a database of human professional games. The network correctly predicts the expert move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GnuGo in 97% of games, and matched the performance of a state-of-the-art Monte-Carlo tree search that simulates a million positions per move.

This approach looks like it could be combined with MCTS. Here's their conclusion:

In this work, we showed that large deep convolutional neural networks can predict the next move made by Go experts with an accuracy that exceeds previous methods by a large margin, approximately matching human performance. Furthermore, this predictive accuracy translates into much stronger move evaluation and playing strength than has previously been possible. Without any search, the network is able to outperform traditional search based programs such as GnuGo, and compete with state-of-the-art MCTS programs such as Pachi and Fuego.

In Figure 2 we present a sample game played by the 12-layer CNN (with no search) versus Fuego (searching 100K rollouts per move) which was won by the neural network player. It is clear that the neural network has implicitly understood many sophisticated aspects of Go, including good shape (patterns that maximise long term effectiveness of stones), Fuseki (opening sequences), Joseki (corner patterns), Tesuji (tactical patterns), Ko fights (intricate tactical battles involving repeated recapture of the same stones), territory (ownership of points), and influence (long-term potential for territory). It is remarkable that a single, unified, straightforward architecture can master these elements of the game to such a degree, and without any explicit lookahead.

On the other hand, we note that the network still has weaknesses: notably it sometimes fails to understand the global picture, behaving as if the life and death status of large groups has been incorrectly assessed. Interestingly, it is precisely these global aspects of the game for which Monte-Carlo search excels, suggesting that these two techniques may be largely complementary. We have provided a preliminary proof-of-concept that MCTS and deep neural networks may be combined effectively. It appears that we now have two core elements that scale effectively with increased computational resource: scalable planning, using Monte-Carlo search; and scalable evaluation functions, using deep neural networks. In the future, as parallel computation units such as GPUs continue to increase in performance, we believe that this trajectory of research will lead to considerably stronger programs than are currently possible.
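(For readers who want a concrete sense of what a convolutional move-prediction network looks like, here is a minimal PyTorch sketch. It is not the paper's architecture: the number of layers, channel widths, and input feature planes here are illustrative assumptions, and the real network uses richer board features. The sketch just maps board feature planes to a probability distribution over the 361 points of a 19x19 board; training it would amount to minimizing cross-entropy against the expert's chosen move, as in the supervised setup the abstract describes.)

```python
# Minimal sketch (not the paper's exact architecture): a small convolutional
# policy network mapping Go board feature planes to a distribution over moves.
import torch
import torch.nn as nn

class TinyGoPolicyNet(nn.Module):
    def __init__(self, in_planes=8, width=64, hidden_layers=4):
        super().__init__()
        layers = [nn.Conv2d(in_planes, width, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(hidden_layers):
            layers += [nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 1, kernel_size=1)]   # one score per board point
        self.body = nn.Sequential(*layers)

    def forward(self, boards):
        # boards: (batch, in_planes, 19, 19) feature planes, e.g. own stones,
        # opponent stones, liberties -- the exact encoding is an assumption here
        scores = self.body(boards)        # (batch, 1, 19, 19)
        return scores.flatten(1)          # (batch, 361) logits, one per point

net = TinyGoPolicyNet()
logits = net(torch.zeros(1, 8, 19, 19))   # a dummy empty-board batch
probs = torch.softmax(logits, dim=1)      # probability assigned to each move
print(probs.shape)                        # torch.Size([1, 361])
```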

H/T: Ken Regan

Edit -- see also: Teaching Deep Convolutional Neural Networks to Play Go (also published to the arXiv in December 2014), and Why Neural Networks Look Set to Thrash the Best Human Go Players for the First Time (MIT Technology Review article)
