Two kinds of Expectations, *one* of which is helpful for rational thinking

2 malcolmocean 20 June 2016 04:04PM

The word “expectation” is often used to refer to two totally distinct things: entitlement and anticipation. My basic opinion is that entitlement is a rather counterproductive mental stance, while anticipations are really helpful for improving your model of the world.

Here are some quick examples to whet your appetite…

1. Consider a parent who says to their teenager: “I expect you to be home by midnight.” The parent may or may not anticipate the teen being home on time (even after this remark). Instead, they’re staking out a right to be annoyed if they aren’t back on time.

Contrast this with someone telling the person they’re meeting for lunch “I expect I’ll be there by 12:10” as a way to let them know that they’re running a little late, so that the recipient of the message knows not to worry that maybe they’re not in the correct meeting spot, or that the other person has forgotten.

2. A slightly more involved example: I have a particular kind of chocolate bar that I buy every week at the grocery store. Or at least I used to, until a few weeks ago when they stopped stocking it. They still stock the Dark version, but not the Extra Dark version I’ve been buying for 3 years. So the last few weeks I’ve been disappointed when I go to look. (Eventually I’ll conclude that it’s gone forever, but for now I remain hopeful.)

There’s a temptation to feel indignant at the absence of this chocolate bar. I had an expectation that it would be there, and it wasn’t! How dare they not stock it? I’m a loyal customer, who shops there every week, and who even tells others about their points card program! I deserve to have my favorite chocolate bar in stock!

…says this voice. This is the voice of entitlement.

The entitlement also wants to not just politely ask a shelf stocker if they have any out back, but to do things like walk up to the customer service desk and demand that they give me a discount on the Dark ones because they’ve been out of the Extra Dark ones for three weeks now. To make a fuss.

Entitlement is the feeling that you have a right to something. That you deserve it. That it’s owed to you.

(Relevant aside: the word “ought” used to be a synonym for “owed”, i.e. the past tense of “to owe”.)

A brief history of entitlement

That’s not what the term “entitlement” used to mean, though. It used to refer not to the feeling but simply to the fact: that you were owed something. Everyone deserved different things, according to their titles: kings and queens an enormous amount, lords and landowners a lesser though still large amount, and so on down the line. In some cases, people at the bottom of the hierarchy may in fact have been considered deserving of scarcity and suffering.

What changed?

Western culture shifted from exalting rule by one (monarchy) or few (oligarchy) or the rich (plutocracy) to being broadly more democratic, meritocratic, and then ultimately relatively egalitarian, in terms of ideals. What this means is that in modern times, it may be the case that being rich or white does in fact grant someone certain privileges, in the sense that they may in fact be less likely to get arrested, or more likely to get promoted…

…but broadly speaking, mainstream culture will no longer agree that they deserve these privileges. They are no longer entitled to them.

More broadly, nobody is really considered to be entitled to much of anything anymore—oh, except for a bunch of very basic, universal rights. The U.S. Bill of Rights lays out the rights the state grants Americans. The U.N. Declaration of Human Rights lays out the rights that U.N. countries grant everyone. In theory, anyway.

And since we no longer think that people deserve special privileges, anyone who acts like they do is called “entitled”. But now we’re talking about the feeling of entitlement, not actually having the right to some benefit.

Also, note that this isn’t just about class anymore: given the meritocratic context and a few other factors, people sometimes find themselves feeling like they deserve something because they worked hard for it. This isn’t a totally unreasonable way to feel, but the world doesn’t automagically reward people who work hard.

This principle is at play when older generations criticize millennials as being entitled, and then the millennials retort “well you said that if we just got a degree, then we’d have decent careers.” What the millennials are saying is that they had an expectation that they’d have prosperity, if they did a thing.

But are they actually feeling entitled to that thing? Are they relating to it in an entitled way? It’s hard to say, and probably depends on the individual. Let’s take an easier example.

Meet James Altucher

In his article How To Break All The Rules And Get Everything You Want, Altucher describes a multipart story in which he breaks some rules to get what he wants.

We arrived at the “Boy Meets Girl” fashion show and the woman with the clipboard said, “You are not on the list.”

WHAT!?

I had been telling my daughter Mollie all week we would go to this show.

Mollie was very excited.

“Don’t worry,” Nathan had told me earlier in the day, “you will be on the list.” I am extremely grateful he got us invited to the show.

Two more times in the article, James has that “WHAT!?” reaction.

This reaction seems to me to be practically the epitome of an entitlement response: outrage. Particularly when he’s like: WHAT!? You let us in even though we weren’t on the list, but we’re at the back!? Note that the feeling of entitlement is usually not so obvious, even internally.

But note also that it’s possible to act entitled, even if you don’t feel entitled. I posit that we might call this something like “entitled to ask” or “entitled to try”.

To illustrate this, let’s take a response to James’ article. In When “Life Hacking” Is Really White Privilege, Jen Dziura writes:

I have often had encounters with men who take something that’s not theirs, and when they encounter no outright resistance — there’s no loud talking, no playground-style tussle — they assume everything is fine.

It is not fine.

Sometimes, you take the best desk for yourself in the new office. Sometimes, you take credit for someone else’s work or ideas. Sometimes, you’re on a team, and someone from the client company assumes that you — the tallest, whitest member — are in charge, and you do not correct them. Sometimes, it’s just that someone baked cookies to congratulate their team on a job well-done, and you’re not on that team but you wanted a cookie, and no one seemed to mind.

I have been the cookie guy. Probably with literal cookies, although probably a different situation—not that I would know, since I was just paying attention to the cookies.

And if someone had refused me the cookies, I wouldn’t have been like “WHAT!?”. I would have said something polite and moved on. But if someone had suggested I was rude for asking, I might have been a bit indignant: “I was just asking…”

But in order to be “just asking”, I also had to be assuming that the person would feel comfortable saying no if my request didn’t make sense. Assuming that giving me a “no” isn’t a costly action. Which is often not a safe assumption, for a myriad of reasons that are outside the scope of this post. But the effect is that even without having a subjective feeling of entitlement to anything in particular, I can be relating to a situation in an entitled way.

But I’m a Nice Guy!

There’s a concept that’s been around for a while, known as the Nice Guy phenomenon. The basic notion is of a person (canonically male, though not always) becoming frustrated when their attempts to transform a platonic friendship into a romantic and/or sexual relationship end in rejection. Feminist circles have sometimes criticized these men as objectifying women, but as Dan Fincke points out, in many cases the men are trying to relate to them deeply.

Still, Dan writes:

They want to earn love with their moral virtues, with their genuine friendship, and with their woman-honoring priorities that put knowing women as people over trying to just bed them.

Uh oh. Trying to earn love is a recipe for the meritocratic flavour of entitlement. Dan again, a little further down:

So at this point we come to the actual entitlement issue. It’s not that they feel entitled to sex—it’s much deeper and less superficial than that and these men deserve the respect of having that acknowledged. What they really feel entitled to is love.

At any rate, there usually is a sense of entitlement here, and it makes for unpleasant interactions when the guy finally shares his feelings for his friend. He has his hopes all up and expects her to reciprocate. (Here we probably have both kinds of expectation going on—entitlement and anticipation.)

Miri at Brute Reason clarifies that the problem isn’t feeling sad when you’re rejected. That’s natural and can make lots of sense. Same with:

  • Wishing the person would change their mind
  • Thinking that you would’ve made a good partner for this person
  • Thinking that you would’ve made a better partner for this person than whoever they’re interested in
  • Feeling embarrassed that you were rejected
  • Feeling like you don’t want to see them or talk to them anymore

Miri distinguishes these from the feeling “I deserve sex/romance from this person because I was their friend,” and goes on to name some actions which follow from this feeling of entitlement. These include:

  • Pressuring the person to change their mind (which isn’t the same as saying “Well, let me know if you ever change your mind” and then stepping back)
  • Guilt-tripping them for rejecting you (which isn’t the same as being honest about your feelings about the rejection)
  • Becoming cruel to the person to get back at them (i.e. “Whatever, I never liked you anyway, you [gendered slur]”)

I think that what Miri has highlighted here is a really solid application of the two channels model: the idea that you can have multiple interpretations of something at the same time, that can be alike in valence (in this case, both negative/hurting) but different in structure and implication—and potentially leading to different actions.

The difference in action can be stark—”Whatever, I never liked you anyway” vs “I still think you’re cool, even if I feel pretty burned.”—or quite subtle… what, you might ask, is the difference between “guilt-tripping someone for rejecting you”, and “being honest about your feelings about the rejection”?

Without the two channels model, we might say that the former is when you’re entitled, and the latter is when you’re not. But the two channels model suggests that it’s more like, guilt-tripping is what happens when your entitlements own you, instead of you owning them.

So you feel entitled? Okay, accept that. Not in the sense of endorsing it, but in the sense of accepting reality as it is. The reality is that you feel entitled. One way to do this while staying outside of the frame is to say something like “so it seems that a bunch of what I’m feeling right now is entitlement”. Either to yourself, or if it makes sense, to share that with the person you’re talking with.

If the guy in this situation talks honestly about his feelings of rejection and loneliness, that could be experienced as guilt-tripping or as making the person take care of him:

I feel really rejected now. It’s so frustrating, like, I’m so unlovable. Forever alone, right here.

But maybe if he’s able to get outside of just being the feelings, and talk about the overarching structure of what’s going on:

“It seems I’m feeling both a sense of rejection, but also like I’ve been setting myself up to feel entitled to your love and affection… and I guess that doesn’t make sense. I’m feeling frustrated and lonely, and at the same time… wanting to not relate to you from there.”

If I try, I can imagine that that phrasing might sound over-the-top to some people, but it’s actually how many of my friends and I talk… and it allows us to navigate tense situations while remaining on the “same side”. We stay on the same side by putting the feelings in the center where they can be talked about, and being clear that the relating doesn’t need to be run by those feelings. I go into more detail about the value of this kind of language here.

I realize that it might not be possible to talk at this level in a given relationship. First of all, it requires the capacity to think thoughts like that when you’re in an emotional state (hint: practice when you’re calm!). Even more challengingly, it requires a certain kind of trust and shared assumptions in the relationship, which may not be available.

With those shared assumptions, much less verbose expressions can still have that same page feeling. Without them, even the most clear articulation can nonetheless be experienced as an attempt at manipulation.

Without a good segue, we now turn to the final section: expectations, entitlements, anticipations, and desire.

Anticipations and Desire

When I was maybe 15, a friend and I had a principle we used for navigating relationships with our romantic interests. We would go into a situation with “no intentions and no expectations”. One framing of this is that it was to protect against disappointment, but I think it could also be understood as a defense against the whole entitlement debacle: if I had an “expectation” that my crush and I were going to kiss, but she didn’t want to, well… then what? I wouldn’t kiss her without her consent, but… was it okay to even expect that, if I didn’t know what she wanted?

And so we come back to the breakdown I introduced at the start: expectations as including both anticipations and entitlements. I seriously salute my 15-year-old self for managing to avoid the entitlement-related issues (well, at least in the situations when I remembered to use this principle).

The problem was, in turning off expectations, I had shut off not only entitlements but anticipations as well. And anticipations are important!

First of all, denotationally: from an epistemic perspective, you want to be able to predict what’s going to happen. Not just so that you can remember to bring condoms, but also so that you’re psychologically prepared for the sort of situation you might be navigating. Projecting what will happen in the future is important.

Then there’s the second, more connotational part of the term “anticipation”, which is the emotional quality: the pleasure of considering a longed-for event. The book Rekindling Desire contains quotations like:

Anticipation is the central ingredient in sexual desire.
[…] sex has a major cognitive component — the most important element for desire is positive anticipation.

What this means is that if you try to avoid having anticipations, you can end up with a reduced sense of desire. Hormones and curiosity being what they were, this wasn’t an issue for my teenage self on a physical level, but even now I notice a subtle effect that I think has the same roots…

I’ve sometimes found it hard to tap into my sense of what it is that I want in relationships or in physically intimate contexts. I know what feels good in the moment—pleasure gradients aren’t hard—but it’s been challenging to cultivate a sense of taste for the kinds of intimacy I want, and I think that a large part of that is the resistance I have for letting myself cultivate desire through anticipation.

An article published just a few days ago (but after I’d drafted this whole post) touches on how this may be a common phenomenon:

“I want more men to get to know their own bodies and desires. […]

“Feminist men often fall into the trap of thinking that the opposite of male sexual entitlement–the opposite of men using other people’s bodies to get themselves off without any concern for that person’s consent or desire–is to focus entirely on their partner’s pleasure and deny any preferences of their own. No. The opposite of male sexual entitlement is two (or more) people working together–playing together, rather–to create the experiences they want.”

So one conclusion I’m making as part of breaking down expectations into entitlements and anticipations is that I can start doing more anticipating of things, as long as I don’t let myself get trapped in having entitlements as well. As long as I don’t hinge my sense of self-worth on having my expectations fulfilled and on never experiencing rejection. As long as I can remember that having no preferences unsatisfied by way of having no preferences isn’t actually satisfying.

“The gap between vision and current reality is also a source of energy. If there were no gap, there would be no need for any action to move towards the vision. We call this gap creative tension.”
— Peter Senge, The Fifth Discipline

The Two Kinds of Expectations + Rationality

I’ve spent a lot of time talking about how this affects interpersonal dynamics, but I want to briefly note that this distinction matters a lot for thinking quality as well:

Having entitlement-based relationships to people or systems is kind of like writing the bottom line before you know what the argument will be. It’s assuming you know what makes sense or know what will work, even though you don’t have all of the information, and then precommitting to be reluctant to change your mind.

Having anticipations, on the contrary, is fundamental to making your beliefs pay rent: in order for your beliefs to be entangled with the real world, they must suggest which events to anticipate, and importantly, which events not to anticipate.

There’s a question, too, of how expectations show up when trying to coordinate a team (or a loose network of people with a shared goal). I think a sports analogy is actually valuable here: if we’re on a soccer team, it’s critical that I can expect that if I pass you the ball in a certain way, you’ll be able to kick it directly at the goal. I need to know this so that I know when to do it, because it’s an effective technique when performed well. But if that expectation is about entitlement rather than anticipation, then it will cause me to be less focused on whether my pass made sense in this situation and more focused on whether I can blame you for missing the shot.

My money’s on the team with anticipation, not the one with entitlement.

This article crossposted from malcolmocean.com.

Use unique, non-obvious terms for nuanced concepts

18 malcolmocean 20 February 2016 11:25PM

Naming things! Naming things is hard. It's been claimed that it's one of the hardest parts of computer science. Now, this might sound surprising, but one of my favoritely named concepts is Kahneman's System 1 and System 2.

I want you to pause for a few seconds and consider what comes to mind when you read just the bolded phrase above.

Got it?

If you're familiar with the concepts of S1 and S2, then you probably have a pretty rich sense of what I'm talking about. Or perhaps you have a partial notion: "I think it was about..." or something. If you've never been exposed to the concept, then you probably have no idea.

Now, Kahneman could have reasonably named these systems lots of other things, like "emotional cognition" and "rational cognition"... or "fast, automatic thinking" and "slow, deliberate thinking". But now imagine that it had been "emotional and rational cognition" that Kahneman had written about, and the effect on the earlier paragraph.

It would be about the same for those who had studied it in depth, but now those who had heard about it briefly (or maybe at one point knew about the concepts) would be reminded of that one particular contrast between S1 and S2 (emotion/reason) and be primed to think that was the main one, forgetting about all of the other parameters that that distinction seeks to describe. Those who had never heard of Kahneman's research might assume that they basically knew what the terms were about, because they already have a sense of what emotion and reason are.

This is related to a concept known as verbal overshadowing, where a verbal description of a scene can cause eyewitnesses to misremember the details of the scene. Words can disrupt lots of other things too, including our ability to think clearly about concepts.

An example of this in action is the Ask and Guess Culture model (and later Tell, and Reveal). People who are trying to use the models become hugely distracted by the particular names of the entities in the model, which have only a rough bearing on the nuanced elements of these cultures. Even after thinking about this a ton myself, I still found myself accidentally assuming that questions are an Ask Culture thing.

So "System 1" and "System 2" have several advantages:

  • they don't immediately and easily seem like you already understand them if you haven't been exposed to that particular source
  • they don't overshadow people who do know them into assuming that the names contain the most important features

Another example that I think is decent (though not as clean as S1/S2) is Scott Alexander's use of Red Tribe and Blue Tribe to refer to culture clusters that roughly correspond to right and left political leanings in the USA. (For readers in most other countries: the US has their colors backwards... blue is left wing and red is right wing.) The colors make it reasonably easy to associate and remember, but unless you've read the post (or talked with someone who has) you won't necessarily know the jargon.

Jargon vs in-jokes

All of the examples I've listed above are essentially jargon—terminology that isn't available to the general public. I'm generally in favour of jargon! If you want to precisely and concisely convey a concept that doesn't already have its own word, then you have two options.

"Coining new jargon words (neologisms) is an alternative to formulating unusually precise meanings of commonly-heard words when one needs to convey a specific meaning." — fubarobfusco on a LW thread

Doing the latter is often safe when you're in a technical context. "Energy" is a colloquial term, but it also has a precise technical meaning. Since in technical contexts people will tend to assume that all such terms have technical meanings (or even learn said meanings early on), there is little risk of confusion here. Usually.

I'm going to make a case that it's worth treating nuanced concepts like in-jokes: don't make the meaning feel like it's in the term. Now, I'm not sold that this is a good idea all the time, but it seems to have some merit to it. I'm interested in where it works and where it doesn't; don't take this article to suggest I think it's universally good. Let's jam on where it's good.

Communication is built on shared understanding. Much of this comes from the commons: almost all of the words you're reading in this blog post are not words that you and I had to guarantee we both understood before I could write the post. Sometimes, blog posts (or books, lectures, etc.) will contain definitions, or will try to triangulate a concept with examples. The author hopes that the reader will indeed have a similar handle on the word they're using after reading the definition. (The reader may not, of course. Also, they might think they do. Or be confused.)

When you have the chance to interact with someone in real-time, 1-on-1, you can often gauge their understanding because they'll try to paraphrase the thing, and you can usually tell if the thing that they say is the kind of thing someone who understood would say. This is great, because then you can feel confident that you can use that concept as a building block in explaining further concepts.

One common failure mode of communication is when people assume that they're using the same building blocks as each other, when in fact they're using importantly different concepts. This is the issue that rationalist taboo is designed to combat: forbid use of a confounding word and force the conversationalists to build the concept up from component parts again.

Another way to reduce the occurrence of this sort of thing is to use jargon and in-jokes, because a person who doesn't already have the shared understanding will simply draw a blank. You had to be there, and if you weren't, it's obvious that something key is missing.

I once had a long conversation with someone, and we ended up using a lot of the objects we had with us as props when explaining certain concepts. This had the curious effect that if we wanted to reference our shared understanding of the earlier concept, we could refer to the object and it became really clear that it was our shared understanding we were referencing, not some more general thing. So I could say "the banana thing" to refer to him having explored the notion that evilness is a property of the map, not the territory, by remarking that a banana can't be evil but that we can think it evil.

The important thing here is that it felt like it was easier to point clearly at that topic by saying "the banana thing", because we both knew what that was and didn't need to accidentally overshadow it, by saying "the objects aren't evil thing" which might eventually get turned into a catchphrase that seems to contain meaning but never actually contained the critical insight.

This prompted me to think that it might be valuable to buy a bunch of toys from a thrift store, and to keep them at hand when hanging out with a particular person or small group. When you have a concept to explore, you'd grab an unused toy that seemed to suit it decently well, and then you'd gesture with it while explaining the concept. Then later you could refer to "the pink sparkly ball thing" or simply "this thing" while gesturing at the ball. Possibly, the other person wouldn't remember, or not immediately. But if they did, you could be much more confident that you were on the same page. It's a kind of shared mnemonic handle.

[Image: a pink and purple sparkly ball]

In some ways, this is already a natural part of human communication: I recall years ago talking to a friend and saying "oh, it's like the thing we talked about on my porch last summer" and she immediately knew what I meant. I'm basically proposing to take it further, by using props or by inventing new words.

Unfortunately, terms often end up losing their nuance, for various reasons. Sometimes this happens because the small concept they were trying to point at happens to be surrounded by a vacuum, so it expands. Other times because of shibboleths and people wanting to use in-group words. Or the words are used playfully and poetically, for humor purposes, which then makes it less clear that they once had a precise meaning.

This suggests there might be a kind of terminological inflation thing going on. And to the extent that signalling by using jargon is anti-inductive, that'll dilute things too.

I think if you're trying to think complex thoughts, it's worth developing specialized language, not just with groups of people, but even in 1-on-1 contexts. Of course, pay attention so you don't use terms with people who totally don't know them.

And this, this developing of shared language beyond what's strictly necessary but still worthwhile... this, perhaps, we might call the pink and purple ball thing.

(this article crossposted from malcolmocean.com)

Less Wrong Study Hall: Now With 100% Less Tinychat

27 malcolmocean 09 November 2015 12:25AM

Eight months ago, I announced that the Less Wrong Study Hall, a virtual coworking space where people do pomodoros together, has moved to Complice. Complice is a software system I made to help people achieve their goals. About 20% of rationalists who've tried it have started using it full-time, which by my math gives signing up positive expected value. Anyway...

What follows is a brief history of the LWSH's development thus far. If you just wanna try it, click here: complice.co/room/lesswrong

By embedding the original tinychat window within a larger page, I let users see what the pomodoro timer was up to as soon as they joined, and the page also doesn't let breaks run overtime because the timer just keeps ticking. Also, users could now show a persistent status of what they're working on.


Ultimatums in the Territory

12 malcolmocean 28 September 2015 10:01PM

When you think of "ultimatums", what comes to mind?

Manipulativeness, maybe? Ultimatums are typically considered a negotiation tactic, and not a very pleasant one.

But there's a different thing that can happen, where an ultimatum is made, but where articulating it isn't a speech act but rather an observation. As in, the ultimatum wasn't created by the act of stating it, but rather, it already existed in some sense.

Some concrete examples: negotiating relationships

I had a tense relationship conversation a few years ago. We'd planned to spend the day together in the park, and I was clearly angsty, so my partner asked me what was going on. I didn't have a good handle on it, but I tried to explain what was uncomfortable for me about the relationship, and how I was confused about what I wanted. After maybe 10 minutes of this, she said, "Look, we've had this conversation before. I don't want to have it again. If we're going to do this relationship, I need you to promise we won't have this conversation again."

I thought about it. I spent a few moments simulating the next months of our relationship. I realized that I totally expected this to come up again, and again. Earlier on, when we'd had the conversation the first time, I hadn't been sure. But it was now pretty clear that I'd have to suppress important parts of myself if I was to keep from having this conversation.

"...yeah, I can't promise that," I said.

"I guess that's it then."

"I guess so."

I think a more self-aware version of me could have recognized, without her prompting, that my discomfort represented an irreconcilable part of the relationship, and that I basically already wanted to break up.

The rest of the day was a bit weird, but it was at least nice that we had resolved this. We'd realized that it was a fact about the world that there wasn't a serious relationship that we could have that we both wanted.

I sensed that when she posed the ultimatum, she wasn't doing it to manipulate me. She was just stating what kind of relationship she was interested in. It's like if you go to a restaurant and try to order a pad thai, and the waiter responds, "We don't have rice noodles or peanut sauce. You either eat somewhere else, or you eat something other than a pad thai."

An even simpler example would be that at the start of one of my relationships, my partner wanted to be monogamous and I wanted to be polyamorous (i.e. I wanted us both to be able to see other people and have other partners). This felt a bit tug-of-war-like, but eventually I realized that actually I would prefer to be single than be in a monogamous relationship.

I expressed this.

It was an ultimatum! "Either you date me polyamorously or not at all." But it wasn't me "just trying to get my way".

I guess the thing about ultimatums in the territory is that there's no bluff to call.

It happened in this case that my partner turned out to be really well-suited for polyamory, and so this worked out really well. We'd decided that if she got uncomfortable with anything, we'd talk about it, and see what made sense. For the most part, there weren't issues, and when there were, the openness of our relationship ended up just being a place where other discomforts were felt, not a generator of disconnection.

Normal ultimatums vs ultimatums in the territory

I use "in the territory" to indicate that this ultimatum isn't just a thing that's said but a thing that is true independently of anything being said. It's a bit of a poetic reference to the map-territory distinction.

No bluffing: preferences are clear

The key distinguishing piece with UITTs is, as I mentioned above, that there's no bluff to call: the ultimatum-maker isn't secretly really really hoping that the other person will choose one option or the other. These are the two best options as far as they can tell. They might have a preference: in the second story above, I preferred a polyamorous relationship to no relationship. But I preferred both of those to a monogamous relationship, and the ultimatum in the territory was me realizing and stating that.

This can actually be expressed formally, using what's called a preference vector. This comes from Keith Hipel at the University of Waterloo. If the tables in this next bit don't make sense, don't worry about it: all the important conclusions are expressed in the text.

First, we'll note that since each of us has two options, we can construct a table showing the four possible states (numbered 0-3 in the boxes).

                             My options
                             insist poly            don't insist
Partner   offer
options   relationship       3: poly relationship   1: mono relationship
          don't offer        2: no relationship     0: (??) no relationship

This representation is sometimes referred to as matrix form or normal form, and has the advantage of making it really clear who controls which state transitions (movements between boxes). Here, my decision controls which column we're in, and my partner's decision controls which row we're in.

Next, we can consider: of these four possible states, which are most and least preferred by each person? Here are my preferences, ordered from most to least preferred, left to right. A 1 in a box means the statement on the left is true in that state.

state                          3   2   1   0
I insist on polyamory          1   1   0   0
partner offers relationship    1   0   1   0

My preference vector (← preferred)

The order of the states represents my preferences (as I understand them) regardless of what my potential partner's preferences are. I only control movement in the top row (do I insist on polyamory or not). It's possible that they prefer no relationship to a poly relationship, in which case we'll end up in state 2. But I still prefer this state over state 1 (mono relationship) and state 0 (in which I don't ask for polyamory and my partner decides not to date me anyway). So whatever my partner's preferences are, I've definitely made a good choice for me, by insisting on polyamory.

This wouldn't be true if I were bluffing (if I preferred state 1 to state 2 but insisted on polyamory anyway). If I preferred 1 to 2, but I bluffed by insisting on polyamory, I would basically be betting on my partner preferring polyamory to no relationship, but this might backfire and get me a no relationship, when both of us (in this hypothetical) would have preferred a monogamous relationship to that. I think this phenomenon is one reason people dislike bluffy ultimatums.
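The "no bluff to call" condition above can be sketched in a few lines of code. This is my own hypothetical illustration, not anything from Hipel's formalism: the state numbers follow the tables in this post, and the check is that every outcome reachable by insisting is preferred to every outcome reachable by not insisting, so nothing hinges on guessing the partner's response.

```python
# States from the tables in this post, keyed by (my move, partner's move):
#   3: poly relationship   (insist, offer)
#   2: no relationship     (insist, don't offer)
#   1: mono relationship   (don't insist, offer)
#   0: no relationship     (don't insist, don't offer)

def no_bluff_to_call(preference):
    """True when insisting is safe regardless of the partner's response:
    every state in the "insist" column (3 and 2) is preferred to every
    state in the "don't insist" column (1 and 0).

    `preference` lists the four states from most to least preferred.
    """
    rank = {state: i for i, state in enumerate(preference)}  # lower = better
    return all(rank[a] < rank[b] for a in (3, 2) for b in (1, 0))

# My actual preferences: poly > no relationship > mono > state 0
print(no_bluff_to_call([3, 2, 1, 0]))  # True: a genuine UITT

# A bluffer's preferences: mono (1) ranked above no relationship (2)
print(no_bluff_to_call([3, 1, 2, 0]))  # False: insisting is a gamble
```

With the bluffer's ordering, insisting risks landing in state 2 when not insisting would have secured the (preferred) state 1, which is exactly the backfire described above.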

My partner's preferences turned out to be...

state                          1   3   2   0
I insist on polyamory          0   1   1   0
partner offers relationship    1   1   0   0

Partner's preference vector (← preferred)

You'll note that they preferred a poly relationship to no relationship, so that's what we got! Although as I said, we didn't assume that everything would go smoothly. We agreed that if this became uncomfortable for my partner, then they would tell me and we'd figure out what to do. Another way to think about this is that after some amount of relating, my partner's preference vector might actually shift such that they preferred no relationship to our polyamorous one. In which case it would no longer make sense for us to be together.

UITTs release tension, rather than creating it

In writing this post, I skimmed a wikihow article about how to give an ultimatum, in which they say:

"Expect a negative reaction. Hardly anyone likes being given an ultimatum. Sometimes it may be just what the listener needs but that doesn't make it any easier to hear."

I don't know how accurate the above is in general. I think they're talking about ultimatums like "either you quit smoking or we break up". I can say that I expect these properties of an ultimatum to contribute to the negative reaction:

  • stated angrily or otherwise demandingly
  • more extreme than your actual preferences, because you're bluffing
  • refers to what they need to do, versus your own preferences

So this already sounds like UITTs would have less of a negative reaction.

But I think the biggest reason is that they represent a really clear articulation of what one party wants, which makes it much simpler for the other party to decide what they want to do. Ultimatums in the territory tend to also be more of a realization that you then share, versus a deliberate strategy. And this realization causes a noticeable release of tension in the realizer too.

Let's contrast:

"Either you quit smoking or we break up!"

versus

"I'm realizing that as much as I like our relationship, it's really not working for me to be dating a smoker, so I've decided I'm not going to. Of course, my preferred outcome is that you stop smoking, not that we break up, but I realize that might not make sense for you at this point."

Of course, what's said here doesn't necessarily correspond to the preference vectors shown above. Someone could say the demanding first thing when they actually do have a UITT preference-wise, and someone who's trying to be really NVCy or something might say the second thing even though they're actually bluffing and would prefer to stay together anyway. But I think that in general they'll correlate pretty well.

The "realizing" seems similar to what happened to me 2 years ago on my own, when I realized that the territory was issuing me an ultimatum: either you change your habits or you fail at your goals. This is how the world works: your current habits will get you X, and you're declaring you want Y. On one level, it was sad to realize this, because I wanted to both eat lots of chocolate and to have a sixpack. Now this ultimatum is really in the territory.

Another example could be realizing that not only is your job not really working for you, but that it's already not-working to the extent that you aren't even really able to be fully productive. So you don't even have the option of just working a bit longer, because things are only going to get worse at this point. Once you realize that, it can be something of a relief, because you know that even if it's hard, you're going to find something better than your current situation.

Loose ends

More thoughts on the break-up story

One exercise I have left to the reader is creating the preference vectors for the break-up in the first story. HINT: (rot13'd) Vg'f fvzvyne gb gur cersrerapr irpgbef V qvq fubj, jvgu gjb qrpvfvbaf: fur pbhyq vafvfg ba ab shgher fhpu natfgl pbairefngvbaf be abg, naq V pbhyq pbagvahr gur eryngvbafuvc be abg.

An interesting note is that to some extent in that case I wasn't even expressing a preference but merely a prediction that my future self would continue to have this angst if it showed up in the relationship. So this is even more in the territory, in some senses. In my model of the territory, of course, but yeah. You can also think of this sort of as an unconscious ultimatum issued by the part of me that already knew I wanted to break up. It said "it's preferable for me to express angst in this relationship than to have it be angst free. I'd rather have that angst and have it cause a breakup than not have the angst."

Revealing preferences

I think that ultimatums in the territory are also connected to what I've called Reveal Culture (closely related to Tell Culture, but framed differently). Reveal cultures have the assumption that in some fundamental sense we're on the same side, which makes negotiations a very different thing... more of a collaborative design process. So it's very compatible with the idea that you might just clearly articulate your preferences.

Note that there doesn't always exist a UITT to express. In the polyamory example above, if I'd preferred a mono relationship to no relationship, then I would have had no UITT (though I could have bluffed). In this case, it would be much harder for me to express my preferences, because if I leave them unclear then there can be kind of implicit bluffing. And even once articulated, there's still no obvious choice. I prefer this, you prefer that. We need to compromise or something. It does seem clear that, with these preferences, if we don't end up with some relationship at the end, we messed up... but deciding how to resolve it is outside the scope of this post.

Knowing your own preferences is hard

Another topic this post will point at but not explore is: how do you actually figure out what you want? I think this is a mix of skill and process. You can get better at the general skill by practising trying to figure it out (and expressing it / acting on it when you do, and seeing if that works out well). One process I can think of that would be helpful is Gendlin's Focusing. Nate Soares has written about how introspection is hard and to some extent you don't ever actually know what you want: You don't get to know what you're fighting for. But, he notes,

"There are facts about what we care about, but they aren't facts about the stars. They are facts about us."

And they're hard to figure out. But to the extent that we can do so and then act on what we learn, we can get more of what we want, in relationships, in our personal lives, in our careers, and in the world.

(This article crossposted from my personal blog.)

Unlearning shoddy thinking

6 malcolmocean 21 August 2015 03:07AM

School taught me to write banal garbage because people would thumbs-up it anyway. That approach has been interfering with me trying to actually express my plans in writing because my mind keeps simulating some imaginary prof who will look it over and go "ehh, good enough".

Looking good enough isn't actually good enough! I'm trying to build an actual model of the world and a plan that will actually work.

Granted, school isn't necessarily all like this. In mathematics, you need to actually solve the problem. In engineering, you need to actually build something that works. But even in engineering reports, you can get away with a surprising amount of shoddy reasoning. A real example:

Since NodeJS uses the V8 JavaScript engine, it has native support for the common JSON (JavaScript Object Notation) format for data transfer, which means that interoperability between SystemQ and other CompanyX systems can still be fairly straightforward (Jelvis, 2011).

This excerpt is technically totally true, but it's also garbage, especially as a reason to use NodeJS. Sure, JSON is native to JS, but every major web programming language supports JSON. The pressure to provide citable justifications for decisions which were made for reasons more like "I enjoy JavaScript and am skilled with it" produces some deliberately confirmation-biased writing. This is just one pattern—there are many others.

I feel like I need to add a disclaimer here or something: I'm a ringed engineer, and I care a lot about the ethics of design, and I don't think any of my shoddy thinking has put any lives (or well-being, etc) at risk. I also don't believe that any of my shoddy thinking in design reports has violated academic integrity guidelines at my university (e.g. I haven't made up facts or sources).

But a lot of it was still shoddy. Most students are familiar with the process of stating a position, googling for a citation, then citing some expert who happened to agree. And it was shoddy because nothing in the school system was incentivizing me to make it otherwise, and I reasoned it would have cost more to only write stuff that I actually deeply and confidently believed, or to accurately and specifically present my best model of the subject at hand. I was trying to spend as little time and attention as possible working on school things, to free up more time and attention for working on my business, the productivity app Complice.

What I didn't realize was the cost of practising shoddy thinking.

Having finished the last of my school obligations, I've launched myself into some high-level roadmapping for Complice: what's the state of things right now, and where am I headed? And I've discovered a whole bunch of bad thinking habits. It's obnoxious.

I'm glad to be out.

(Aside: I wrote this entire post in April, when I had finished my last assignments & tests. I waited a while to publish it, until I had safely graduated. Wasn't super worried, but didn't want to take chances.)

Better Wrong Than Vague

So today.

I was already aware of a certain aversion I had to planning. So I decided to make things a bit easier with this roadmapping document, and base it on one my friend Oliver Habryka had written about his main project. He had created a 27-page outline in google docs, shared it with a bunch of people, and got some really great feedback and other comments. Oliver's introduction includes the following paragraph, which I decided to quote verbatim in mine:

This document was written while continuously repeating the mantra “better wrong than vague” in my head. When I was uncertain of something, I tried to express my uncertainty as precisely as possible, and when I found myself unable to do that, I preferred making bold predictions to vague statements. If you find yourself disagreeing with part of this document, then that means I at least succeeded in being concrete enough to be disagreed with.

In an academic context, at least up to the undergrad level, students are usually incentivized to follow "better vague than wrong". Because if you say something the slightest bit wrong, it'll produce a little "-1" in red ink.

And if you and the person grading you disagree, a vague claim is more likely to be interpreted favorably. There's a limit, of course: you usually can't just say "some studies have shown that some people sometimes found X to help". But still.

Practising being "good enough"

Nate Soares has written about the approach of whole-assed half-assing:

Your preferences are not "move rightward on the quality line." Your preferences are to hit the quality target with minimum effort.

If you're trying to pass the class, then pass it with minimum effort. Anything else is wasted motion.

If you're trying to ace the class, then ace it with minimum effort. Anything else is wasted motion.

My last two yearly review blog posts have followed the structure of talking about my year on the object level (what I did), the process level (how I did it) and the meta level (my more abstract approach to things). I think it's helpful to apply the same model here.

There are lots of things that humans often wish their neurology naturally optimized for. One thing it does optimize for, though, is minimum energy expenditure. This is a good thing! Brains are costly, and they'd have to function less well if they always ran at full power. But this has side effects. Here, the relevant side effect is that if you practice a certain process for a while, and it achieves the desired object-level results, you might lose awareness of the bigger-picture approach that you're trying to employ.

So in my case, I was practising passing my classes with minimum effort and not wasting motion, following the meta-level approach of whole-assed half-assing. But while the meta-level approach of "hitting the quality target with minimum effort" is a good one in all domains (some of which have much, much higher quality targets), the process of doing the bare minimum to create something with no obvious glaring flaws is not a process you want to be employing in your business. Or in trying to understand anything deeply.

Which I am now learning to do. And, in the process, unlearning the shoddy thinking I've been practising for the last 5 years.

Related LW post: Guessing the Teacher's Password

(This article crossposted from my blog)

Pattern-botching: when you forget you understand

31 malcolmocean 15 June 2015 10:58PM

It’s all too easy to let a false understanding of something replace your actual understanding. Sometimes this is an oversimplification, but it can also take the form of an overcomplication. I have an illuminating story:

Years ago, when I was young and foolish, I found myself in a particular romantic relationship that would later end for epistemic reasons, when I was slightly less young and slightly less foolish. Anyway, this particular girlfriend of mine was very into healthy eating: raw, organic, home-cooked, etc. During her visits my diet would change substantially for a few days. At one point, we got in a tiny fight about something, and in a not-actually-desperate attempt to placate her, I semi-jokingly offered: “I’ll go vegetarian!”

“I don’t care,” she said with a sneer.

…and she didn’t. She wasn’t a vegetarian. Duhhh... I knew that. We’d made some ground beef together the day before.

So what was I thinking? Why did I say “I’ll go vegetarian” as an attempt to appeal to her values?

 

(I’ll invite you to take a moment to come up with your own model of why that happened. You don't have to, but it can be helpful for evading hindsight bias of obviousness.)

 

(Got one?)

 

Here's my take: I pattern-matched a bunch of actual preferences she had with a general "healthy-eating" cluster, and then I went and pulled out something random that felt vaguely associated. It's telling, I think, that I don't even explicitly believe that vegetarianism is healthy. But to my pattern-matcher, they go together nicely.

I'm going to call this pattern-botching.† Pattern-botching is when you pattern-match a thing "X", as following a certain model, but then implicit queries to that model return properties that aren't true about X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use.

†Maybe this already has a name, but I've read a lot of stuff and it feels like a distinct concept to me.

Examples of pattern-botching

So, that's pattern-botching, in a nutshell. Now, examples! We'll start with some simple ones.

Calmness and pretending to be a zen master

In my Againstness Training video, past!me tries a bunch of things to calm down. In the pursuit of "calm", I tried things like...

  • dissociating
  • trying to imitate a zen master
  • speaking really quietly and timidly

None of these are the desired state. The desired state is present, authentic, and can project well while speaking assertively.

But that would require actually being in a different state, which to my brain at the time seemed hard. So my brain constructed a pattern around the target state, and said "what's easy and looks vaguely like this?" and generated the list above. Not as a list, of course! That would be too easy. It generated each one individually as a plausible course of action, which I then tried, and which Val then called me out on.

Personality Types

I'm quite gregarious, extraverted, and generally unflappable by noise and social situations. Many people I know describe themselves as HSPs (Highly Sensitive Persons) or as very introverted, or as "not having a lot of spoons". These concepts are related—or perhaps not related, but at least correlated—but they're not the same. And even if these three terms did all mean the same thing, individual people would still vary in their needs and preferences.

Just this past week, I found myself talking with an HSP friend L, and noting that I didn't really know what her needs were. Like I knew that she was easily startled by loud noises and often found them painful, and that she found motion in her periphery distracting. But beyond that... yeah. So I told her this, in the context of a more general conversation about her HSPness, and I said that I'd like to learn more about her needs.

L responded positively, and suggested we talk about it at some point. I said, "Sure," then added, "though it would be helpful for me to know just this one thing: how would you feel about me asking you about a specific need in the middle of an interaction we're having?"

"I would love that!" she said.

"Great! Then I suspect our future interactions will go more smoothly," I responded. I realized what had happened was that I had conflated L's HSPness with... something else. I'm not exactly sure what, but a preference for indirect communication, perhaps? I have another friend, who is also sometimes short on spoons, who I model as finding that kind of question stressful because it would kind of put them on the spot.

I've only just recently been realizing this, so I suspect that I'm still doing a ton of this pattern-botching with people, that I haven't specifically noticed.

Of course, having clusters makes it easier to have heuristics about what people will do, without knowing them too well. A loose cluster is better than nothing. I think the issue is when we do know the person well, but we're still relying on this cluster-based model of them. It's telling that I was not actually surprised when L said that she would like it if I asked about her needs. On some level I kind of already knew it. But my botched pattern was making me doubt what I knew.

False aversions

CFAR teaches a technique called "Aversion Factoring", in which you try to break down the reasons why you don't do something, and then consider each reason. In some cases, the reasons are sound reasons, so you decide not to try to force yourself to do the thing. If not, then you want to make the reasons go away. There are three types of reasons, with different approaches.

One is for when you have a legitimate issue, and you have to redesign your plan to avert that issue. The second is where the thing you're averse to is real but isn't actually bad, and you can kind of ignore it, or maybe use exposure therapy to get yourself more comfortable with it. The third is... when the outcome would be an issue, but it's not actually a necessary outcome of the thing. As in, it's a fear that's vaguely associated with the thing at hand, but the thing you're afraid of isn't real.

All of these share a structural similarity with pattern-botching, but the third one in particular is a great example. The aversion is generated from a property that the thing you're averse to doesn't actually have. Unlike a miscalibrated aversion (#2 above) it's usually pretty obvious under careful inspection that the fear itself is based on a botched model of the thing you're averse to.

Taking the training wheels off of your model

One other place this structure shows up is in the difference between what something looks like when you're learning it versus what it looks like once you've learned it. Many people learn to ride a bike while actually riding a four-wheeled vehicle: training wheels. I don't think anyone makes the mistake of thinking that the ultimate bike will have training wheels, but in other contexts it's much less obvious.

The remaining three examples look at how pattern-botching shows up in learning contexts, where people implicitly forget that they're only partway there.

Rationality as a way of thinking

CFAR runs 4-day rationality workshops, which currently are evenly split between specific techniques and how to approach things in general. Let's consider what kinds of behaviours spring to mind when someone encounters a problem and asks themselves: "what would be a rational approach to this problem?"

  • someone with a really naïve model, who hasn't actually learned much about applied rationality, might pattern-match "rational" to "hyper-logical", and think "What Would Spock Do?"
  • someone who is somewhat familiar with CFAR and its instructors but who still doesn't know any rationality techniques, might complete the pattern with something that they think of as being archetypal of CFAR-folk: "What Would Anna Salamon Do?"
  • CFAR alumni, especially new ones, might pattern-match "rational" as "using these rationality techniques" and conclude that they need to "goal factor" or "use trigger-action plans"
  • someone who gets rationality would simply apply that particular structure of thinking to their problem

In the case of a bike, we see hundreds of people biking around without training wheels, and so that becomes the obvious example from which we generalize the pattern of "bike". In other learning contexts, though, most people—including, sometimes, the people at the leading edge—are still in the early learning phases, so the training wheels are the rule, not the exception.

So people start thinking that the figurative bikes are supposed to have training wheels.

Incidentally, this can also be the grounds for strawman arguments where detractors of the thing say, "Look at these bikes [with training wheels]! How are you supposed to get anywhere on them?!"

Effective Altruism

We potentially see a similar effect with topics like Effective Altruism. It's a movement that is still in its infancy, which means that nobody has it all figured out. So when trying to answer "How do I be an effective altruist?" our pattern-matchers might pull up a bunch of examples of things that EA-identified people have been commonly observed to do.

  • donating 10% of one's income to a strategically selected charity
  • going to a coding bootcamp and switching careers, in order to Earn to Give
  • starting a new organization to serve an unmet need, or to serve a need more efficiently
  • supporting the Against Malaria Foundation

...and this generated list might be helpful for various things, but be wary of thinking that it represents what Effective Altruism is. It's possible—it's almost inevitable—that we don't actually know what the most effective interventions are yet. We will potentially never actually know, but we can expect that in the future we will generally know more than at present. Which means that the current sampling of good EA behaviours likely does not actually even cluster around the ultimate set of behaviours we might expect.

Creating a new (platform for) culture

At my intentional community in Waterloo, we're building a new culture. But that's actually a by-product: our goal isn't to build this particular culture but to build a platform on which many cultures can be built. It's like how as a company you don't just want to be building the product but rather building the company itself, or "the machine that builds the product,” as Foursquare founder Dennis Crowley puts it.

What I started to notice, though, is that we had started to confuse the particular, transitionary culture we have at our house with either (a) the particular target culture we're aiming for, or (b) the more abstract range of cultures that will be constructible on our platform.

So from a training wheels perspective, we might totally eradicate words like "should". I did this! It was really helpful. But once I had removed the word from my idiolect, it became unhelpful to still be treating it as being a touchy word. Then I heard my mentor use it, and I remembered that the point of removing the word wasn't to not ever use it, but to train my brain to think without a particular structure that "should" represented.

This shows up on much larger scales too. Val from CFAR was talking about a particular kind of fierceness, "hellfire", that he sees as fundamental and important, and he noted that it seemed to be incompatible with the kind of culture my group is building. I initially agreed with him, which was kind of dissonant for my brain, but then I realized that hellfire was only incompatible with our training culture, not the entire set of cultures that could ultimately be built on our platform. That is, engaging with hellfire would potentially interfere with the learning process, but it's not ultimately proscribed by our culture platform.

Conscious cargo-culting

I think it might be helpful to repeat the definition:

Pattern-botching is when you pattern-match a thing "X", as following a certain model, but then implicit queries to that model return properties that aren't true about X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use.

It's kind of like if you were doing a cargo-cult, except you knew how airplanes worked.

(Cross-posted from malcolmocean.com)

If you could push a button to eliminate one cognitive bias, which would you choose?

3 malcolmocean 09 April 2015 07:05AM

I realize this question is contrived, but I figure it might provoke some fun discussion, so here goes:

If you could push a button and have your brain modified to precisely remove a cognitive bias (and have no other unnecessary effects—most convenient possible world), which would you choose? Why?

What if you were choosing for the whole human race?

Request for Intelligence Philosophy Essay Topic Suggestions

3 malcolmocean 13 March 2015 04:15AM

As part of a philosophy course I'm currently taking called Intelligence in Machines, Humans, and Other Animals, I have to write a <3000w essay on a topic related to intelligence. The description is here, but I've copied the important details below. I figured I might as well solicit suggestions for things to research. Realistically, I am likely to optimize the essay more for passing the course than for rigour though, so if you're expecting a very thorough review of something then you may be disappointed. But I suspect that it will still be at least an interesting jumping-off point.

Essay Topics: pick one from A, B, or C

A. Compare intelligence in machines, humans, and other animals with respect to one of the following topics. Feel free to narrow the topic down to some more specific issue, and to consider specific machines, animals, and human capacities.

You must pick a completely different topic from your first essay - I've kept track. For example, if you wrote on one kind of imagery, you can't write on another kind of imagery.

  1. Perception
  2. Imagery
  3. Problem solving
  4. Learning (I did this for my first essay)
  5. Analogy
  6. Emotion
  7. Consciousness
  8. Action
  9. Language
  10. Creativity
  11. The self

How to narrow down the topic

After choosing one of the 11 topics, you can narrow it down to particular aspects and entitites (human, computer, animal).

For example, you could narrow perception down to sound, the computer down to SIRI, and the animal down to dogs.

Imagery could be narrowed down to visual, auditory, etc.

Learning could be narrowed down to supervised or unsupervised, or to teaching.

Analogy could be narrowed down to intelligence test type analogies (A is to B as C is to what?).

Emotion could be narrowed down to empathy.

Etc.

Edited to add: Note that these are pretty squirrellable. E.g. Last time I took "Learning" and used it to talk about (recursive) self-improvement in machines and humans (planning to post this at some point). So feel free to propose something even if you only have a vague notion of how it would fit into one of the categories.

One constraint: I need to be able to ask some sort of question and then produce evidence towards either side of it, i.e. it can't just be a review of the topic. But this too can be pretty vague; in my last essay I did "are humans or machines better suited for self-improvement?", concluding "humans for now, ultimately machines".

Announcing the Complice Less Wrong Study Hall

53 malcolmocean 02 March 2015 11:37PM

(If you're familiar with the backstory of the LWSH, you can skip to paragraph 5. If you just want the link to the chat, click here: LWSH on Complice)

The Less Wrong Study Hall was created as a tinychat room in March 2013, following Mqrius and ShannonFriedman's desire to create a virtual context for productivity. In retrospect, I think it's hilarious that a bunch of the comments ended up being a discussion of whether LW had the numbers to get a room that consistently had someone in it. The funny part is that they were based around the assumption that people would spend about 1h/day in it.

Once it was created, it was so effective that people started spending their entire day doing pomodoros (32 minutes work + 8 minutes break) in the LWSH, and now often even stay logged in while doing chores away from their computers, just for the cadence of focus and the sense of company. So there's almost always someone there, and often 5-10 people.

A week in, a call was put out for volunteers to program a replacement for the much-maligned tinychat. As it turns out though, video chat is a hard problem.

So nearly 2 years later, people are still using the tinychat.

But a few weeks ago, I discovered that you can embed the tinychat applet into an arbitrary page. I immediately set out to integrate LWSH into Complice, the productivity app I've been building for over a year, which counts many rationalists among its alpha & beta users.

The focal point of Complice is its today page, which consists of a list of everything you're planning to accomplish that day, colorized by goal. Plus a pomodoro timer. My habit for a long time has been to have this open next to LWSH. So what I basically did was integrate these two pages. On the left, you have a list of your own tasks. On the right, a list of other users in the room, with whatever task they're doing next. Then below all of that, the chatroom.

(Something important to note: I'm not planning to point existing Complice users, who may not be LWers, at the LW Study Hall. Any Complice user can create their own coworking room by going to complice.co/createroom)

With this integration, I've solved many of the core problems that people wanted addressed for the study hall:

  • an actual ding sound beyond people typing in the chat
  • synchronized pomodoro time visibility
  • pomos that automatically start, so breaks don't run over
  • Intentions — what am I working on this pomo?
  • a list of what other users are working on
  • the ability to show off how many pomos you've done
  • better welcoming & explanation of group norms
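The "synchronized pomodoro time visibility" and auto-starting pomos can be sketched with a simple idea (this is my own hypothetical illustration, not Complice's actual implementation): if every client derives the current phase from shared wall-clock time, all timers agree without any coordination.

```python
# Hypothetical sketch of a synchronized pomodoro clock: everyone computes
# the current phase from minutes elapsed since a shared room epoch, so
# pomos "automatically start" and all users see the same countdown.

WORK_MIN, BREAK_MIN = 32, 8
CYCLE_MIN = WORK_MIN + BREAK_MIN  # one full 40-minute cycle

def phase_at(minutes_since_epoch):
    """Return (phase, minutes remaining in that phase)."""
    offset = minutes_since_epoch % CYCLE_MIN
    if offset < WORK_MIN:
        return ("work", WORK_MIN - offset)
    return ("break", CYCLE_MIN - offset)

print(phase_at(0))   # start of a work pomo
print(phase_at(35))  # partway into the break
```

Because the phase is a pure function of the clock, a user who joins mid-cycle lands in the same pomo as everyone else, and breaks can't run over.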

There are a couple other requested features that I can definitely solve but decided could come after this launch:

  • rooms with different pomodoro durations
  • member profiles
  • the ability to precommit to showing up at a certain time (maybe through Beeminder?!)

The following points were brought up in the Programming the LW Study Hall post or on the List of desired features on the github/nnmm/lwsh wiki, but can't be fixed without replacing tinychat:

  • efficient with respect to bandwidth and CPU
  • page layout with videos lined up down the left for use on the side of monitors
  • chat history
  • encryption
  • everything else that generally sucks about tinychat

It's also worth noting that if you were to think of the entirety of Complice as an addition to LWSH... well, it would definitely look like feature creep, but at any rate there would be several other notable improvements:

  • daily emails prompting you to decide what you're going to do that day
  • a historical record of what you've done, with guided weekly, monthly, and yearly reviews
  • optional accountability partner who gets emails with what you've done every day (the LWSH might be a great place to find partners!)

So, if you haven't clicked the link already, check out: complice.co/room/lesswrong

(This article posted to Main because that's where the rest of the LWSH posts are, and this represents a substantial update.)

[Link] X-Risk: NASA Study Concludes When Civilization Will End (a couple decades)

0 malcolmocean 20 March 2014 06:45AM

http://www.policymic.com/articles/85541/nasa-study-concludes-when-civilization-will-end-and-it-s-not-looking-good-for-us

There are a lot of people here concerned with existential risk. We tend to talk mostly about AI, but there are some other concerns that are pretty huge, and without which we might not have a chance to get to any kind of AGI. I thought I would post this here in hopes of sparking a discussion about (a) the quality of the study in question and (b) what we might do if we conclude that it's worth doing something about.

My take: I'm currently taking a university course that covers modelling energy usage, and the models there are similar. The NASA study is strongly related to a concept called EROEI, or "energy returned on energy invested". Back in the early oil days, we could invest 1 barrel of oil and get 100 back. Now we get 3-10 back, depending on the source. The impact of this is key: when high-EROEI resources exist, more of the population can do non-energy-generating things. When EROEI is close to 1, everyone has to farm: virtually all energy needs to go back into harvesting energy.

The concern about something like peak oil isn't necessarily the amount of oil but the rate at which we can procure it. At a certain stage, all of our infrastructure requires more energy than we can find just to operate and maintain.
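
The arithmetic behind this is worth making explicit. Here's a toy calculation (my own illustration, not from the NASA study): the fraction of gross energy left over for everything other than energy production is 1 − 1/EROEI, which collapses toward zero as EROEI approaches 1.

```python
# Toy model: how much of society's energy remains for non-energy activity
# at a given EROEI (energy returned per unit of energy invested)?

def surplus_fraction(eroei):
    """Fraction of gross energy not consumed by energy production itself."""
    if eroei <= 0:
        raise ValueError("EROEI must be positive")
    return max(0.0, 1.0 - 1.0 / eroei)

for eroei in [100, 10, 3, 1.5, 1]:
    print(f"EROEI {eroei:>5}: {surplus_fraction(eroei):.0%} left for everything else")
```

At EROEI 100 (early oil), 99% of energy is surplus; at EROEI 3, only about two-thirds is; at EROEI 1, nothing is — the "everyone has to farm" regime.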

More information on EROEI can be found in chapter 3 of Searching for a Miracle.

View more: Next