

A Year of Spaced Repetition Software in the Classroom

91 tanagrabeast 04 July 2015 10:30PM

Last year, I asked LW for some advice about spaced repetition software (SRS) that might be useful to me as a high school teacher. With said advice came a request to write a follow-up after I had accumulated some experience using SRS in the classroom. This is my report.

Please note that this was not a scientific experiment to determine whether SRS "works." Prior studies are already pretty convincing on this point and I couldn't think of a practical way to run a control group or "blind" myself. What follows is more of an informal debriefing for how I used SRS during the 2014-15 school year, my insights for others who might want to try it, and how the experience is changing how I teach.

Summary

SRS can raise student achievement even with students who won't use the software on their own, and even with frequent disruptions to the study schedule. Gains are most apparent with the already high-performing students, but are also meaningful for the lowest students. Deliberate efforts are needed to get student buy-in, and getting the most out of SRS may require changes in course design.

The software

After looking into various programs, including the game-like Memrise, and even writing my own simple SRS, I ultimately went with Anki for its multi-platform availability, cloud sync, and ease-of-use. I also wanted a program that could act as an impromptu catch-all bin for the 2,000+ cards I would be producing on the fly throughout the year. (Memrise, in contrast, really needs clearly defined units packaged in advance).

The students

I teach 9th and 10th grade English at an above-average suburban American public high school in a below-average state. Mine are the lower "required level" students at a school with high enrollment in honors and Advanced Placement classes. Generally speaking, this means my students are mostly not self-motivated, are only very weakly motivated by grades, and will not do anything school-related outside of class no matter how much it would be in their interest to do so. There are, of course, plenty of exceptions, and my students span an extremely wide range of ability and apathy levels.

The procedure

First, what I did not do. I did not make Anki decks, assign them to my students to study independently, and then quiz them on the content. With honors classes I taught in previous years I think that might have worked, but I know my current students too well. Only about 10% of them would have done it, and the rest would have blamed me for their failing grades—with some justification, in my opinion.

Instead, we did Anki together, as a class, nearly every day.

As initial setup, I created a separate Anki profile for each class period. With a third-party add-on for Anki called Zoom, I enlarged the display font sizes to be clearly legible on the interactive whiteboard at the front of my room.

Nightly, I wrote up cards to reinforce new material and integrated them into the deck in time for the next day's classes. This averaged about 7 new cards per lesson period. These cards came in many varieties, but the three main types were:

  1. Concepts and terms, often with reversed companion cards, sometimes supplemented with "what is this an example of" scenario cards.
  2. Vocabulary, 3 cards per word: word/def, reverse, and a fill-in-the-blank example sentence.
  3. Grammar, usually in the form of "What change(s), if any, does this sentence need?" Alternative cards had different permutations of the sentence.

Weekly, I updated the deck to the cloud for self-motivated students wishing to study on their own.

Daily, I led each class in an Anki review of new and due cards for an average of 8 minutes per study day, usually as our first activity, at a rate of about 3.5 cards per minute. As each card appeared on the interactive whiteboard, I would read it out loud while students willing to share the answer raised their hands. Depending on the card, I might offer additional time to think before calling on someone to answer. Depending on their answer, and my impressions of the class as a whole, I might elaborate or offer some reminders, mnemonics, etc. I would then quickly poll the class on how they felt about the card by having them show a color by way of a small piece of card-stock divided into green, red, yellow, and white quadrants. Based on my own judgment (informed only partly by the poll), I would choose and press a response button in Anki, determining when we should see that card again.
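
(For readers who haven't used Anki: those response buttons feed a scheduler in the SM-2 family. Below is a rough sketch, in Python, of how a single review updates a card's next interval; the constants are illustrative simplifications, not Anki's exact parameters.)

    # Simplified SM-2-style interval update. Constants are illustrative,
    # not Anki's exact values.
    def next_interval(interval_days, ease, answer):
        """Return (new_interval_days, new_ease) after pressing one response button."""
        if answer == "again":    # forgotten: see it again soon, ease drops
            return 1.0, max(1.3, ease - 0.20)
        if answer == "hard":     # barely recalled: interval grows slowly
            return interval_days * 1.2, max(1.3, ease - 0.15)
        if answer == "good":     # recalled: interval grows by the ease factor
            return interval_days * ease, ease
        if answer == "easy":     # trivially recalled: grows faster, ease rises
            return interval_days * ease * 1.3, ease + 0.15
        raise ValueError(answer)

    # A card last seen 4 days ago, answered "good" at the default ease of 2.5,
    # comes back in about 10 days.
    print(next_interval(4, 2.5, "good"))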

End-of-year summary for one of my classes

[Data shown is from one of my five classes. We didn't start using Anki until a couple weeks into the school year.]

Opportunity costs

8 minutes is a significant portion of a 55 minute class period, especially for a teacher like me who fills every one of those minutes. Something had to give. For me, I entirely cut some varieties of written vocab reinforcement, and reduced the time we spent playing the team-based vocab/term review game I wrote for our interactive whiteboards some years ago. To a lesser extent, I also cut back on some oral reading comprehension spot-checks that accompany my whole-class reading sessions. On balance, I think Anki was a much better way to spend the time, but it's complicated. Keep reading.

Whole-class SRS not ideal

Every student is different, and would get the most out of having a personal Anki profile determine when they should see each card. Also, most individuals could study many more cards per minute on their own than we averaged doing it together. (To be fair, a small handful of my students did use the software independently, judging from Ankiweb download stats.)

Getting student buy-in

Before we started using SRS I tried to sell my students on it with a heartfelt, over-prepared 20 minute presentation on how it works and the superpowers to be gained from it. It might have been a waste of time. It might have changed someone's life. Hard to say.

As for the daily class review, I induced engagement partly through participation points that were part of the final semester grade, and which students knew I tracked closely. Raising a hand could earn a kind of bonus currency, but was never required—unlike looking up front and showing colors during polls, which I insisted on. When I thought students were just reflexively holding up the same color and zoning out, I would sometimes spot check them on the last card we did and penalize them if warranted.

But because I know my students are not strongly motivated by grades, I think the most important influence was my attitude. I made it a point to really turn up the charm during review and play the part of the engaging game show host. Positive feedback. Coaxing out the lurkers. Keeping that energy up. Being ready to kill and joke about bad cards. Reminding classes how awesome they did on tests and assignments because they knew their Anki stuff.

(This is a good time to point out that the average review time per class period stabilized at about 8 minutes because I tried to end reviews before student engagement tapered off too much, which typically started happening at around the 6-7 minute mark. Occasional short end-of-class reviews mostly account for the difference.)

I also got my students more on the Anki bandwagon by showing them how this was directly linked to reduced note-taking requirements. If I could trust that they would remember something through Anki alone, why waste time waiting for them to write it down? They were unlikely to study from those notes anyway. And if they aren't looking down at their paper, they'll be paying more attention to me. I better come up with more cool things to tell them!

Making memories

Everything I had read about spaced repetition suggested it was a great reinforcement tool but not a good way to introduce new material. With that in mind, I tried hard to find or create memorable images, examples, mnemonics, and anecdotes that my Anki cards could become hooks for, and to get those cards into circulation as soon as possible. I even gave this method a mantra: "vivid memory, card ready".

When a student during review raised their hand, gave me a pained look, and said, "like that time when...." or "I can see that picture of..." as they struggled to remember, I knew I had done well. (And I would always wait a moment, because they would usually get it.)

Baby cards need immediate love

Unfortunately, if the card wasn't introduced quickly enough—within a day or two of the lesson—the entire memory often vanished and had to be recreated, killing the momentum of our review. This happened far too often—not because I didn't write the card soon enough (I stayed really on top of that), but because it didn't always come up for study soon enough. There were a few reasons for this:

  1. We often had too many due cards to get through in one session, and by default Anki puts new cards behind due ones.
  2. By default, Anki only introduces 20 new cards in one session (I soon uncapped this).
  3. Some cards were in categories that I gave lower priority to.

Two obvious cures for this problem:

  1. Make fewer cards. (I did get more selective as the year went on.)
  2. Have all cards prepped ahead of time and introduce new ones at the end of the class period they go with. (For practical reasons, not the least of which was the fact that I didn't always know what cards I was making until after the lesson, I did not do this. I might be able to next year.)

Days off suck

SRS is meant to be used every day. When you take weekends off, you get a backlog of due cards. Not only do my students take every weekend and major holiday off (slackers), they have a few 1-2 week vacations built into the calendar. Coming back from a week's vacation means a 9-day backlog (due to the weekends bookending it). There's no good workaround for students who won't study on their own. The best I could do was run longer or multiple Anki sessions on return days to try to catch up with the backlog. It wasn't enough. The "caught up" condition was not normal for most classes at most points during the year, but rather something to aspire to and occasionally applaud ourselves for reaching. Some cards spent weeks or months on the bottom of the stack. Memories died. Baby cards emerged stillborn. Learning was lost.

Needless to say, the last weeks of the school year also had a certain silliness to them. When the class will never see the card again, it doesn't matter whether I push the button that says 11 days or the one that says 8 months. (So I reduced polling and accelerated our cards/minute rate.)

Never before SRS did I fully appreciate the loss of learning that must happen every summer break.

Triage

I kept each course's master deck divided into a few large subdecks. This was initially for organizational reasons, but I eventually started using it as a prioritizing tool. This happened after a curse-worthy discovery: if you tell Anki to review a deck made from subdecks, due cards from subdecks higher up in the stack are shown before cards from decks listed below, no matter how overdue they might be. From that point on, when we were backlogged (most days), I would specifically review the concept/terminology subdeck for the current semester before any other subdecks, as these were my highest priority.

On a couple of occasions, I also used Anki's study deck tools to create temporary decks of especially high-priority cards.

Seizing those moments

Veteran teachers start acquiring a sense of when it might be a good time to go off book and teach something that isn't in the unit, and maybe not even in the curriculum. Maybe it's teaching exactly the right word to describe a vivid situation you're reading about, or maybe it's advice on what to do in a certain type of emergency that nearly happened. As the year progressed, I found myself humoring my instincts more often because of a new confidence that I can turn an impressionable moment into a strong memory and lock it down with a new Anki card. I don't even care if it will ever be on a test. This insight has me questioning a great deal of what I thought I knew about organizing a curriculum. And I like it.

A lifeline for low performers

An accidental discovery came from having written some cards that were, it was immediately obvious to me, much too easy. I was embarrassed to even be reading them out loud. Then I saw which hands were coming up.

In any class you'll get some small number of extremely low performers who never seem to be doing anything that we're doing, and, when confronted, deny that they have any ability whatsoever. Some of the hands I was seeing were attached to these students. And you better believe I called on them.

It turns out that easy cards are really important because they can give wins to students who desperately need them. Knowing a 6th grade level card in a 10th grade class is no great achievement, of course, but the action takes what had been negative morale and nudges it upward. And it can trend. I can build on it. A few of these students started making Anki the thing they did in class, even if they ignored everything else. I can confidently name one student I'm sure passed my class only because of Anki. Don't get me wrong—he just barely passed. Most cards remained over his head. Anki was no miracle cure here, but it gave him and me something to work with that we didn't have when he failed my class the year before.

A springboard for high achievers

It's not even fair. The lowest students got something important out of Anki, but the highest achievers drank it up and used it for rocket fuel. When people ask who's widening the achievement gap, I guess I get to raise my hand now.

I refuse to feel bad for this. Smart kids are badly underserved in American public schools thanks to policies that encourage staff to focus on that slice of students near (but not at) the bottom—the ones who might just barely be able to pass the state test, given enough attention.

Where my bright students might have been used to high Bs and low As on tests, they were now breaking my scales. You could see it in the multiple choice, but it was most obvious in their writing: they were skillfully working in terminology at an unprecedented rate, and making way more attempts to use new vocabulary—attempts that were, for the most part, successful.

Given the seemingly objective nature of Anki it might seem counterintuitive that the benefits would be more obvious in writing than in multiple choice, but it actually makes sense when I consider that even without SRS these students probably would have known the terms and the vocab well enough to get multiple choice questions right, but might have lacked the confidence to use them on their own initiative. Anki gave them that extra confidence.

A wash for the apathetic middle?

I'm confident that about a third of my students got very little out of our Anki review. They were either really good at faking involvement while they zoned out, or didn't even try to pretend and just took the hit to their participation grade day after day, no matter what I did or who I contacted.

These weren't even necessarily failing students—just the apathetic middle that's smart enough to remember some fraction of what they hear and regurgitate some fraction of that at the appropriate times. Review of any kind holds no interest for them. It's a rerun. They don't really know the material, but they tell themselves that they do, and they don't care if they're wrong.

On the one hand, these students are no worse off with Anki than they would have been with the activities it replaced, and nobody cries when average kids get average grades. On the other hand, I'm not ok with this... but so far I don't like any of my ideas for what to do about it.

Putting up numbers: a case study

For unplanned reasons, I taught a unit at the start of a quarter that I didn't formally test them on until the end of said quarter. Historically, this would have been a disaster. In this case, it worked out well. For five weeks, Anki was the only ongoing exposure they were getting to that unit, but it proved to be enough. Because I had given the same test as a pre-test early in the unit, I have some numbers to back it up. The test was all multiple choice, with two sections: the first was on general terminology and concepts related to the unit. The second was a much harder reading comprehension section.

As expected, scores did not go up much on the reading comprehension section. Overall reading levels are very difficult to boost in the short term and I would not expect any one unit or quarter to make a significant difference. The average score there rose by 4 percentage points, from 48 to 52%.

Scores in the terminology and concept section were more encouraging. For material we had not covered until after the pre-test, the average score rose by 22 percentage points, from 53 to 75%. No surprise there either, though; it's hard to say how much credit we should give to SRS for that.

But there were also a number of questions about material we had already covered before the pretest. Since this was the earliest material, I might have expected some degradation in performance on the second test. Instead, the already strong average score in that section rose by an additional 3 percentage points, from 82 to 85%. (These numbers are less reliable because of the smaller number of questions, but they tell me Anki at least "locked in" the older knowledge, and may have strengthened it.)

Some other time, I might try reserving a section of content that I teach before the pre-test but don't make any Anki cards for. This would give me a way to compare Anki to an alternative review exercise.

What about formal standardized tests?

I don't know yet. The scores aren't back. I'll probably be shown some "value added" analysis numbers at some point that tell me whether my students beat expectations, but I don't know how much that will tell me. My students were consistently beating expectations before Anki, and the state gave an entirely different test this year because of legislative changes. I'll go back and revise this paragraph if I learn anything useful.

Those discussions...

If I'm trying to acquire a new skill, one of the first things I try to do is listen to skilled practitioners of that skill talk about it to each other. What are the terms-of-art? How do they use them? What does this tell me about how they see their craft? Their shorthand is a treasure trove of crystallized concepts; once I can use it the same way they do, I find I'm working at a level of abstraction much closer to theirs.

Similarly, I was hoping Anki could help make my students more fluent in the subject-specific lexicon that helps you score well in analytical essays. After introducing a new term and making the Anki card for it, I made extra efforts to use it conversationally. I used to shy away from that because so many students would have forgotten it immediately and tuned me out for not making any sense. Not this year. Once we'd seen the card, I used the term freely, with only the occasional reminder of what it meant. I started using multiple terms in the same sentence. I started talking about writing and analysis the way my fellow experts do, and so invited them into that world.

Even though I was already seeing written evidence that some of my high performers had assimilated the lexicon, the high quality discussions of these same students caught me off guard. You see, I usually dread whole-class discussions with non-honors classes because good comments are so rare that I end up dejectedly spouting all the insights I had hoped they could find. But by the end of the year, my students had stepped up.

I think what happened here was, as with the writing, as much a boost in confidence as a boost in fluency. Whatever it was, they got into some good discussions where they used the terminology and built on it to say smarter stuff.

Don't get me wrong. Most of my students never got to that point. But on average even small groups without smart kids had a noticeably higher level of discourse than I am used to hearing when I break up the class for smaller discussions.

Limitations

SRS is inherently weak when it comes to the abstract and complex. No card I've devised enables a student to develop a distinctive authorial voice, or write essay openings that reveal just enough to make the reader curious. Yes, you can make cards about strategies for this sort of thing, but these were consistently my worst cards—the overly difficult "leeches" that I eventually suspended from my decks.

A less obvious limitation of SRS is that students with a very strong grasp of a concept often fail to apply that knowledge in more authentic situations. For instance, they may know perfectly well the difference between "there", "their", and "they're", but never pause to think carefully about whether they're using the right one in a sentence. I am very open to suggestions about how I might train my students' autonomous "System 1" brains to have "interrupts" for that sort of thing... or even just a reflex to go back and check after finishing a draft.

Moving forward

I absolutely intend to continue using SRS in the classroom. Here's what I intend to do differently this coming school year:

  • Reduce the number of cards by about 20%, to maybe 850-950 for the year in a given course, mostly by reducing the number of variations on some overexposed concepts.
  • Be more willing to add extra Anki study sessions to stay better caught-up with the deck, even if this means my lesson content doesn't line up with class periods as neatly.
  • Be more willing to press the red button on cards we need to re-learn. I think I was too hesitant here because we were rarely caught up as it was.
  • Rework underperforming cards to be simpler and more fun.
  • Use more simple cloze deletion cards. I only had a few of these, but they worked better than I expected for structured idea sets like, "characteristics of a tragic hero".
  • Take a less linear and more opportunistic approach to introducing terms and concepts.
  • Allow for more impromptu discussions where we bring up older concepts in relevant situations and build on them.
  • Shape more of my lessons around the "vivid memory, card ready" philosophy.
  • Continue to reduce needless student note-taking.
  • Keep a close eye on 10th grade students who had me for 9th grade last year. I wonder how much they retained over the summer, and I can't wait to see what a second year of SRS will do for them.

Suggestions and comments very welcome!

Experiences in applying "The Biodeterminist's Guide to Parenting"

60 juliawise 17 July 2015 07:19PM

I'm posting this because LessWrong was very influential on how I viewed parenting, particularly the emphasis on helping one's brain work better. In this context, creating and influencing another person's brain is an awesome responsibility.


It turned out to be a lot more anxiety-provoking than I expected. I don't think that's necessarily a bad thing, as the possibility of screwing up someone's brain should make a parent anxious, but it's something to be aware of. I've heard some blithe "Rational parenting could be a very high-impact activity!" statements from childless LWers who may be interested to hear some experiences in actually applying that.


One thing that really scared me about trying to raise a child with the healthiest-possible brain and body was the possibility that I might not love her if she turned out to not be smart. 15 months in, I'm no longer worried. Evolution has been very successful at producing parents and children that love each other despite their flaws, and our family is no exception. Our daughter Lily seems to be doing fine, but if she turns out to have disabilities or other problems, I'm confident that we'll roll with the punches.

 

Cross-posted from The Whole Sky.

 


Before I got pregnant, I read Scott Alexander's (Yvain's) excellent Biodeterminist's Guide to Parenting and was so excited to have this knowledge. I thought how lucky my child would be to have parents who knew and cared about how to protect her from things that would damage her brain.

Real life, of course, got more complicated. It's one thing to intend to avoid neurotoxins, but another to arrive at the grandparents' house and find they've just had ant poison sprayed. What do you do then?


Here are some tradeoffs Jeff and I have made between things that are good for children in one way but bad in another, or things that are good for children but really difficult or expensive.


Germs and parasites


The hygiene hypothesis states that lack of exposure to germs and parasites increases risk of auto-immune disease. Our pediatrician recommended letting Lily play in the dirt for this reason.


While exposure to animal dander and pollution increases the risk of asthma later in life, it seems that being exposed to these in the first year of life actually protects against asthma. Apparently if you're going to live in a house with roaches, you should do it in the first year or not at all.


Except some stuff in dirt is actually bad for you.


Scott writes:

Parasite-infestedness of an area correlates with national IQ at about r = -0.82. The same is true of US states, with a slightly reduced correlation coefficient of -0.67 (p<0.0001). . . . When an area eliminates parasites (like the US did for malaria and hookworm in the early 1900s) the IQ for the area goes up at about the right time.


Living with cats as a child seems to increase risk of schizophrenia, apparently via toxoplasmosis. But in order to catch toxoplasmosis from a cat, you have to eat its feces during the two weeks after it first becomes infected (which it’s most likely to do by eating birds or rodents carrying the disease). This makes me guess that most kids get it through tasting a handful of cat litter, dirt from the yard, or sand from the sandbox rather than simply through cat ownership. We live with indoor cats who don’t seem to be mousers, so I’m not concerned about them giving anyone toxoplasmosis. If we build Lily a sandbox, we’ll keep it covered when not in use.


The evidence is mixed about whether infections like colds during the first year of life increase or decrease your risk of asthma later. After the newborn period, we defaulted to being pretty casual about germ exposure.


Toxins in buildings


Our experiences with lead. Our experiences with mercury.


In some areas, it’s not that feasible to live in a house with zero lead. We live in Boston, where 87% of the housing was built before lead paint was banned. Even in a new building, we’d need to go far out of town before reaching soil that wasn’t near where a lead-painted building had been.


It is possible to do some renovations without exposing kids to lead. Jeff recently did some demolition of walls with lead paint, very carefully sealed off and cleaned up, while Lily and I spent the day elsewhere. Afterwards her lead level was no higher than it had been.


But Jeff got serious lead poisoning as a toddler while his parents did major renovations on their old house. If I didn’t think I could keep the child away from the dust, I wouldn’t renovate.


Recently a house across the street from us was gutted, with workers throwing debris out the windows and creating big plumes of dust (presumably lead-laden) that blew all down the street. Later I realized I should have called city building inspection services, which would have at least made them carry the debris into the dumpster instead of throwing it from the second story.


Floor varnish releases formaldehyde and other nasties as it cures. We kept Lily out of the house for a few weeks after Jeff redid the floors. We found it worthwhile to pay rent at our previous house in order to not have to live in the new house while this kind of work was happening.

 

Pressure-treated wood was treated with arsenic and chromium until around 2004 in the US. It has a greenish tint, though this may have faded with time. Playing on playsets or decks made of such wood increases children's cancer risk. It should not be used for furniture (I thought this would be obvious, but apparently it wasn't to some of my handyman relatives).


I found it difficult to know how to deal with fresh paint and other fumes in my building at work while I was pregnant. Women of reproductive age have a heightened sense of smell, and many pregnant women have heightened aversion to smells, so you can literally smell things some of your coworkers can’t (or don’t mind). The most critical period of development is during the first trimester, when most women aren’t telling the world they’re pregnant (because it’s also the time when a miscarriage is most likely, and if you do lose the pregnancy you might not want to have to tell the world). During that period, I found it difficult to explain why I was concerned about the fumes from the roofing adhesive being used in our building. I didn’t want to seem like a princess who thought she was too good to work in conditions that everybody else found acceptable. (After I told them I was pregnant, my coworkers were very understanding about such things.)


Food


Recommendations usually focus on what you should eat during pregnancy, but obviously children’s brain development doesn’t stop there. I’ve opted to take precautions with the food Lily and I eat for as long as I’m nursing her.


Claims that pesticide residues are poisoning children scare me, although most scientists seem to think the paper cited is overblown. Other sources say the levels of pesticides in conventionally grown produce are fine. We buy organic produce at home but eat whatever we’re served elsewhere.


I would love to see a study with families randomly selected to receive organic produce for the first 8 years of the kids’ lives, then looking at IQ and hyperactivity. But no one’s going to do that study because of how expensive 8 years of organic produce would be.

The Biodeterminist’s Guide doesn’t mention PCBs in the section on fish, but fish (particularly farmed salmon) are a major source of these pollutants. They don’t seem to be as bad as mercury, but are neurotoxic. Unfortunately their half-life in the body is around 14 years, so if you have even a vague idea of getting pregnant ever in your life you shouldn’t be eating farmed salmon (or Atlantic/farmed salmon, bluefish, wild striped bass, white and Atlantic croaker, blackback or winter flounder, summer flounder, or blue crab).


I had the best intentions of eating lots of the right kind of high-omega-3, low-pollutant fish during and after pregnancy. Unfortunately, fish was the only food I developed an aversion to. Now that Lily is eating food on her own, we tried several sources of omega-3 and found that kippered herring was the only success. Lesson: it’s hard to predict what foods kids will eat, so keep trying.


In terms of hassle, I underestimated how long I would be “eating for two” in the sense that anything I put in my body ends up in my child’s body. Counting pre-pregnancy (because mercury has a half-life of around 50 days in the body, so sushi you eat before getting pregnant could still affect your child), pregnancy, breastfeeding, and presuming a second pregnancy, I’ll probably spend about 5 solid years feeding another person via my body, sometimes two children at once. That’s a long time in which you have to consider the effect of every medication, every cup of coffee, every glass of wine on your child. There are hardly any medications considered completely safe during pregnancy and lactation; most things are in Category C, meaning there’s some evidence from animal trials that they may be bad for human children.


Fluoride


Too much fluoride is bad for children’s brains. The CDC recently recommended lowering fluoride levels in municipal water (though apparently because of concerns about tooth discoloration more than neurotoxicity). Around the same time, the American Dental Association began recommending the use of fluoride toothpaste as soon as babies have teeth, rather than waiting until they can rinse and spit.


Cavities are actually a serious problem even in baby teeth, because of the pain and possible infection they cause children. Pulling them messes up the alignment of adult teeth. Drilling on children too young to hold still requires full anesthesia, which is dangerous itself.


But Lily isn’t particularly at risk for cavities. 20% of children get a cavity by age six, and they are disproportionately poor, African-American, and particularly Mexican-American children (presumably because of different diet and less ability to afford dentists). 75% of cavities in children under 5 occur in 8% of the population.


We decided to have Lily brush without toothpaste, avoid juice and other sugary drinks, and see the dentist regularly.


Home pesticides


One of the most commonly applied insecticides makes kids less smart. This isn’t too surprising, given that it kills insects by disabling their nervous system. But it’s not something you can observe on a small scale, so it’s not surprising that the exterminator I talked to brushed off my questions with “I’ve never heard of a problem!”


If you get carpenter ants in your house, you basically have to choose between poisoning them or letting them structurally damage the house. We’ve only seen a few so far, but if the problem progresses, we plan to:

1) remove any rotting wood in the yard where they could be nesting

2) have the perimeter of the building sprayed

3) place gel bait in areas kids can’t access

4) only then spray poison inside the house.


If we have mice we’ll plan to use mechanical traps rather than poison.


Flame retardants


Starting in the 1970s, California required a high degree of flame resistance from furniture. This basically meant that US manufacturers sprayed flame retardant chemicals on anything made of polyurethane foam, such as sofas, rug pads, nursing pillows, and baby mattresses.

The law recently changed, due to growing acknowledgement that the carcinogenic and neurotoxic chemicals were more dangerous than the fires they were supposed to be preventing. Even firefighters opposed the use of the flame retardants, because when people die in fires it’s usually from smoke inhalation rather than burns, and firefighters don’t want to breathe the smoke from your toxic sofa (which will eventually catch fire even with the flame retardants).


We’ve opted to use furniture from companies that have stopped using flame retardants (like Ikea and others listed here). Apparently futons are okay if they’re stuffed with cotton rather than foam. We also have some pre-1970s furniture that tested clean for flame retardants. You can get foam samples tested for free.


The main vehicle for children ingesting the flame retardants is that they settle into dust on the floor, and children crawl around in the dust. If you don’t want to get rid of your furniture, frequent damp-mopping would probably help.


The standards for mattresses are so stringent that the chemical sprays aren’t generally used, and instead most mattresses are wrapped in a flame-resistant barrier which apparently isn’t toxic. I contacted the companies that made our mattresses and they’re fine.


Ratings for chemical safety of children’s car seats here.


Thoughts on IQ


A lot of people, when I start talking like this, say things like “Well, I lived in a house with lead paint/played with mercury/etc. and I’m still alive.” And yes, I played with mercury as a child, and Jeff is still one of the smartest people I know even after getting acute lead poisoning as a child.

But I do wonder if my mind would work a little better without the mercury exposure, and if Jeff would have had an easier time in school without the hyperactivity (a symptom of lead exposure). Given the choice between a brain that works a little better and one that works a little worse, who wouldn’t choose the one that works better?


We’ll never know how an individual’s nervous system might have been different with a different childhood. But we can see population-level effects. The Environmental Protection Agency, for example, is fine with calculating the expected benefit of making coal plants stop releasing mercury by looking at the expected gains in terms of children’s IQ and increased earnings.


Scott writes:

A 15 to 20 point rise in IQ, which is a little more than you get from supplementing iodine in an iodine-deficient region, is associated with half the chance of living in poverty, going to prison, or being on welfare, and with only one-fifth the chance of dropping out of high-school (“associated with” does not mean “causes”).


Salkever concludes that for each lost IQ point, males experience a 1.93% decrease in lifetime earnings and females experience a 3.23% decrease. If Lily would earn about what I do, saving her one IQ point would save her $1600 a year or $64000 over her career. (And that’s not counting the other benefits she and others will reap from her having a better-functioning mind!) I use that for perspective when making decisions. $64000 would buy a lot of the posh prenatal vitamins that actually contain iodine, or organic food, or alternate housing while we’re fixing up the new house.
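
(The arithmetic behind those figures, spelled out in a few lines of Python. The ~$50,000 salary is just the rough number implied by the $1600/year estimate, and the 40-year career length is my own assumption.)

    # Back-of-the-envelope version of the calculation above.
    salary = 50_000             # assumed annual earnings (implied by the $1600/year figure)
    pct_per_iq_point = 0.0323   # Salkever's estimate for females
    career_years = 40           # assumed career length

    per_year = salary * pct_per_iq_point   # ~ $1,600 per IQ point per year
    lifetime = per_year * career_years     # ~ $64,000 over a career
    print(round(per_year), round(lifetime))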


Conclusion


There are times when Jeff and I prioritize social relationships over protecting Lily from everything that might harm her physical development. It’s awkward to refuse to go to someone’s house because of the chemicals they use, or to refuse to eat food we’re offered. Social interactions are good for children’s development, and we value those as well as physical safety. And there are times when I’ve had to stop being so careful because I was getting paralyzed by anxiety (literally perched in the rocker with the baby trying not to touch anything after my in-laws scraped lead paint off the outside of the house).


But we also prioritize neurological development more than most parents, and we hope that will have good outcomes for Lily.

We Should Introduce Ourselves Differently

53 NancyLebovitz 18 May 2015 08:48PM

I told an intelligent, well-educated friend about Less Wrong, so she googled, and got "Less Wrong is an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking." and gave up immediately because she'd never heard of the biases.

While hers might not be the best possible attitude, I can't see that we win anything by driving people away with obscure language.

Possible improved introduction: "Less Wrong is a community for people who would like to think more clearly in order to improve their own and other people's lives, and to make major disasters less likely."

Top 9+2 myths about AI risk

43 Stuart_Armstrong 29 June 2015 08:41PM

Following some somewhat misleading articles quoting me, I thought I’d present the top 10 myths about the AI risk thesis:

  1. That we’re certain AI will doom us. Certainly not. It’s very hard to be certain of anything involving a technology that doesn’t exist; we’re just claiming that the probability of AI going bad isn’t low enough that we can ignore it.
  2. That humanity will survive, because we’ve always survived before. Many groups of humans haven’t survived contact with more powerful intelligent agents. In the past, those agents were other humans; but they need not be. The universe does not owe us a destiny. In the future, something will survive; it need not be us.
  3. That uncertainty means that you’re safe. If you’re claiming that AI is impossible, or that it will take countless decades, or that it’ll be safe... you’re not being uncertain, you’re being extremely specific about the future. “No AI risk” is certain; “Possible AI risk” is where we stand.
  4. That Terminator robots will be involved. Please? The threat from AI comes from its potential intelligence, not from its ability to clank around slowly with an Austrian accent.
  5. That we’re assuming the AI is too dumb to know what we’re asking it. No. A powerful AI will know what we meant to program it to do. But why should it care? And if we could figure out how to program “care about what we meant to ask”, well, then we’d have safe AI.
  6. That there’s one simple trick that can solve the whole problem. Many people have proposed that one trick. Some of them could even help (see Holden’s tool AI idea). None of them reduce the risk enough to relax – and many of the tricks contradict each other (you can’t design an AI that’s both a tool and socialising with humans!).
  7. That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and abilities. And it’s risk aware AI researchers that are most likely to figure out how to make safe AI.
  8. That AIs will be more intelligent than us, hence more moral. It’s pretty clear that in humans, high intelligence is no guarantee of morality. Are you really willing to bet the whole future of humanity on the idea that AIs might be different? That in the billions of possible minds out there, there is none that is both dangerous and very intelligent?
  9. That science fiction or spiritual ideas are useful ways of understanding AI risk. Science fiction and spirituality are full of human concepts, created by humans, for humans, to communicate human ideas. They need not apply to AI at all, as these could be minds far removed from human concepts, possibly without a body, possibly with no emotions or consciousness, possibly with many new emotions and a different type of consciousness, etc... Anthropomorphising the AIs could lead us completely astray.
Lists cannot be comprehensive, but they can adapt and grow, adding more important points:
  1. That AIs have to be evil to be dangerous. The majority of the risk comes from indifferent or partially nice AIs. Those that have some goal to follow, with humanity and its desires just getting in the way – using resources, trying to oppose it, or just not being perfectly efficient for its goal.
  2. That we believe AI is coming soon. It might; it might not. Even if AI is known to be in the distant future (which isn't known, currently), some of the groundwork is worth laying now.

 

Wear a Helmet While Driving a Car

37 James_Miller 30 July 2015 04:36PM

A 2006 study showed that “280,000 people in the U.S. receive a motor vehicle induced traumatic brain injury every year” so you would think that wearing a helmet while driving would be commonplace.  Race car drivers wear helmets.  But since almost no one wears a helmet while driving a regular car, you probably fear that if you wore one you would look silly, attract the notice of the police for driving while weird, or the attention of another driver who took your safety attire as a challenge.  (Car drivers are more likely to hit bicyclists who wear helmets.)  

 

The $30+shipping Crasche hat is designed for people who should wear a helmet but don’t.  It looks like a ski cap, but contains concealed lightweight protective material.  People who have signed up for cryonics, such as myself, would get an especially high expected benefit from using a driving helmet because we very much want our brains to “survive” even a “fatal” crash. I have been using a Crasche hat for about a week.

Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time

37 CellBioGuy 26 July 2015 07:38AM

This is the first in a series of posts I am putting together on a personal blog I just started two days ago as a collection of my musings on astrobiology ("The Great A'Tuin" - sorry, I couldn't help it), and will be reposting here.  Much has been written here about the Fermi paradox and the 'great filter'.   It seems to me that going back to a somewhat more basic level of astronomy and astrobiology is extremely informative to these questions, and so this is what I will be doing.  The bloggery is intended for a slightly more general audience than this site (hence much of the content of the introduction) but I think it will be of interest.  Many of the points I will be making are ones I have touched on in previous comments here, but hope to explore in more detail.

This post is a combined version of my first two posts - an introduction, and a discussion of our apparent position in space and time in the universe.  The blog posts may be found at:

http://thegreatatuin.blogspot.com/2015/07/whats-all-this-about.html

http://thegreatatuin.blogspot.com/2015/07/space-and-time.html

Text reproduced below.

 

 



What's all this about?


This blog is to be a repository for the thoughts and analysis I've accrued over the years on the topic of astrobiology, and the place of life and intelligence in the universe.  All my life I've been pulled to the very large and the very small.  Life has always struck me as the single most interesting thing on Earth, with its incredibly fine structure and vast, amazing history and fantastic abilities.  At the same time, the vast majority of what exists is NOT on Earth.  Going up in size from human-scale by the same number of orders of magnitude as you go down to get to a hydrogen atom, you get just about to Venus at its closest approach to Earth - or roughly one millionth of the distance to the nearest star.  The large is much larger than the small is small.  On top of this, we now know that the universe as we know it is much older than life on Earth.  And we know so little of the vast majority of the universe.

There's a strong tendency towards specialization in the sciences.  These days, there pretty much has to be for anybody to get anywhere.  Much of the great foundational work of physics was done on tabletops, and the law of gravitation was derived from data on the motions of the planets taken without the benefit of so much as a telescope.  All the low-hanging fruit has been picked.  To continue to further knowledge of the universe, huge instruments and vast energies are put to bear in astronomy and physics.  Biology is arguably a bit different, but the very complexity that makes living systems so successful and so fascinating to study means that there is so much to study that any one person is often only looking at a very small problem.

This has distinct drawbacks.  The universe does not care for our abstract labels of fields and disciplines - it simply is, at all scales simultaneously at all times and in all places.  When people focus narrowly on their subject of interest, it can prevent them from realizing the implications of their findings on problems usually considered a different field.

It is one of my hopes to try to bridge some gaps between biology and astronomy here.  I very nearly double-majored in biology and astronomy in college; the only thing that prevented this (leading to an astronomy minor) was a bad attitude towards calculus.  As is, I am a graduate student studying basic cell biology at a major research university, who nonetheless keeps in touch with a number of astronomer friends and keeps up with the field as much as possible.  I quite often find that what I hear and read about has strong implications for questions of life elsewhere in the universe, but see so few of these implications actually get publicly discussed. All kinds of information shedding light on our position in space and time, the origins of life, the habitability of large chunks of the universe, the course that biospheres take, and the possible trajectories of intelligences seem to me to be out there unremarked.

It is another of my hopes to try, as much as is humanly possible, to take a step back from the usual narratives about extraterrestrial life and instead work from something closer to first principles: what we actually have observed and have not, what we can observe and what we cannot, and what this leaves open, likely, or unlikely.  In my study of the history of the ideas of extraterrestrial life and extraterrestrial intelligence, all too often these take a back seat to popular narratives of the day.  In the 16th century the notion that the Earth moved in a similar way to the planets gained currency and led to the suppositions that they might be made of similar stuff and that the planets might even be inhabited.  The hot question was, of course, whether their inhabitants would be Christians, and what their relationship with God would be, given the anthropocentric biblical creation stories.  In the late 19th and early 20th century, Lowell's illusory canals on Mars were advanced as evidence for a Martian socialist utopia.  In the 1970s, Carl Sagan waxed philosophical on the notion that contacting old civilizations might teach us how to save ourselves from nuclear warfare.  Today, many people focus on the Fermi paradox - the apparent contradiction that since much of the universe is quite old, extraterrestrials experiencing continuing technological progress and growth should have colonized and remade it in their image long ago and yet we see no evidence of this.  I move that all of these notions have a similar root - inflating the hot concerns and topics of the day to cosmic significance and letting them obscure the actual, scientific questions that can be asked and answered.

Life and intelligence in the universe is a topic worth careful consideration, from as many angles as possible.  Let's get started.

 


Space and Time


Those of an anthropic bent have often made much of the fact that we are only 13.7 billion years into what is apparently an open-ended universe that will expand at an accelerating rate forever.  The era of the stars will last a trillion years; why do we find ourselves at this early date if we assume we are a ‘typical’ example of an intelligent observer?  In particular, this has lent support to lines of argument that perhaps the answer to the ‘great silence’ and lack of astronomical evidence for intelligence or its products in the universe is that we are simply the first.  This notion requires, however, that we are actually early in the universe when it comes to the origin of biospheres and by extension intelligent systems.  It has become clear recently that this is not the case. 

The clearest research I can find illustrating this is the work of Sobral et al, illustrated here http://arxiv.org/abs/1202.3436 via a paper on arxiv  and here http://www.sciencedaily.com/releases/2012/11/121106114141.htm via a summary article.  To simplify what was done, these scientists performed a survey of a large fraction of the sky looking for the emission lines put out by emission nebulae, clouds of gas which glow like neon lights excited by the ultraviolet light of huge, short-lived stars.  The amount of line emission from a galaxy is thus a rough proxy for the rate of star formation – the greater the rate of star formation, the larger the number of large stars exciting interstellar gas into emission nebulae.  The authors use redshift of the known hydrogen emission lines to determine the distance to each instance of emission, and performed corrections to deal with the known expansion rate of the universe.  The results were striking.  Per unit mass of the universe, the current rate of star formation is less than 1/30 of the peak rate they measured 11 gigayears ago.  It has been constantly declining over the history of the universe at a precipitous rate.  Indeed, their preferred model to which they fit the trend converges towards a finite quantity of stars formed as you integrate total star formation into the future to infinity, with the total number of stars that will ever be born only being 5% larger than the number of stars that have been born at this time. 

In summary, 95% of all stars that will ever exist, already exist.  The smallest longest-lived stars will shine for a trillion years, but for most of their history almost no new stars will have formed.
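
(A toy version of that integral, just to show how a steeply declining star-formation rate adds up to a finite total. The exponential decline below is a made-up stand-in for illustration, not Sobral et al.'s actual fitted model.)

    import math

    # Toy model: star formation decays exponentially from its peak ~11 Gyr ago
    # and now runs at 1/30 of that peak. The decay timescale is back-solved from
    # those two numbers; this is NOT the fit used in the paper.
    t_since_peak = 11.0                    # Gyr since peak star formation
    tau = t_since_peak / math.log(30)      # ~3.2 Gyr decay timescale

    formed_so_far = 1 - math.exp(-t_since_peak / tau)   # fraction of all stars ever: ~97%
    still_to_come = math.exp(-t_since_peak / tau)       # fraction yet to form: ~3%

    # Only a few percent more stars will ever form in this toy model --
    # the same ballpark as the ~5% figure quoted above.
    print(still_to_come / formed_so_far)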

At first this seems to reverse the initial conclusion that we came early, suggesting we are instead latecomers.  This is not true, however, when you consider where and when stars of different types can form and the fact that different galaxies have very different histories.  Most galaxies formed via gravitational collapse from cool gas clouds and smaller precursor galaxies quite a long time ago, with a wide variety of properties.  Dwarf galaxies have low masses, and their early bursts of star formation lead to energetic stars with strong stellar winds and lots of ultraviolet light which eventually go supernova.  Their energetic lives and even more energetic deaths appear to usually blast star-forming gases out of their galaxies’ weak gravity or render it too hot to re-collapse into new star-forming regions, quashing their star formation early.  Giant elliptical galaxies, containing many trillions of stars apiece and dominating the cores of galactic clusters, have ample gravity but form with nearly no angular momentum.  As such, most of their cool gas falls straight into their centers, producing an enormous burst of low-heavy-element star formation that uses most of the gas.  The remaining gas is again either blasted into intergalactic space or rendered too hot to recollapse and accrete by a combination of the action of energetic young stars and the infall of gas onto the central black hole producing incredibly energetic outbursts.   (It should be noted that a full 90% of the non-dark-matter mass of the universe appears to be in the form of very thin X-ray-hot plasma clouds surrounding large galaxy clusters, unlikely to condense to the point of star formation via understood processes.)  Thus, most dwarf galaxies and giant elliptical galaxies contributed to the early star formation of the universe but are producing few or no stars today, have very low levels of heavy element rich stars, and are unlikely to make many more going into the future.

Spiral galaxies are different.  Their distinguishing feature is the way they accreted – namely with a large amount of angular momentum.  This allows large amounts of their cool gas to remain spread out away from their centers.  This moderates the rate of star formation, preventing the huge pulses of star formation and black hole activation that exhausts star-forming gas and prevents gas inflow in giant ellipticals.  At the same time, their greater mass than dwarf galaxies ensures that the modest rate of star formation they do undergo does not blast nearly as much matter out of their gravitational pull.  Some does leave over time, and their rate of inflow of fresh cool gas does apparently decrease over time – there are spiral galaxies that do seem to have shut down star formation.  But on the whole a spiral is a place that maintains a modest rate of star formation for gigayears, while heavy elements get more and more enriched over time.  These galaxies thus dominate the star production in the later eras of the universe, and dominate the population of stars produced with large amounts of heavy elements needed to produce planets like ours.  They do settle down slowly over time, and eventually all spirals will either run out of gas or merge with each other to form giant ellipticals, but for a long time they remain a class apart.

Considering this, we’re just about where we would expect a planet like ours (and thus a biosphere-as-we-know-it) to exist in space and on a coarse scale in time.  Let’s look closer at our galaxy now.  Our galaxy is generally agreed to be about 12 billion years old based on the ages of globular clusters, with a few interloper stars here and there that are older and would’ve come from an era before the galaxy was one coherent object.  It will continue forming stars for about another 5 gigayears, at which point it will undergo a merger with the Andromeda galaxy, the nearest large spiral galaxy.  This merger will most likely put an end to star formation in the combined resultant galaxy, which will probably wind up as a large elliptical after one final exuberant starburst.  Our solar system formed about 4.5 gigayears ago, putting its formation pretty much halfway along the productive lifetime of the galaxy (and probably something like 2/3 of the way along its complement of stars produced, since spirals DO settle down with age, though more of its later stars will be metal-rich).

On a stellar and planetary scale, we once again find ourselves where and when we would expect your average complex biosphere to be.  Large stars die fast – star brightness goes up with the 3.5th power of star mass, and thus star lifetime goes down with the 2.5th power of mass.  A 2 solar mass star would be 11 times as bright as the sun and only live about 2 billion years – a time along the evolution of life on Earth before photosynthesis had managed to oxygenate the air and in which the majority of life on earth (but not all – see an upcoming post) could be described as “algae”.  Furthermore, although smaller stars are much more common than larger stars (the Sun is actually larger than over 80% of stars in the universe) stars smaller than about 0.5 solar masses (and thus 0.08 solar luminosities) are usually ‘flare stars’ – possessing very strong convoluted magnetic fields and periodically putting out flares and X-ray bursts that would frequently strip away the ozone and possibly even the atmosphere of an earthlike planet. 
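
(The scaling relations above, made concrete. These are rough main-sequence approximations, good to order of magnitude only.)

    # Rough main-sequence scalings: L ~ M**3.5, so lifetime ~ M / L ~ M**-2.5,
    # normalized to the Sun's ~10-billion-year main-sequence lifetime.
    def luminosity(mass_solar):
        return mass_solar ** 3.5            # in solar luminosities

    def lifetime_gyr(mass_solar):
        return 10.0 * mass_solar ** -2.5    # in billions of years

    print(luminosity(2.0))      # ~11 x solar, as quoted above
    print(lifetime_gyr(2.0))    # ~1.8 billion years
    print(luminosity(0.5))      # ~0.09 x solar -- the "0.08 solar luminosities" ballpark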

All stars also slowly brighten as they age – the sun is currently about 30% brighter than it was when it formed, and it will wind up about twice as bright as its initial value just before it becomes a red giant.  Depending on whose models of climate sensitivity you use, the Earth’s biosphere probably has somewhere between 250 million years and 2 billion years before the oceans boil and we become a second Venus.  Thus, we find ourselves in the latter third-to-twentieth of the history of Earth’s biosphere (consistent with complex life taking time to evolve).
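
To see where the “latter third-to-twentieth” figure comes from, take the biosphere’s past history as roughly 4 billion years (an illustrative round number): the remaining fraction of its total history is about 2 / (4 + 2) = 1/3 at the optimistic end, and about 0.25 / (4 + 0.25) ≈ 1/17 – roughly a twentieth – at the pessimistic end.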

Together, all this puts our solar system – and by extension our biosphere – pretty much right where we would expect to find it in space, and right in the middle of where one would expect to find it in time.  Once again, as observers we are not special.  We do not find ourselves in an unexpectedly early universe, ruling out one explanation for the Fermi paradox sometimes put forward – that we do not see evidence for intelligence in the universe because we simply find ourselves as the first intelligent system to evolve.  This would be tenable if there were reason to think that we were right at the beginning of the time in which star systems in stable galaxies with lots of heavy elements could have birthed complex biospheres.  Instead we are utterly average, implying that the lack of obvious intelligence in the universe must be resolved some other way: either the genesis of intelligent systems is exceedingly rare, or intelligent systems simply do not spread through the universe or become astronomically visible, for one reason or another. 

In my next post, I will look at the history of life on Earth, the distinction between simple and complex biospheres, and the evidence for or against other biospheres elsewhere in our own solar system.

Solving sleep: just a toe-dipping

37 Capla 30 June 2015 07:38PM
[For the past few months I’ve been undertaking a mostly independent study of sleep, and looking to build a coherent model of what sleep does and find ways to optimize it. I’d like to write a series of posts outlining my findings and hypotheses. I’m not sure if this is the best venue for such a project, and I’d like to gauge community interest. This first post is a brief overview of one important aspect of sleep, with a few related points of recommendation, to provide some background knowledge.]

 

In the quest to become more effective and productive, sleep is an enormously important process to optimize. Most of us spend (or at least think we should spend) 7.5 to 8.5 hours in bed every night, a third of a 24-hour day. Not sleeping well and not sleeping sufficiently have known and large drawbacks, including decreased attention, greater irritability, depressed immune function, and generally weakened cognitive ability. If you’re looking for more time, either for subjective life-extension or so that you can get more done in a day, it is highly valuable to take steps to sleep as efficiently as possible: spending no more than the required amount of time in bed while still getting the full benefit of the rest.

Understanding the inner mechanisms of this process can let us work around them. Sleep, baffling as it is (and it is extremely baffling), is not a black box. Knowing how it works, you can organize your behavior to accommodate the world as it is, just as taking advantage of the principles of aerodynamics, thrust, and lift enables one to build an airplane.

The most important thing to know about sleep and wakefulness is that it is the result of a dual process: how alert a person feels is determined by two different and opposite functions. The first is termed the homeostatic sleep drive (also, homeostatic drive, sleep load, sleep pressure, and process S), which is determined solely by how long it has been since an individual last slept fully. The longer he/she’s been awake, the greater his/her sleep drive. It is the brain's biological need to sleep. Just as sufficient need for calories produces hunger, sufficient sleep drive produces sleepiness. Sleeping decreases sleep drive, and sleep drive drops faster (when sleeping) than it rises (when awake).

Neuroscience is complicated, but it seems the chemical correlate of sleep drive is the build-up of adenosine in the basal forebrain, and this is used as the brain’s internal measure of how badly one needs sleep.1 (Caffeine makes us feel alert by competing with adenosine for its receptor binding sites, blocking the signal that would otherwise produce sleepiness.)

This is only half the story, however. Adenosine levels are much higher (and sleep drive correspondingly higher) in the evening, when one has been awake for a while, than in the middle of the night, when one has just slept for several hours. If sleepiness were determined only by sleep drive, you would have much more fragmented sleep: sleeping several times during the day, and waking up several times during the night. Instead, humans typically stay awake through the day, and sleep through the whole night. This is due to the second influence on wakefulness: the circadian alerting signal.

For most of human history, there was little that could be done at night. Darkness made it much more difficult to hunt or gather than it was during the day. Given that the brain requires some fraction of the nychthemeron (meaning a 24-hour period) to be spent asleep, it is evolutionarily preferable to concentrate that fraction of the nychthemeron in the nighttime, freeing the day for other things. For this reason, there is also a cyclical component to one’s alertness: independent of how long it has been since an individual has slept, there will be times in the nychthemeron when he/she will feel more or less tired.   

Roughly, the circadian alerting signal (also known as process C) counters the sleep drive, so that as sleep drive builds up during the day, alertness stays constant, and as sleep drive dissipates over the course of the night, the falling alerting signal keeps the individual asleep.

The alerting signal is synchronized to circadian rhythms, which are in turn attuned to light exposure. The circadian clock is set so that the alerting signal begins to increase again (after a night of sleep) at the time when the optic nerve is first exposed to light in the morning (or rather, when the optic nerve has habitually been first exposed to light, since it takes up to a week to reset circadian rhythms), and increases along with the sleep drive until about 14 hours after the point at which it started rising.
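
To make the interplay concrete, here is a minimal sketch of the two-process idea in Python. The curve shapes, time constants, and the 7am/11pm schedule are illustrative assumptions of mine, not the published model's parameters:

import math

WAKE_HOUR = 7        # habitual time of first light exposure (assumption)
BED_HOUR = 23        # habitual bedtime (assumption)
RISE_TAU = 18.0      # hours; how slowly sleep drive saturates while awake (assumption)
FALL_TAU = 4.5       # hours; how much faster it dissipates while asleep (assumption)

def circadian_alerting(hour_of_day):
    """Process C: a 24-hour wave peaking about 14 hours after habitual wake time."""
    phase = 2 * math.pi * (hour_of_day - WAKE_HOUR - 14) / 24.0
    return 0.5 + 0.5 * math.cos(phase)

def simulate(hours=36, step_hours=0.25):
    s = 0.3  # Process S: homeostatic sleep drive left over on waking
    for i in range(int(hours / step_hours)):
        hour = (WAKE_HOUR + i * step_hours) % 24
        asleep = hour >= BED_HOUR or hour < WAKE_HOUR
        if asleep:
            s += (0.0 - s) * step_hours / FALL_TAU   # drive dissipates quickly during sleep
        else:
            s += (1.0 - s) * step_hours / RISE_TAU   # drive builds slowly while awake
        if i % 8 == 0:  # report every 2 simulated hours
            c = circadian_alerting(hour)
            print(f"hour {hour:5.2f}  sleep drive {s:.2f}  alerting {c:.2f}  net alertness {c - s:+.2f}")

simulate()

# During the day the two curves rise together, so net alertness stays roughly flat;
# at night the alerting signal falls along with the dissipating sleep drive, so sleep continues.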

This is why if you pull an “all-nighter” you might find it difficult to fall asleep during the following day, even if you feel exhausted. Your sleep drive is high, but the alerting signal is triggering wakefulness, which makes it hard to fall asleep.

For unknown reasons, there is a dip in the circadian alerting signal about 8 hours after the beginning of the cycle. This is why people sometimes experience that “2:30 feeling.” It is also the time at which biphasic cultures typically have an afternoon siesta. This is useful to know, because it is the best time to take a nap if you want to make up sleep missed the night before.

 

[Figure: chart of the homeostatic sleep drive and circadian alerting signal over 24 hours – http://bonytobombshell.com/wp-content/uploads/2015/05/energy-levels-sleep-drive-alert-chart-1-bony-bombshell.jpg]

 

The neurochemistry of the circadian alerting signal is more complex than that of the sleep drive, but one of the key chemicals of process C is melatonin, which is secreted by the pineal gland about 12 hours after the start of the circadian cycle (two hours before habitual bedtime). It is mildly sleep-inducing.

This is why taking melatonin tablets before bed is recommended by gwern and others. I second this recommendation. Though not FDA-approved, there seems to be little in the way of negative side effects, and the tablets make it much easier to fall asleep.

The natural release of melatonin is inhibited by light, and in particular blue light (which is why it is beneficial to use applications that red-shift the light of your computer screen, like f.lux or Redshift, or to wear red-tinted goggles, before bed). By limiting light exposure in the late evening you allow natural melatonin secretion, which both stimulates sleep and prevents the circadian clock from shifting (which would make it even more difficult to fall asleep the following night). Recent studies have shown that bright screens at night do demonstrably disrupt sleep.2

The thing that interests me about the fact that alertness is controlled by both process S and process C is that it may be possible to modulate each of those processes independently. It would be enormously useful to be able to “turn off” the circadian alerting signal on demand, so that a person could fall asleep at any time of the day, to make up sleep loss whenever is convenient. Instead of accommodating circadian rhythms when scheduling, we could adjust the circadian effect to better fit our lives. When you know you’ll need to be awake all night, for instance, you could turn off the alerting signal around midday and sleep until your sleep drive is reset. In fact, I suspect that those people who are able to live successfully on a polyphasic sleep schedule get the benefits by retraining the circadian influence. In the coming posts, I want to outline a few of the possibilities and (significant) problems in that direction. 


Pattern-botching: when you forget you understand

31 malcolmocean 15 June 2015 10:58PM

It’s all too easy to let a false understanding of something replace your actual understanding. Sometimes this is an oversimplification, but it can also take the form of an overcomplication. I have an illuminating story:

Years ago, when I was young and foolish, I found myself in a particular romantic relationship that would later end for epistemic reasons, when I was slightly less young and slightly less foolish. Anyway, this particular girlfriend of mine was very into healthy eating: raw, organic, home-cooked, etc. During her visits my diet would change substantially for a few days. At one point, we got in a tiny fight about something, and in a not-actually-desperate attempt to placate her, I semi-jokingly offered: “I’ll go vegetarian!”

“I don’t care,” she said with a sneer.

…and she didn’t. She wasn’t a vegetarian. Duhhh... I knew that. We’d made some ground beef together the day before.

So what was I thinking? Why did I say “I’ll go vegetarian” as an attempt to appeal to her values?

 

(I’ll invite you to take a moment to come up with your own model of why that happened. You don't have to, but it can be helpful for evading hindsight bias of obviousness.)

 

(Got one?)

 

Here's my take: I pattern-matched a bunch of actual preferences she had with a general "healthy-eating" cluster, and then I went and pulled out something random that felt vaguely associated. It's telling, I think, that I don't even explicitly believe that vegetarianism is healthy. But to my pattern-matcher, they go together nicely.

I'm going to call this pattern-botching.† Pattern-botching is when you pattern-match a thing "X" as following a certain model, but then implicit queries to that model return properties that aren't true of X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use.

†Maybe this already has a name, but I've read a lot of stuff and it feels like a distinct concept to me.

Examples of pattern-botching

So, that's pattern-botching, in a nutshell. Now, examples! We'll start with some simple ones.

Calmness and pretending to be a zen master

In my Againstness Training video, past!me tries a bunch of things to calm down. In the pursuit of "calm", I tried things like...

  • dissociating
  • trying to imitate a zen master
  • speaking really quietly and timidly

None of these are the desired state. The desired state is present, authentic, and can project well while speaking assertively.

But that would require actually being in a different state, which to my brain at the time seemed hard. So my brain constructed a pattern around the target state, and said "what's easy and looks vaguely like this?" and generated the list above. Not as a list, of course! That would be too easy. It generated each one individually as a plausible course of action, which I then tried, and which Val then called me out on.

Personality Types

I'm quite gregarious, extraverted, and generally unflappable by noise and social situations. Many people I know describe themselves as HSPs (Highly Sensitive Persons) or as very introverted, or as "not having a lot of spoons". These concepts are related—or perhaps not related, but at least correlated—but they're not the same. And even if these three terms did all mean the same thing, individual people would still vary in their needs and preferences.

Just this past week, I found myself talking with an HSP friend L, and noting that I didn't really know what her needs were. Like I knew that she was easily startled by loud noises and often found them painful, and that she found motion in her periphery distracting. But beyond that... yeah. So I told her this, in the context of a more general conversation about her HSPness, and I said that I'd like to learn more about her needs.

L responded positively, and suggested we talk about it at some point. I said, "Sure," then added, "though it would be helpful for me to know just this one thing: how would you feel about me asking you about a specific need in the middle of an interaction we're having?"

"I would love that!" she said.

"Great! Then I suspect our future interactions will go more smoothly," I responded. I realized what had happened was that I had conflated L's HSPness with... something else. I'm not exactly sure what, but a preference for indirect communication, perhaps? I have another friend, who is also sometimes short on spoons, who I model as finding that kind of question stressful because it would kind of put them on the spot.

I've only just recently been realizing this, so I suspect that I'm still doing a ton of this pattern-botching with people, that I haven't specifically noticed.

Of course, having clusters makes it easier to have heuristics about what people will do, without knowing them too well. A loose cluster is better than nothing. I think the issue is when we do know the person well, but we're still relying on this cluster-based model of them. It's telling that I was not actually surprised when L said that she would like it if I asked about her needs. On some level I kind of already knew it. But my botched pattern was making me doubt what I knew.

False aversions

CFAR teaches a technique called "Aversion Factoring", in which you try to break down the reasons why you don't do something, and then consider each reason. In some cases, the reasons are sound reasons, so you decide not to try to force yourself to do the thing. If not, then you want to make the reasons go away. There are three types of reasons, with different approaches.

One is for when you have a legitimate issue, and you have to redesign your plan to avert that issue. The second is where the thing you're averse to is real but isn't actually bad, and you can kind of ignore it, or maybe use exposure therapy to get yourself more comfortable with it. The third is... when the outcome would be an issue, but it's not actually a necessary outcome of the thing. As in, it's a fear that's vaguely associated with the thing at hand, but the thing you're afraid of isn't real.

All of these share a structural similarity with pattern-botching, but the third one in particular is a great example. The aversion is generated from a property that the thing you're averse to doesn't actually have. Unlike a miscalibrated aversion (#2 above) it's usually pretty obvious under careful inspection that the fear itself is based on a botched model of the thing you're averse to.

Taking the training wheels off of your model

One other place this structure shows up is in the difference between what something looks like when you're learning it versus what it looks like once you've learned it. Many people learn to ride a bike while actually riding a four-wheeled vehicle: training wheels. I don't think anyone makes the mistake of thinking that the ultimate bike will have training wheels, but in other contexts it's much less obvious.

The remaining three examples look at how pattern-botching shows up in learning contexts, where people implicitly forget that they're only partway there.

Rationality as a way of thinking

CFAR runs 4-day rationality workshops, which currently are evenly split between specific techniques and how to approach things in general. Let's consider what kinds of behaviours spring to mind when someone encounters a problem and asks themselves: "what would be a rational approach to this problem?"

  • someone with a really naïve model, who hasn't actually learned much about applied rationality, might pattern-match "rational" to "hyper-logical", and think "What Would Spock Do?"
  • someone who is somewhat familiar with CFAR and its instructors but who still doesn't know any rationality techniques, might complete the pattern with something that they think of as being archetypal of CFAR-folk: "What Would Anna Salamon Do?"
  • CFAR alumni, especially new ones, might pattern-match "rational" as "using these rationality techniques" and conclude that they need to "goal factor" or "use trigger-action plans"
  • someone who gets rationality would simply apply that particular structure of thinking to their problem

In the case of a bike, we see hundreds of people biking around without training wheels, and so that becomes the obvious example from which we generalize the pattern of "bike". In other learning contexts, though, most people—including, sometimes, the people at the leading edge—are still in the early learning phases, so the training wheels are the rule, not the exception.

So people start thinking that the figurative bikes are supposed to have training wheels.

Incidentally, this can also be the grounds for strawman arguments where detractors of the thing say, "Look at these bikes [with training wheels]! How are you supposed to get anywhere on them?!"

Effective Altruism

We potentially see a similar effect with topics like Effective Altruism. It's a movement that is still in its infancy, which means that nobody has it all figured out. So when trying to answer "How do I be an effective altruist?" our pattern-matchers might pull up a bunch of examples of things that EA-identified people have been commonly observed to do.

  • donating 10% of one's income to a strategically selected charity
  • going to a coding bootcamp and switching careers, in order to Earn to Give
  • starting a new organization to serve an unmet need, or to serve a need more efficiently
  • supporting the Against Malaria Fund

...and this generated list might be helpful for various things, but be wary of thinking that it represents what Effective Altruism is. It's possible—it's almost inevitable—that we don't actually know what the most effective interventions are yet. We will potentially never actually know, but we can expect that in the future we will generally know more than at present. Which means that the current sampling of good EA behaviours likely does not actually even cluster around the ultimate set of behaviours we might expect.

Creating a new (platform for) culture

At my intentional community in Waterloo, we're building a new culture. But that's actually a by-product: our goal isn't to build this particular culture but to build a platform on which many cultures can be built. It's like how as a company you don't just want to be building the product but rather building the company itself, or "the machine that builds the product,” as Foursquare founder Dennis Crowley puts it.

What I started to notice, though, is that we started to confuse the particular, transitional culture that we have at our house with either (a) the particular target culture that we're aiming for, or (b) the more abstract range of cultures that will be constructable on our platform.

So from a training wheels perspective, we might totally eradicate words like "should". I did this! It was really helpful. But once I had removed the word from my idiolect, it became unhelpful to still be treating it as being a touchy word. Then I heard my mentor use it, and I remembered that the point of removing the word wasn't to not ever use it, but to train my brain to think without a particular structure that "should" represented.

This shows up on much larger scales too. Val from CFAR was talking about a particular kind of fierceness, "hellfire", that he sees as fundamental and important, and he noted that it seemed to be incompatible with the kind of culture my group is building. I initially agreed with him, which was kind of dissonant for my brain, but then I realized that hellfire was only incompatible with our training culture, not the entire set of cultures that could ultimately be built on our platform. That is, engaging with hellfire would potentially interfere with the learning process, but it's not ultimately proscribed by our culture platform.

Conscious cargo-culting

I think it might be helpful to repeat the definition:

Pattern-botching is when you pattern-match a thing "X" as following a certain model, but then implicit queries to that model return properties that aren't true of X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use.

It's kind of like if you were doing a cargo-cult, except you knew how airplanes worked.

(Cross-posted from malcolmocean.com)

The File Drawer Effect and Conformity Bias (Election Edition)

31 Salemicus 08 May 2015 04:51PM

As many of you may be aware, the UK general election took place yesterday, resulting in a surprising victory for the Conservative Party. The pre-election opinion polls predicted that the Conservatives and Labour would be roughly equal in terms of votes cast, with perhaps a small Conservative advantage leading to a hung parliament; instead the Conservatives got 36.9% of the vote to Labour's 30.4%, and won the election outright.

There has already been a lot of discussion about why the polls were wrong, from methodological problems to incorrect adjustments. But perhaps more interesting is the possibility that the polls were right! For example, Survation did a poll on the evening before the election, which predicted the correct result (Conservatives 37%, Labour 31%). However, that poll was never published because the results seemed "out of line." Survation didn't want to look silly by breaking with the herd, so they just kept quiet about their results. Naturally this makes me wonder about the existence of other unpublished polls with similar readings.

This seems to be a case of two well-known problems colliding with devastating effect. Conformity bias caused Survation to ignore the data and go with what they "knew" to be the case (for which they have now paid dearly). And then the file drawer effect meant that the generally available data was skewed, misleading third parties. The scientific thing to do is to publish all data, including "outliers," both so that information can change over time rather than be anchored, and to avoid artificially compressing the variance. Interestingly, the exit poll, which had a methodology agreed beforehand and was previously committed to be published, was basically right.
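
As a toy illustration of how the two effects combine (the numbers are purely hypothetical, not a model of the actual 2015 polls), suppose every pollster draws an unbiased but noisy estimate of the true Conservative lead, yet only publishes results that land near the prior consensus of a dead heat:

import random

random.seed(1)

TRUE_LEAD = 6.5    # hypothetical true Conservative lead, in points
CONSENSUS = 0.0    # what the herd "knows": a dead heat
NOISE_SD = 2.5     # sampling noise of a single poll
TOLERANCE = 4.0    # results further than this from consensus go in the file drawer

all_polls, published = [], []
for _ in range(1000):
    poll = random.gauss(TRUE_LEAD, NOISE_SD)
    all_polls.append(poll)
    if abs(poll - CONSENSUS) <= TOLERANCE:   # conformity filter
        published.append(poll)

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"all polls:       mean {mean(all_polls):.1f}, sd {sd(all_polls):.1f}, n={len(all_polls)}")
print(f"published polls: mean {mean(published):.1f}, sd {sd(published):.1f}, n={len(published)}")

# Every individual poll was an unbiased estimate, but the published subset is dragged
# toward the consensus and its spread is artificially compressed.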

This is now the third time in living memory that opinion polls have been embarrassingly wrong about the UK general election. Each time this has led to big changes in the polling industry. I would suggest that one important scientific improvement would be for polling companies to announce the methodology of a poll and any adjustments to be made before the poll takes place, and to commit to publishing all polls they carry out. Once this became the norm, data from any polling company that didn't follow this practice would rightly be seen as unreliable by comparison.

Leaving LessWrong for a more rational life

29 Mark_Friedenbach 21 May 2015 07:24PM

You are unlikely to see me posting here again, after today. There is a saying here that politics is the mind-killer. My heretical realization lately is that philosophy, as generally practiced, can also be mind-killing.

As many of you know, I am, or was, running a twice-monthly Rationality: AI to Zombies reading group. One of the bits I desired to include in each reading group post was a collection of contrasting views. To research such views I've found myself listening during my commute to talks given by other thinkers in the field, e.g. Nick Bostrom, Anders Sandberg, and Ray Kurzweil, and people I feel are doing “ideologically aligned” work, like Aubrey de Grey, Christine Peterson, and Robert Freitas. Some of these were talks I had seen before, or generally views I had been exposed to in the past. But looking through the lens of learning and applying rationality, I came to a surprising (to me) conclusion: it was the philosophical thinkers that demonstrated the largest and most costly mistakes. On the other hand, de Grey and others who are primarily working on the scientific and/or engineering challenges of singularity and transhumanist technologies were far less likely to subject themselves to epistemic mistakes of significant consequence.

Philosophy as the anti-science...

What sort of mistakes? Most often reasoning by analogy. To cite a specific example, one of the core underlying assumptions of the singularity interpretation of super-intelligence is that just as a chimpanzee would be unable to predict what a human intelligence would do or how we would make decisions (aside: how would we know? Were any chimps consulted?), we would be equally inept in the face of a super-intelligence. This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory, may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe—that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just like string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all.

This post is not about the singularity nature of super-intelligence—that was merely my choice of an illustrative example of a category of mistakes that are too often made by those with a philosophical background rather than the empirical sciences: the reasoning by analogy instead of the building and analyzing of predictive models. The fundamental mistake here is that reasoning by analogy is not in itself a sufficient explanation for a natural phenomenon, because it says nothing about the context sensitivity or insensitivity of the original example and under what conditions it may or may not hold true in a different situation.

A successful physicist or biologist or computer engineer would have approached the problem differently. A core part of being successful in these areas is knowing when it is that you have insufficient information to draw conclusions. If you don't know what you don't know, then you can't know when you might be wrong. To be an effective rationalist, it is often not important to answer “what is the calculated probability of that outcome?” The better first question is “what is the uncertainty in my calculated probability of that outcome?” If the uncertainty is too high, then the data supports no conclusions. And the way you reduce uncertainty is that you build models for the domain in question and empirically test them.

The lens that sees its own flaws...

Coming back to LessWrong and the sequences. In the preface to Rationality, Eliezer Yudkowsky says his biggest regret is that he did not make the material in the sequences more practical. The problem is in fact deeper than that. The art of rationality is the art of truth seeking, and empiricism is part and parcel of truth seeking. There's lip service paid to empiricism throughout, but in all the “applied” sequences relating to quantum physics and artificial intelligence it appears to be forgotten. We get instead definitive conclusions drawn from thought experiments only. It is perhaps not surprising that these sequences seem the most controversial.

I have for a long time been concerned that those sequences in particular promote some ungrounded conclusions. I had thought that while annoying this was perhaps a one-off mistake that was fixable. Recently I have realized that the underlying cause runs much deeper: what is taught by the sequences is a form of flawed truth-seeking (thought experiments favored over real world experiments) which inevitably results in errors, and the errors I take issue with in the sequences are merely examples of this phenomenon.

And these errors have consequences. Every single day, 100,000 people die of preventable causes, and every day we continue to risk extinction of the human race at unacceptably high odds. There is work that could be done now to alleviate both of these issues. But within the LessWrong community there is actually outright hostility to work that has a reasonable chance of alleviating suffering (e.g. artificial general intelligence applied to molecular manufacturing and life-science research) due to concerns arrived at by flawed reasoning.

I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good. One should work to develop one's own rationality, but I now fear that the approach taken by the LessWrong community as a continuation of the sequences may result in more harm than good. The anti-humanitarian behaviors I observe in this community are not the result of initial conditions but the process itself.

What next?

How do we fix this? I don't know. On a personal level, I am no longer sure engagement with such a community is a net benefit. I expect this to be my last post to LessWrong. It may happen that I check back in from time to time, but for the most part I intend to try not to. I wish you all the best.

A note about effective altruism…

One shining light of goodness in this community is the focus on effective altruism—doing the most good to the most people as measured by some objective means. This is a noble goal, and the correct goal for a rationalist who wants to contribute to charity. Unfortunately it too has been poisoned by incorrect modes of thought.

Existential risk reduction, the argument goes, trumps all forms of charitable work because reducing the chance of extinction by even a small amount has far more expected utility than would accomplishing all other charitable works combined. The problem lies in the likelihood of extinction, and the actions selected in reducing existential risk. There is so much uncertainty regarding what we know, and so much uncertainty regarding what we don't know, that it is impossible to determine with any accuracy the expected risk of, say, unfriendly artificial intelligence creating perpetual suboptimal outcomes, or what effect charitable work in the area (e.g. MIRI) is having in reducing that risk, if any.

This is best explored by an example of existential risk done right. Asteroid and cometary impacts are perhaps the category of external (not-human-caused) existential risk which we know the most about, and have done the most to mitigate. When it was recognized that impactors were a risk to be taken seriously, we recognized what we did not know about the phenomenon: what were the orbits and masses of Earth-crossing asteroids? We built telescopes to find out. What is the material composition of these objects? We built space probes and collected meteorite samples to find out. How damaging would an impact be for various material properties, speeds, and incidence angles? We built high-speed projectile test ranges to find out. What could be done to change the course of an asteroid found to be on a collision course? We have executed at least one impact probe and will monitor the effect that had on the comet's orbit, and we have on the drawing board probes that will use gravitational mechanisms to move their targets. In short, we identified what it is that we don't know and sought to resolve those uncertainties.

How then might one approach an existential risk like unfriendly artificial intelligence? By identifying what it is we don't know about the phenomenon, and seeking to experimentally resolve that uncertainty. What relevant facts do we not know about (unfriendly) artificial intelligence? Well, much of our uncertainty about the actions of an unfriendly AI could be resolved if we were to know more about how such agents construct their thought models, and relatedly what languages are used to construct their goal systems. We could also stand to benefit from knowing more practical information (experimental data) about in what ways AI boxing works and in what ways it does not, and how much that depends on the structure of the AI itself. Thankfully there is an institution that is doing that kind of work: the Future of Life Institute (not MIRI).

Where should I send my charitable donations?

Aubrey de Grey's SENS Research Foundation.

100% of my charitable donations are going to SENS. Why they do not get more play in the effective altruism community is beyond me.

If you feel you want to spread your money around, here are some non-profits which have I have vetted for doing reliable, evidence-based work on singularity technologies and existential risk:

  • Robert Freitas and Ralph Merkle's Institute for Molecular Manufacturing does research on molecular nanotechnology. They are the only group that works on the long-term Drexlerian vision of molecular machines, and they publish their research online.
  • Future of Life Institute is the only existential-risk AI organization which is actually doing meaningful evidence-based research into artificial intelligence.
  • B612 Foundation is a non-profit seeking to launch a spacecraft with the capability to detect, to the extent possible, ALL Earth-crossing asteroids.

I wish I could recommend a skepticism, empiricism, and rationality promoting institute. Unfortunately I am not aware of an organization which does not suffer from the flaws I identified above.

Addendum regarding unfinished business

I will no longer be running the Rationality: From AI to Zombies reading group as I am no longer in good conscience able or willing to host it, or participate in this site, even from my typically contrarian point of view. Nevertheless, I am enough of a libertarian that I feel it is not my role to put up roadblocks to others who wish to delve into the material as it is presented. So if someone wants to take over the role of organizing these reading groups, I would be happy to hand over the reins to that person. If you think that person should be you, please leave a reply in another thread, not here.

EDIT: Obviously I'll stick around long enough to answer questions below :)

Brainstorming new senses

28 lululu 20 May 2015 07:53PM

What new senses would you like to have available to you?

Often when new technology first becomes widely available, the initial limits are in the collective imagination, not in the technology itself (case in point: the internet). New sensory channels have a huge potential because the brain can process senses much faster and more intuitively than most conscious thought processes.

There are a lot of recent "proof of concept" inventions that show that it is possible to create new sensory channels for humans with and without surgery. The best-known and simplest example is an implanted magnet, which would alert you to magnetic fields (the trade-off being that you could never have an MRI). Cochlear implants are the most widely used human-created sensory channels (they send electrical signals directly to the nervous system, bypassing the ear entirely), but CIs are designed to emulate a sensory channel most people already have brain space allocated to. VEST is another example. Similar to CIs, VEST (versatile extra-sensory transducer) has 24 information channels, and uses audio compression to encode sound. Unlike CIs, it is not implanted in the skull; instead, information is relayed through vibrating motors on the torso. After a few hours of training, deaf volunteers are capable of word recognition using the vibrations alone, and can do so without conscious processing. Much like hearing, the users are unable to describe exactly what components make a spoken word intelligible; they just understand the sensory information intuitively. Another recent invention being tested (with success) is BrainPort glasses, which send electrical signals through the tongue (which is one of the most sensitive organs on the body). Blind people can begin processing visual information with this device within 15 minutes, and it is unique in that it is not implanted. The sensory information feels like pop rocks at first before the brain is able to resolve it into sight. Neil Harbisson (who is colorblind) has custom glasses which use sound tones to relay color information. Belts that vibrate when facing north give people a sense of north. Bottlenose can be built at home and gives a very primitive sense of echolocation. As expected, these all work better if people start young, as children. 

What are the craziest and coolest new senses you would like to see made available using this new technology? I think VEST at least is available through Kickstarter, and one of the inventors suggested that it could be programmed to transmit any kind of data. My initial ideas on hearing about this possibility were just senses that some unusual people already have, or expansions of current senses. I think the real game changers are going to be totally new senses unrelated to our current sensory processing. Translating data into sensory information gives us access to intuition and processing speed otherwise unavailable. 
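
As a toy sketch of what "programmed to transmit any kind of data" could look like, here is an illustrative binning scheme of my own in Python; it is not the actual VEST encoding, which is based on audio compression, but it shows the general shape of squeezing a signal into a fixed number of vibration channels:

import math

NUM_CHANNELS = 24   # e.g. 24 vibration motors on a vest

def encode_frame(samples, num_channels=NUM_CHANNELS):
    """Collapse one short window of a signal into per-channel vibration
    intensities (0.0 to 1.0) by binning its magnitude spectrum."""
    n = len(samples)
    # Crude discrete Fourier magnitudes; fine for a sketch, use numpy.fft in practice.
    mags = []
    for k in range(n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    # Bin the spectrum into num_channels bands and normalize to the strongest band.
    band = max(1, len(mags) // num_channels)
    levels = [sum(mags[i * band:(i + 1) * band]) for i in range(num_channels)]
    peak = max(levels) or 1.0
    return [round(level / peak, 2) for level in levels]

# Example: a pure tone with 5 cycles per window mostly lights up a single channel.
window = [math.sin(2 * math.pi * 5 * t / 96) for t in range(96)]
print(encode_frame(window))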

My initial weak ideas:

  • mass spectrometer (uses reflected lasers to determine the exact atomic makeup of anything and everything)
  • proximity meter (but I think you would begin to feel like you had a physical aura or field of influence)
  • WIFI or cell signal
  • perfect pitch and perfect north, both super easy and only needing one channel of information (a smartwatch app?)
  • infrared or echolocation
  • GPS (this would involve some serious problem solving to figure out what data we should encode given limited channels, I think it could be done with 4 or 8 channels each associated with a cardinal direction)

Someone working with VEST suggested:

  • compress global twitter sentiments into 24 channels. Will you begin to have an intuitive sense of global events?
  • encode stockmarket data. Will you become an intuitive super-investor?
  • encode local weather data (a much more advanced version of "I can feel it's going to rain in my bad knee")

Some resources for more information:

 

 

More?

Update on the Brain Preservation Foundation Prize

26 Andy_McKenzie 26 May 2015 01:47AM

Brain Preservation Foundation President Kenneth Hayworth just wrote a synopsis of recent developments from the two major competitors for the BPF prizes. Here is the summary: 

Brain Preservation Prize competitor Shawn Mikula just published his whole mouse brain electron microscopy protocol in Nature Methods (paper, BPF interview), putting him close to winning the mouse phase of our prize.

Brain Preservation Prize competitor 21st Century Medicine has developed a new “Aldehyde-Stabilized Cryopreservation” technique–preliminary results show good ultrastructure preservation even after storage of a whole rabbit brain at -135 degrees C.

This work was funded in part by donations from LW users. In particular, a grant that the BPF was able to provide to support the work of LW user Robert McIntyre at 21st Century Medicine has been instrumental. 

In order to continue this type of research and to bolster it, BPF welcomes your support in a variety of different ways, including awareness-raising, donations, and volunteering. Please reach out if you would like to volunteer, or you can PM me and I will help put you in touch. And if you have any suggestions for the BPF, please feel free to discuss them in the comments below. 

Strategies and tools for getting through a break up

26 lululu 18 May 2015 06:01PM

Background:

I was very recently (3 weeks now) in a relationship that lasted for 5.5 years. My partner had been fantastic through all those years and we were suffering no conflict, no fights, no strain or tension. My partner also was prone to depression, and is/was going through an episode of depression. I am usually a major source of support at these times. Six months ago we opened our relationship. I wasn't dating anyone (mostly due to busy-ness), and my partner was, though not seriously. I felt him pulling away somewhat, which I (correctly) attributed mostly to depression and which nonetheless caused me some occasional moments of jealousy. But I was overall extremely happy with this relationship, very committed, and still very much in love as well. It was quite a surprise when my partner broke up with me one Wednesday evening. 

After we had a good cry together, the next morning I woke up and immediately started researching what the literature said about breaking up. My goals were threefold:

 

  1. Stop feeling so sad in the immediate moment
  2. "Get over" my partner
  3. Internalize any gains I had made over the course of our relationship or any lessons I had learned from the break up

 

I made most of my gains in the first few days; by day 3 I was 60% over it. Two weeks later I was 99.5% over the relationship, with a few hold-over habits and tendencies (like feeling responsible for improving his emotional state) which are currently too strong but which will serve me well in our continuing friendship. My ex, on the other hand (no doubt partially due to the depression), is fine most of the time but unpredictably becomes extremely sad for hours on end. Originally this was guilt at having hurt me, but now it is mostly nostalgia- and isolation-based. I hope to continue being close friends and I've been doing my best to support him emotionally, at the distance of a friend. At the same time, I've started semi-seriously dating a friend who has had a crush on me for some time, and not in a rebound way. Below are the states of mind and strategies that allowed me to get over it, fast and with good personal growth. 

Note: mileage may vary. I have low neuroticism and a slightly higher than average base level of happiness. You might not get over the relationship in 2 weeks, but your getting-over-it will certainly be sped up from its default speed.

 

Strategies (in order of importance)

1. Decide you don't want to get back in the relationship. Decide that it is over and given the opportunity, you will not get back with this person. If you were the breaker-upper, you can skip this step.

Until you can do this, it is unlikely that you will get over it. It's hard to ignore an impulse that you agree with wholeheartedly. If you're always hoping for an opportunity or an argument or a situation that will bring you back together, most of your mental energy will go towards formulating those arguments, planning for that situation, imagining that opportunity. Some of the below strategies can still be used, but spend some serious time on this first one. It's the foundation of everything else. There are some facts that can help you convince the logical part of your brain that this is the correct attitude. 

  • People in on-and-off relationships are less satisfied, feel more anxiety about their relationship status, and continue to cycle on-and-off even after couples add additional constraints like cohabitation or marriage
  • People in tumultuous relationships are much less happy than singles
  • Wanting to stay in a relationship is reinforced by many biases (status quo bias, ambiguity effect, choice supportive bias, loss aversion, mere-exposure effect, ostrich effect). For someone to break through all those biases and end things, they must be extremely unhappy. If your continuing relationship makes someone you love extremely unhappy, it is a disservice to them to capitalize on those biases in a moment of weakness and return to the relationship.
  • Being in a relationship with someone who isn't excited about and pleased by you is settling for an inferior quality of relationship. The amazing number of date-able people in the world means settling for this is not an optimal decision. Contrast this to a tribal situation where replacing a lost mate was difficult or impossible. All these feelings of wanting to get back together evolved in a situation of scarcity, but we live in a world of plenty. 
  • Intermittent rewards are the most powerful, so an on-again-off-again relationship has the power to make you commit to things you would never commit to given a new relationship. The more hot-and-cold your partner is, the more rewarding the relationship seems and the less likely you are to be happy in the long term. Only you can end that tantalizing possibility of intermittent rewards by resolving not to partake if the opportunity arises. 
  • Even if some extenuating circumstance could explain away their intention to break up (depression, bipolar, long-distance, etc), it is belittling to your ex-partner to try to invalidate their stated feelings. Do not fall into the trap of feeling that you know more about a person's inner state than they do. Take it at face value and act accordingly. Even if this is only a temporary state of mind for them, it is unlikely that they will never ever again be in the same state of mind. 
More arguments depend on your situation. Like leftover french fries, very few relationships are as good when you try to revive them; it's better just to get new french fries. 


 

2. Talk to other people about the good things that came of your break-up.  (This can also help you arrive at #1, not wanting to get back together)

I speculate that the benefits from this come from three places. First, talking about good things makes you notice good things, and talking in a positive attitude makes you feel positive. Second, it re-emphasizes to your brain that losing your significant other does not mean losing your social support network. Third, it acts as a mild commitment mechanism: it would be a loss of face to go on about how great you're doing outside the relationship and later have to explain you jumped back in at the first opportunity.

You do not need to be purely positive. If you are feeling sadness, it sometimes helps to talk about this. But don't dwell only on the sadness when you talk. When I was talking to my very close friends about all aspects of my feelings, I still tried to say two positive things for every negative thing. For example: "It was a surprise, which was jarring and unpleasant and upended my life plans in these ways. But being a surprise, I didn't have time to dread and dwell on it beforehand. And breaking up sooner is preferable to a long decline in happiness for both parties, so it's better to break up as soon as it becomes clear to either party that the path is headed downhill, even if it is surprising to the other party."

Talk about the positives as often as possible without alienating people. The people you talk to do not need to be serious close friends. I spent a collective hour and a half talking to two OKCupid dates about how many good things came from the break up. (Both dates had been scheduled before actually breaking up, both people had met me once prior, and both dates went surprisingly well due to sympathy, escalating self-disclosure, and positive tone. I signaled that I am an emotionally healthy person dealing well with an understandably difficult situation.) 

If you feel that you don't have any candidates for good listeners, either because the break up was due to some mistake or infidelity of yours, or because you are socially isolated/anxious, writing is an effective alternative to talking. Study participants recovered quicker when they spent 15 minutes writing about the positive aspects of their break up; participants with three 15-minute sessions did better still. And it can benefit anyone to keep a running list of positives that you can bring up in conversation. 

 

3. Create a social support system

Identify who in your social network can still be relied on as a confidant and/or a neutral listener. You would be surprised at who still cares about you. In my breakup, my primary confidant was my ex's cousin, who also happens to be my housemate and close friend. His mom and best friend, both in other states, also made the effort to inquire about my state of mind. Most of the time, even people who you consider your partner's friends still feel enough allegiance to you and enough sympathy to be good listeners, and through listening they can become your friends. 

If you don't currently have a support system, make one! OKCupid is a great resource for meeting friends outside of just dating, and people are way way more likely to want to meet you if you message them with a "just looking for friends" type message. People  you aren't currently close to but who you know and like can become better friends if you are willing to reveal personal/vulnerable stories. Escalating self-disclosure+symmetrical vulnerability=feelings of friendship. Break ups are a great time for this to happen because you've got a big vulnerability, and one which almost everyone has experienced. Everyone has stories to share and advice to give on the topic of breaking up. 

 

4. Intentionally practice differentiation

One of the most painful parts of a break up is that so much of your sense-of-self is tied into your relationship. You will be basically rebuilding your sense of self. Depending on the length and the committed-ness of the relationship, you may be rebuilding it from the ground up. Think of this as an opportunity. You can rebuild it in any way you desire. All the things you used to like before your relationship, all the interests and hobbies you once cared about, those can be reincorporated into your new, differentiated sense of self. You can do all the things you once wished you did.

Spend at least 5 minutes thinking about what your best self looks like. What kind of person do you wish to be? This is a great opportunity to make some resolutions. Because you have a fresh start, and because these resolutions are about self-identification, they are much more likely to stick. Just be sure to frame them in relation to your sense-of-self: not 'I will exercise,' instead 'I'm a fit active person, the kind of person who exercises'; not 'I want to improve my Spanish fluency' but 'I'm a Spanish-speaking polyglot, the kind of person who is making a big effort to become fluent.'

Language is also a good tool to practice differentiation. Try not to use the words "we," "us," or "our," even in your head. From now on, it is "s/he and I," "me and him/her," or "mine and his/hers." Practice using the word "ex" a lot. Memories are re-formulated and overwritten each time we revisit them, so in your memories make sure to think of you two as separate independent people and not as a unit.  

 

5. Make use of the following mental frameworks to re-frame your thinking:

Over the relationship vs. over the person

You do not have to stop having romantic, tender, or lustful feelings about your ex to get over the relationship. Those types of feelings are not easily controlled, but you can have those same feelings for good friends or crushes without it destroying your ability to have a meaningful platonic relationship, so why should this be different?

Being over the relationship means: 

 

  • Not feeling as though you are missing out on being part of a relationship.
  • Not dwelling/ruminating/obsessing about your ex-partner (this includes positive, negative, and neutral thoughts alike: "they're so great," "I hate them and hope they die," and "I wonder what they are up to"). 
  • Not wishing to be back with your ex-partner.
  • Not making plans that include consideration of your ex-partner because these considerations are no longer important (this includes considerations like "this will make him/her feel sorry I'm gone," or "this will show him/her that I'm totally over it")
  • Being able to interact with people without your ex-partner at your side and not feel weird about it, especially things you used to do together (eg. a shared hobby or at a party)
  • In very lucky peaceful-breakup situations, being able to interact with your ex-partner and maybe even their current romantic interests without it being too horribly weird and unpleasant.

 

On the other hand, being over a person means experiencing no pull towards that person, romantic, emotional, or sexual. If your break up was messy, you can be over the person without being over the relationship. This is often when people turn to messy and unsatisfying rebound relationships. It is far far more important to be over the relationship, and some of us (me included) will just have to make peace with never being over the person, with the help of knowing that having a crush on someone does not necessarily have the power to make you miserable or destroy your friendship. 

Obsessive thinking and cravings

If you used a brain scanner to look at a person who has been recently broken up with, and then you used the same brain scanner to look at someone who recently sobered up from an addictive drug, their brain activity would be very similar. So similar, in fact, that some neurologists speculate that addiction hijacks the circuits for romantic obsession (there is a very plausible evolutionary reason for romantic obsession to exist in early human tribal societies. Addiction, less so). 

In cases of addiction/craving, you can't just force your mind to stop thinking thoughts you don't like. But you can change your relationship with those thoughts. Recognize when they happen. Identify them as a craving rather than a true need. Recognize that, when satisfied, cravings temporarily diminish and then grow stronger (you've rewarded your brain for that behavior). These are thoughts without substance. The impulse they drive you towards will increase, rather than decrease, unpleasant feelings. 

When I first broke up, I had a couple very unpleasant hours of rumination, thinking uncontrollably about the same topics over and over despite those topics being painful. At some point I realized that continuing to merely think about the break up was also addictive. My craving circuits just picked the one set of thoughts I couldn't argue against so that my brain could go on obsessively dwelling without me being able to pull a logic override. These thoughts SEEM like goal oriented thinking, they FEEL productive, but they are a wolf in sheep's clothing.

In my specific case, my brain was concern trolling me. Concern trolling on the internet is when someone expresses sympathy and concern while actually having ulterior motives (eg on a body-positive website, fat shaming with: "I'm so glad you're happy but I'm concerned that people will think less of you because of your weight"). In my case, I was worrying about my ex's depression and his state of mind, which are very hard thoughts to quash. Empathy and caring are good, right? And he really was going through a hard time. Maybe I should call and check up on him.... My brain was concern trolling me. 

Depending on how your relationship ended, your brain could be trolling in other ways. Flaming seems to be a popular set of unstoppable thoughts. If you can't argue with the thought that the jerk is a horrible person, then THAT is the easiest way for your brain's addictive circuits to happily go on obsessing about this break up. Nostalgia is also a popular option. If the memories were good, then it's hard to argue with those thoughts. If you're a well trained rationalist, you might notice that you are feeling confused and then burn up many brain cycles trying to resolve your confusion by making sense of a fact, despite it not being a rational thing. Your addictive circuits can even hijack good rationalist habits. Other common ruminations are problem solving, simulating possible futures, regret, counter-factual thinking. 

As I said, you can't force these parts of your brain to just shut up. That's not how craving works. But you can take away their power by recognizing that all your ruminating is just these circuits hijacking your normal thought process. Say to yourself "I'm feeling an urge to call and yell at him/her, but so what. It's just a meaningless craving."

What you lose

There is a great sense of loss that comes with the end of a relationship. For some people, it is a similar feeling to actually being in mourning. Revisiting memories becomes painful, things you used to do together are suddenly tinged with sadness. 

I found it helpful to think of my relationship as a book. A book with some really powerful life-changing passages in the early chapters, a good rising action, great characters. A book which made me a better person by reading it. But a book with a stupid deus ex machina ending that totally invalidated the foreshadowing in the best passages. Finishing the book can be frustrating and saddening, but the first chapters of the book still exist. Knowing that the ending sucks isn't going to stop the first chapters from being awesome and entertaining and powerful. And I could revisit those first chapters any time I liked. I could just read my favorite parts without needing to read the whole stupid ending. 

You don't lose your memories. You don't lose your personal growth. Any gains you made while you were with someone, anything new that they introduced you to, or helped you to improve on, or nagged at you till you had a new better habit, you get to keep all of those. That show you used to watch together, it is still there and you still get to watch it and care about it without him/her. The bar you used to visit together is still there too. All those photos are still great pictures of both of you in interesting places. Depending on the situation of the break up, your mutual friends are still around. Even your ex still exists and is still the same person you liked before, and breaking up doesn't mean you'll never see them again unless that's what you guys want/need. 

The only thing you definitely lose at the end of a relationship is the future of that relationship. You are losing something that hasn't happened yet, something which never existed. The only thing you are losing is what you imagined someday having. It's something similar to the endowment effect: you assumed this future was yours so you assigned it a lot of value. But it never was yours, you've lost something which doesn't exist. It's still a painful experience, but realizing all of this helped me a lot. 

Additional Reading:

http://wiki.lesswrong.com/wiki/Dealing_with_a_Major_Personal_Crisis

Addendum:

Comparisons and self-esteem:

Brains are built to compare and optimize, so one difficult problem I've faced in the months after the break up was seeing my ex date other people. I had trouble because my unconscious impulse was to think "he has chosen them over me." This thinking pattern is instant, unconscious, and hard to break. And it comes with a big hit to either my self-esteem or my willingness to humanize these actual humans he is dating.

It was helpful to remind myself that the break up occurred because the relationship was broken. There is a heavy opportunity cost to dating someone with whom it can never work out or with whom you are not happy. That opportunity cost is the freedom to seek a better relationship. So I shouldn't be comparing myself to any flesh-and-blood person. He chose opportunity and freedom over me, and it's just not possible to compare yourself to a concept like that in a way that makes sense. The people that come along as a result of that choice are irrelevant.

Milestones:

It took me 2 weeks to be over this particular relationship. It took a month and a half to stop wishing I was in some relationship, and to get excited and happy about being single. It was 3 months before dating and getting to know new people started to sound like it might be fun/interesting.

Long Tail of Sadness:

During the period after the break up, for about 3 months, I had to be extra careful to have enough sleep, drink enough water, get sunshine, eat enough, and meditate. If my physical state was normal, I almost always felt great, acted normal, and rarely thought about my ex. But if I let myself get into a physical state which would normally cause a generalized bad mood, I would more often find myself ruminating on the break up. Sleep is medicine.

Analogical Reasoning and Creativity

25 jacob_cannell 01 July 2015 08:38PM

This article explores analogical reasoning and creativity, starting with a detailed investigation into IQ-test style analogy problems and how both the brain and some new artificial neural networks solve them.  Next we analyze concept map formation in the cortex and the role of the hippocampal complex in establishing novel semantic connections: the neural basis of creative insights.  From there we move into learning strategies, and finally conclude with speculations on how a grounded understanding of analogical creative reasoning could be applied towards advancing the art of rationality.


  1. Introduction
  2. Under the Hood
  3. Conceptual Abstractions and Cortical Maps
  4. The Hippocampal Association Engine
  5. Cultivate memetic heterogeneity and heterozygosity
  6. Construct and maintain clean conceptual taxonomies
  7. Conclusion

Introduction

The computer is like a bicycle for the mind.

-- Steve Jobs

The kingdom of heaven is like a mustard seed, the smallest of all seeds, but when it falls on prepared soil, it produces a large plant and becomes a shelter for the birds of the sky.

-- Jesus

Sigmoidal neural networks are like multi-layered logistic regression.

-- various

The threat of superintelligence is like a tribe of sparrows who find a large egg to hatch and raise.  It grows up into a great owl which devours them all.

-- Nick Bostrom (see this video)

Analogical reasoning is one of the key foundational mechanisms underlying human intelligence, and perhaps a key missing ingredient in machine intelligence.  For some - such as Douglas Hofstadter - analogy is the essence of cognition itself.[1] 

Steve Jobs's bicycle analogy is clever because it encapsulates the whole cybernetic idea of computers as extensions of the nervous system into a single memorable sentence using everyday terms.

A large chunk of Jesus's known sayings are parables about the 'Kingdom of Heaven': a complex enigmatic concept that he explains indirectly through various analogies, of which the mustard seed is perhaps the most memorable.  It conveys the notions of exponential/sigmoidal growth of ideas and social movements (see also the Parable of the Leaven), while also hinting at greater future purpose.

In a number of fields, including the technical, analogical reasoning is key to creativity: most new insights come from establishing mappings between concepts from other fields or domains, or from generalizing existing insights/concepts (which is closely related).  These abilities all depend on deep, wide, and well organized internal conceptual maps.

In a previous post, I presented a high level working hypothesis of the brain as a biological implementation of a universal learning machine, using various familiar computational concepts as analogies to explain brain subsystems.  In my last post, I used the conceptions of unfriendly superintelligence and value alignment as analogies for market mechanism design and the healthcare problem (and vice versa).

A clever analogy is like a sophisticated conceptual compressor that helps maximize knowledge transmission.  Coming up with good novel analogies is hard because it requires compressing a complex large body of knowledge into a succinct message that heavily exploits the recipient's existing knowledge base.  Due to the deep connections between compression, inference, intelligence and creativity, a deeper investigation of analogical reasoning is useful from a variety of angles.

Coming up with novel analogical connections is the hard task that can lead to creative insights, but to understand that process we should first start with the mechanics of recognition.

Under the Hood

You can think of the development of IQ tests as a search for simple tests which have high predictive power for g-factor in humans, while being relatively insensitive to specific domain knowledge.  That search process resulted in a number of problem categories, many of which are based on verbal and mathematical analogies.

The image to the right is an example of a simple geometric analogy problem.  As an experiment, start a timer before having a go at it.  For bonus points, attempt to introspect on your mental algorithm.

Solving this problem requires first reducing the images to simpler compact abstract representations.  The first rows of images then become something like sentences describing relations or constraints (Z is to ? as A is to B and C is to D).  The solution to the query sentence can then be found by finding the image which best satisfies the likely analogous relations.

Imagine watching a human subject (such as your previous self) solve this problem while hooked up to a future high resolution brain imaging device.  Viewed in slow motion, you would see the subject move their eyes from location to location through a series of saccades, while various vectors or mental variable maps flowed through their brain modules.  Each fixation lasts about 300ms[2], which gives enough time for one complete feedforward pass through the ventral vision stream and perhaps one backwards sweep.

The output of the ventral stream in inferior temporal cortex (TE on the bottom) results in abstract encodings which end up in working memory buffers in prefrontal cortex.  From there some sort of learned 'mental program' implements the actual analogy evaluations, probably involving several more steps in PFC, cingulate cortex, and various other cortical modules (coordinated by the Basal Ganglia and PFC). Meanwhile the frontal eye fields and various related modules are computing the next saccade decision every 300ms or so.

If we assume that visual parsing requires one fixation on each object and 50ms saccades, this suggests that solving this problem would take a typical brain a minimum of about 4 seconds (and much longer on average).  The minimum estimate assumes - probably unrealistically - that the subject can perform the analogy checks or mental rotations near instantly without any backtracking to help prime working memory.  Of course faster times are also theoretically possible - but not dramatically faster.
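
As a rough sanity check on that arithmetic, here is a back-of-the-envelope sketch (the object count of about a dozen is a hypothetical assumption and depends on the particular problem figure):

    # Back-of-the-envelope timing for visually parsing a matrix-style analogy problem.
    # Assumes ~12 objects to fixate on (hypothetical count), one 300 ms fixation per
    # object, and a 50 ms saccade between fixations.
    fixation_ms = 300
    saccade_ms = 50
    n_objects = 12  # assumed here purely for illustration

    parse_time_s = n_objects * (fixation_ms + saccade_ms) / 1000
    print(parse_time_s)  # ~4.2 seconds, before any time spent on the analogy checks themselves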

These types of visual analogy problems test a wide set of cognitive operations, which by itself can explain much of the correlation with IQ or g-factor: speed and efficiency of neural processing, working memory, module communication, etc.  

However once we lay all of that aside, there remains a core dependency on the ability for conceptual abstraction.  The mapping between these simple visual images and their compact internal encodings is ambiguous, as is the predictive relationship.  Solving these problems requires the ability to find efficient and useful abstractions - a general pattern recognition ability which we can relate to efficient encoding, representation learning, and nonlinear dimension reduction: the very essence of learning in both man and machine[3].

The machine learning perspective can help make these connections more concrete when we look into state of the art programs for IQ tests in general and analogy problems in particular.  Many of the specific problem subtypes used in IQ tests can be solved by relatively simple programs.  In 2003, Sanghi and Dowe created a simple Perl program (less than 1000 lines of code) that can solve several specific subtypes of common IQ problems[4] - but not analogies.  It scored an IQ of a little over 100, simply by excelling in a few categories and making random guesses for the remaining harder problem types.  Thus its score is highly dependent on the test's particular mix of subproblems, but that is also true for humans to some extent.

The IQ test sub-problems that remain hard for computers are those that require pattern recognition combined with analogical reasoning and/or inductive inference.  Precise mathematical inductive inference is easier for machines, whereas humans excel at natural reasoning - inference problems involving huge numbers of variables that can only be solved by scalable approximations.

For natural language tasks, neural networks have recently been used to learn vector embeddings which map words or sentences to abstract conceptual spaces encoded as vectors (typically of dimensionality 100 to 1000).  Combining word vector embeddings with some new techniques for handling multiple word senses, Wang, Gao, et al. recently trained a system that can solve typical verbal reasoning problems from IQ tests (or the GRE) at upper human level - including verbal analogies[5].

The word vector embedding is learned as a component of an ANN trained via backprop on a large corpus of text data - Wikipedia.  This particular model is rather complex: it combines a multi-sense word embedding, a local sliding window prediction objective, task-specific geometric objectives, and relational regularization constraints.  Unlike the recent crop of general linguistic modeling RNNs, this particular system doesn't model full sentence structure or longer term dependencies - as those aren't necessary for answering these specific questions.  Surprisingly all it takes to solve the verbal analogy problems typical of IQ/SAT/GRE style tests are very simple geometric operations in the word vector space - once the appropriate embedding is learned.  

As a trivial example: "Uncle is to Aunt as King is to ?" literally reduces to:

Uncle + X = Aunt, King + X = ?, and thus X = Aunt-Uncle, and:

? = King + (Aunt-Uncle).

The (Aunt-Uncle) expression encapsulates the concept of 'femaleness', which can be combined with any male version of a word to get the female version.  This is perhaps the simplest example, but more complex transformations build on this same principle.  The embedded concept space allows for easy mixing and transforms of memetic sub-features to get new concepts.
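
To make the geometry concrete, here is a minimal Python/numpy sketch of the same "a is to b as c is to ?" operation.  The tiny hand-picked 3-dimensional vectors below are purely illustrative stand-ins for a real learned embedding (which would have hundreds of dimensions and a huge vocabulary):

    import numpy as np

    # Toy "embeddings" chosen by hand so that the second dimension roughly
    # encodes femaleness; a real system learns these vectors from a corpus.
    vocab = {
        "uncle": np.array([1.0, 0.0, 0.3]),
        "aunt":  np.array([1.0, 1.0, 0.3]),
        "king":  np.array([0.0, 0.0, 0.9]),
        "queen": np.array([0.0, 1.0, 0.9]),
        "man":   np.array([0.5, 0.0, 0.0]),
        "woman": np.array([0.5, 1.0, 0.0]),
    }

    def analogy(a, b, c):
        # Solve "a is to b as c is to ?" by nearest neighbor to c + (b - a).
        target = vocab[c] + (vocab[b] - vocab[a])
        def cosine(u, v):
            return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        # Exclude the query words themselves from the candidate answers.
        candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
        return max(candidates, key=lambda w: cosine(candidates[w], target))

    print(analogy("uncle", "aunt", "king"))  # -> 'queen'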

Conceptual Abstractions and Cortical Maps

The success of these simplistic geometric transforms operating on word vector embeddings should not come as a huge surprise to one familiar with the structure of the brain.  The brain is extraordinarily slow, so it must learn to solve complex problems via extremely simple and short mental programs operating on huge wide vectors.  Humans (and now convolutional neural networks) can perform complex visual recognition tasks in just 10-15 individual computational steps (150 ms), or 'cortical clock cycles'.  The entire program that you used to solve the earlier visual analogy problem probably took on the order of a few thousand cycles (assuming it took you a few dozen seconds).  Einstein solved general relativity in - very roughly - around 10 billion low level cortical cycles.

The core principle behind word vector embeddings, convolutional neural networks, and the cortex itself is the same: learning to represent the statistical structure of the world by an efficient low complexity linear algebra program (consisting of local matrix vector products and per-element non-linearities).  The local wiring structure within each cortical module is equivalent to a matrix with sparse local connectivity, optimized heavily for wiring and computation such that semantically related concepts cluster close together.
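
As a minimal illustrative sketch of that claim (Python/numpy; the layer sizes, random weights, and ReLU nonlinearity are arbitrary stand-ins rather than a model of any particular cortical module), one such 'clock cycle' is just a matrix-vector product plus a per-element nonlinearity:

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, W, b):
        # One 'clock cycle': a matrix-vector product followed by a per-element
        # nonlinearity (a ReLU here, standing in for the biological equivalent).
        return np.maximum(0.0, W @ x + b)

    # A toy 3-step feedforward 'program' operating on a wide vector, loosely
    # analogous to a feedforward sweep through a few cortical modules.
    x = rng.normal(size=256)
    for _ in range(3):
        W = rng.normal(size=(256, 256)) * 0.05  # sparse/local in cortex; dense here for brevity
        x = layer(x, W, np.zeros(256))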

(Concept mapping the cortex, from this research page)

The image above is from the paper "A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain" by Huth et al.[5] They used fMRI to record activity across the cortex while subjects watched annotated video clips, and then used that data to find out roughly what types of concepts each voxel of cortex responds to.  It correctly identifies the FFA region as specializing in people-face things and the PPA as specializing in man-made objects and buildings.  A limitation of the above image visualizations is that they don't show response variance or breadth, so the voxel colors are especially misleading for lower level cortical regions that represent generic local features (such as gabor edges in V1).

The power of analogical reasoning depends entirely on the formation of efficient conceptual maps that carve reality at the joints.  The visual pathway learns a conceptual hierarchy that builds up objects from their parts: a series of hierarchical has-a relationships encoded in the connections between V1, V2, V4 and so on.  Meanwhile the semantic clustering within individual cortical maps allows for fast computations of is-a relationships through simple local pooling filters.  

An individual person can be encoded as a specific active subnetwork in the face region, and simple pooling over a local cluster of neurons across the face region can then compute the presence of a face in general.  Smaller local pooling filters with more specific shapes can then compute the presence of a female or male face, and so on - all starting from the full specific feature encoding.  

The pooling filter concept has been extensively studied in the lower levels of the visual system, where 'complex' cells higher up in V1 pool over 'simple' cell features: abstracting away gabor edges at specific positions to get edges OR'd over a range of positions (CNNs use this same technique to gain invariance to small local translations).  

This key semantic organization principle is used throughout the cortex: is-a relations and more general abstractions/invariances are computed through fast local intramodule connections that exploit the physical semantic clustering on the cortical surface, and more complex has-a relations and arbitrary transforms (ex: mapping between an eye centered coordinate basis and a body centered coordinate basis) are computed through intermodule connections (which also exploit physical clustering).
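
Here is a toy sketch of the pooling idea (Python/numpy; the 'simple cell' responses are made-up one-dimensional vectors rather than real gabor filter outputs): a 'complex cell' that takes a max over a local cluster of 'simple cells' produces the same code for an edge anywhere within its pooling window, which is exactly the kind of cheap local invariance/is-a computation described above.

    import numpy as np

    def pool(responses, window=3):
        # 'Complex cell' responses: max over local clusters of 'simple cell'
        # responses, giving invariance to small shifts in position.
        return np.array([responses[i:i + window].max()
                         for i in range(0, len(responses) - window + 1, window)])

    # Simple-cell responses to an edge at one position, and to the same edge
    # shifted slightly within the same pooling window.
    edge_at_pos_0 = np.array([1., 0., 0., 0., 0., 0., 0., 0., 0.])
    edge_at_pos_1 = np.array([0., 1., 0., 0., 0., 0., 0., 0., 0.])

    print(pool(edge_at_pos_0))  # [1. 0. 0.]
    print(pool(edge_at_pos_1))  # [1. 0. 0.]  -- same pooled code despite the shift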

 

The Hippocampal Association Engine

The Hippocampus is a tubular, seahorse-shaped module located in the center of the brain, to the exterior side of the central structures (basal ganglia, thalamus).  It is the brain's associative database and search engine, responsible for storing, retrieving, and consolidating patterns and declarative memories (those which we are consciously aware of and can verbally declare) over long time scales beyond the reach of short term memory in the cortex itself.

A human (or animal) unfortunate enough to suffer complete loss of hippocampal functionality basically loses the ability to form and consolidate new long term episodic and semantic memories.  They also lose more recent memories that have not yet been consolidated down the cortical hierarchy.  In rats and humans, problems in the hippocampal complex can also lead to spatial navigation impairments (forgetting current location or recent path), as the HC is used to compute and retrieve spatial map information associated with current sensory impressions (a specific instance of the HC's more general function).

In terms of module connectivity, the hippocampal complex sits on top of the cortical sensory hierarchy.  It receives inputs from a number of cortical modules, largely in the nearby associative cortex, which collectively provide a summary of the recent sensory stream and overall brain state.  The HC then has several sub circuits which further compress the mental summary into something like a compact key which is then sent into a hetero-auto-associative memory circuit to find suitable matches.  

If a good match is found, it can then cause retrieval: reactivation of the cortical subnetworks that originally formed the memory.  As the hippocampus can't know for sure which memories will be useful in the future, it tends to store everything with emphasis on the recent, perhaps as a sort of slow exponentially fading stream.  Each memory retrieval involves a new decoding and encoding to drive learning in the cortex through distillation/consolidation/retraining (this also helps prevent ontological crisis).  The amygdala is a little cap on the edge of the hippocampus which connects to the various emotion subsystems and helps estimate the importance of current memories for prioritization in the HC.
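
As a loose toy sketch of that associative lookup (Python/numpy; the 'keys' and stored patterns are random vectors, and cosine similarity stands in for whatever matching the real circuit performs), the core operation is: compress the current state into a compact cue, compare it against stored keys, and reactivate the best match only if the match is good enough.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy associative store: each stored 'memory' is a (key, pattern) pair.
    keys = rng.normal(size=(100, 64))       # compact cues for 100 stored memories
    patterns = rng.normal(size=(100, 512))  # the cortical patterns they reactivate

    def retrieve(cue, threshold=0.3):
        # Find the stored key most similar to the cue; reactivate its pattern
        # only if the match is strong enough (otherwise no retrieval).
        sims = keys @ cue / (np.linalg.norm(keys, axis=1) * np.linalg.norm(cue))
        best = int(np.argmax(sims))
        return patterns[best] if sims[best] >= threshold else None

    # A cue that partially overlaps one stored key should retrieve that memory.
    cue = keys[17] + 0.5 * rng.normal(size=64)
    match = retrieve(cue)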

A very strong retrieval of an episodic memory causes the inner experience of reliving the past (or imagining the future), but more typical weaker retrievals (those which load information into the cortex without overriding much of the existing context) are a crucial component in general higher cognition.

In short, the computation that the HC performs is that of dynamic association between the current mental pattern/state loaded into short term memory across the cortex and some previous mental pattern/state.  This is the very essence of creative insight.

Associative recall can be viewed as a type of pattern recognition with the attendant familiar tradeoffs between precision/recall or sensitivity/specificity.  At the extreme of low recall high precision the network is very conservative and risk averse: it only returns high confidence associations, maximizing precision at the expense of recall (few associations found, many potentially useful matches are lost).  At the other extreme is the over-confident crazy network which maximizes recall at the expense of precision (many associations are made, most of which are poor).  This can also be viewed in terms of the exploitation vs exploration tradeoff.
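
A minimal toy sketch of that tradeoff (Python/numpy; the similarity scores and the 50/950 split between related and unrelated stored patterns are made-up numbers): moving a single retrieval threshold trades precision against recall.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy match scores between a retrieval cue and 1000 stored patterns,
    # 50 of which are genuinely related to the cue.
    related = rng.normal(loc=0.6, scale=0.2, size=50)
    unrelated = rng.normal(loc=0.0, scale=0.2, size=950)
    scores = np.concatenate([related, unrelated])
    labels = np.concatenate([np.ones(50), np.zeros(950)])

    for threshold in (0.2, 0.5, 0.8):
        retrieved = scores >= threshold
        precision = labels[retrieved].mean() if retrieved.any() else float("nan")
        recall = labels[retrieved].sum() / labels.sum()
        # Low threshold: high recall, low precision (the 'crazy' network).
        # High threshold: high precision, low recall (the conservative network).
        print(f"threshold {threshold}: precision {precision:.2f}, recall {recall:.2f}")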

This general analogy or framework - although oversimplified - also provides a useful perspective for understanding both schizotypy and hallucinogenic drugs.  There is a large body of accumulated evidence in the form of use cases or trip reports, with a general consensus that hallucinogens can provide occasional flashes of creative insight at the expense of pushing one farther towards madness.

From a skeptical stance, using hallucinogenic drugs in an attempt to improve the mind is like doing surgery with butter-knives.  Nonetheless, careful exploration of the sanity border can help one understand more on how the mind works from the inside. 

Cannabis in particular is believed - by many of its users - to enhance creativity via occasional flashes of insight.  Most of its main mental effects (time dilation, random associations, memory impairment, spatial navigation impairment, etc.) appear to involve the hippocampus.  We could explain much of this as a general shift in the precision/recall tradeoff to make the hippocampus less selective.  Mainly that makes the HC just work less effectively, but it also can occasionally lead to atypical creative insights, and appears to elevate some related low level measures such as schizotypy and divergent thinking[7].  The tradeoff is one must be willing to first sift through a pile of low value random associations.

 

Cultivate memetic heterogeneity and heterozygosity

Fluid intelligence is obviously important, but in many endeavors net creativity is even more important.  

Of all the components underlying creativity, improving the efficiency of learning, the quality of knowledge learned, and the organizational efficiency of one's internal cortical maps are probably the most profitable dimensions of improvement: the low hanging fruits.

Our learning process is largely automatic and subconscious: we do not need to teach children how to perceive the world.  But this just means it takes some extra work to analyze the underlying machinery and understand how to best utilize it.

Over long time scales humanity has learned a great deal about how to improve on natural innate learning: education is more or less learning-engineering.  The first obvious lesson from education is the need for curriculum: acquiring concepts in stages of escalating complexity and order-dependency (which of course is now increasingly a thing in machine learning too).

In most competitive creative domains, formal education can only train you up to the starting gate.  This of course is to be expected, for the creation of novel and useful ideas requires uncommon insights.

Memetic evolution is similar to genetic evolution in that novelty comes more from recombination than mutation.  We can draw some additional practical lessons from this analogy: cultivate memetic heterogeneity and heterozygosity.

The first part - cultivate memetic heterogeneity - should be straightforward, but it is worth examining some examples.  If you possess only the same baseline memetic population as your peers, then the chances of your mind evolving truly novel creative combinations are substantially diminished.  You have no edge - your insights are likely to be common.

To illustrate this point, let us consider a few examples:

Geoffrey Hinton is one of the most successful researchers in machine learning - which itself is a diverse field.  He first formally studied psychology, and then artificial intelligence.  His roughly 200 research publications integrate ideas from statistics, neuroscience and physics.  His work on Boltzmann machines and variants in particular imports concepts from statistical physics whole cloth.

Before founding DeepMind (now one of the premier DL research groups in the world), Demis Hassabis studied the brain and hippocampus in particular at the Gatsby Computational Neuroscience Unit, and before that he worked for years in the video game industry after studying computer science.

Before the Annus Mirabilis, Einstein worked at the patent office for four years, during which time he was exposed to a large variety of ideas relating to the transmission of electric signals and electrical-mechanical synchronization of time, core concepts which show up in his later thought experiments.[8]

Creative people also tend to have a diverse social circle of creative friends to share and exchange ideas across fields.

Genetic heterozygosity is the quality of having two different alleles at a gene locus; summed over the organism this leads to a different but related concept of diversity.

Within developing fields of knowledge we often find key questions or subdomains for which there are multiple competing hypotheses or approaches.  Good old fashioned AI vs Connectionism, Ray tracing vs Rasterization, and so on.

In these scenarios, it is almost always better to understand both viewpoints or knowledge clusters - at least to some degree.  Each cluster is likely to have some unique ideas which are useful for understanding the greater truth or at the very least for later recombination.  

This then is memetic heterozygosity.  It invokes the Jain version of the blind men and the elephant.

Construct and maintain clean conceptual taxonomies

Formal education has developed various methods and rituals which have been found to be effective through a long process of experimentation.  Some of these techniques are still quite useful for autodidacts.

When one sets out to learn, it is best to start with a clear goal.  The goal of high school is just to provide a generalist background.  In college one then chooses a major suitable for a particular goal cluster: do you want to become a computer programmer? a physicist? a biologist? etc.  A significant amount of work then goes into structuring a learning curriculum most suitable for these goal types.

Once out of the educational system we all end up creating our own curriculums, whether intentionally or not.  It can be helpful to think strategically as if planning a curriculum to suit one's longer term goals.

For example, about four years ago I decided to learn how the brain works and how AGI could be built in particular.  When starting on this journey, I had a background mainly in computer graphics, simulation, and game related programming.  I decided to focus about equally on mainstream AI, machine learning, computational neuroscience, and the AGI literature.  I quickly discovered that my statistics background was a little weak, so I had to shore that up.  Doing it all over again I may have started with a statistics book.  Instead I started with AI: a modern approach (of course I mostly learn from the online research literature).

Learning works best when it is applied.  Education exploits this principle and it is just as important for autodidactic learning.  The best way to learn many math or programming concepts is learning by doing, where you create reasonable subtasks or subgoals for yourself along the way.  

For general knowledge, application can take the form of writing about what you have learned.  Academics are doing this all the time as they write papers and textbooks, but the same idea applies outside of academia.

In particular a good exercise is to imagine that you need to communicate all that you have learned about the domain.  Imagine that you are writing a textbook or survey paper for example, and then you need to compress all that knowledge into a summary chapter or paper, and then all of that again down into an abstract.  Then actually do write up a summary - at least in the form of a blog post (even if you don't show it to anybody).

The same ideas apply on some level to giving oral presentations or just discussing what you have learned informally - all of which are also features of the academic learning environment.

Early on, your first attempts to distill what you have learned into written form will be ... poor.  But going through this process forces you to compress what you have learned, and thus it helps encourage the formation of well structured concept maps in the cortex.

A well structured conceptual map can be thought of as a memetic taxonomy.  The point of a taxonomy is to organize all the invariances and 'is-a' relationships between objects so that higher level inferences and transformations can generalize well across categories.  

Explicitly asking questions which probe the conceptual taxonomy can help force said structure to take form.  For example in computer science/programming the question: "what is the greater generalization of this algorithm?" is a powerful tool.

In some domains, it may even be possible to semi-automate or at least guide the creative process using a structured method.

For example consider sci-fi/fantasy genre novels.  Many of the great works have a general analogical structure based on real history ported over into a more exotic setting.  The Foundation series uses the model of the fall of the Roman Empire.  Dune is like Lawrence of Arabia in space.  Stranger in a Strange Land is like the Mormon version of Jesus the space alien, but from Mars instead of Kolob.  A Song of Ice and Fire is partly a fantasy port of the Wars of the Roses.  And so on.

One could probably find some new ideas for novels just by creating and exploring a sufficiently large table of historical events and figures and comparing it to a map of the currently colonized space of ideas.  Obviously having an idea for a novel is just the tiniest tip of the iceberg in the process, but a semi-formal method is interesting nonetheless for brainstorming and applies across domains (others have proposed similar techniques for generating startup ideas, for example).

Conclusion

We are born equipped with sophisticated learning machinery and yet lack innate knowledge on how to use it effectively - for this too we must learn.

The greatest constraint on creative ability is the quality of conceptual maps in the cortex.  Understanding how these maps form doesn't automagically increase creativity, but it does help ground our intuitions and knowledge about learning, and could pave the way for future improved techniques.

In the meantime: cultivate memetic heterogeneity and heterozygosity, create a learning strategy, develop and test your conceptual taxonomy, continuously compress what you learn by writing and summarizing, and find ways to apply what you learn as you go.

MIRI Fundraiser: Why now matters

24 So8res 24 July 2015 10:38PM

Our summer fundraiser is ongoing. In the meantime, we're writing a number of blog posts to explain what we're doing and why, and to answer a number of common questions. Previous posts in the series are listed at the above link.


I'm often asked whether donations to MIRI now are more important than donations later. Allow me to deliver an emphatic yes: I currently expect that donations to MIRI today are worth much more than donations to MIRI in five years. As things stand, I would very likely take $10M today over $20M in five years.

That's a bold statement, and there are a few different reasons for this. First and foremost, there is a decent chance that some very big funders will start entering the AI alignment field over the course of the next five years. It looks like the NSF may start to fund AI safety research, and Stuart Russell has already received some money from DARPA to work on value alignment. It's quite possible that in a few years' time significant public funding will be flowing into this field.

(It's also quite possible that it won't, or that the funding will go to all the wrong places, as was the case with funding for nanotechnology. But if I had to bet, I would bet that it's going to be much easier to find funding for AI alignment research in five years' time).

In other words, the funding bottleneck is loosening — but it isn't loose yet.

We don't presently have the funding to grow as fast as we could over the coming months, or to run all the important research programs we have planned. At our current funding level, the research team can grow at a steady pace — but we could get much more done over the course of the next few years if we had the money to grow as fast as is healthy.

Which brings me to the second reason why funding now is probably much more important than funding later: because growth now is much more valuable than growth later.

There's an idea picking up traction in the field of AI: instead of focusing only on increasing the capabilities of intelligent systems, it is important to also ensure that we know how to build beneficial intelligent systems. Support is growing for a new paradigm within AI that seriously considers the long-term effects of research programs, rather than just the immediate effects. Years down the line, these ideas may seem obvious, and the AI community's response to these challenges may be in full swing. Right now, however, there is relatively little consensus on how to approach these issues — which leaves room for researchers today to help determine the field's future direction.

People at MIRI have been thinking about these problems for a long time, and that puts us in an unusually good position to influence the field of AI and ensure that some of the growing concern is directed towards long-term issues in addition to shorter-term ones. We can, for example, help avert a scenario where all the attention and interest generated by Musk, Bostrom, and others gets channeled into short-term projects (e.g., making drones and driverless cars safer) without any consideration for long-term risks that are more vague and less well-understood.

It's likely that MIRI will scale up substantially at some point; but if that process begins in 2018 rather than 2015, it is plausible that we will have already missed out on a number of big opportunities.

The alignment research program within AI is just now getting started in earnest, and it may even be funding-saturated in a few years' time. But it's nowhere near funding-saturated today, and waiting five or ten years to begin seriously ramping up our growth would likely give us far fewer opportunities to shape the methodology and research agenda within this new AI paradigm. The projects MIRI takes on today can make a big difference years down the line, and supporting us today will drastically affect how much we can do quickly. Now matters.

I encourage you to donate to our ongoing fundraiser if you'd like to help us grow!


This post is cross-posted from the MIRI blog.

Steelmanning AI risk critiques

23 Stuart_Armstrong 23 July 2015 10:01AM

At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.

EDIT: Thanks for all the contributions! Keep them coming...

There is no such thing as strength: a parody

23 ZoltanBerrigomo 05 July 2015 11:44PM

The concept of strength is ubiquitous in our culture. It is commonplace to hear one person described as "stronger" or "weaker" than another. And yet the notion of strength is a pernicious myth which reinforces many of our social ills and should be abandoned wholesale. 

 

1. Just what is strength, exactly? Few of the people who use the word can provide an exact definition. 

On first try, many people would say that  strength is the ability to lift heavy objects. But this completely ignores the strength necessary to push or pull on objects; to run long distances without exhausting oneself; to throw objects with great speed; to balance oneself on a tightrope, and so forth. 

When this is pointed out, people often try to incorporate all of these aspects into the definition of strength, with a result that is long, unwieldy, ad-hoc, and still missing some acts commonly considered to be manifestations of strength. 

 

Attempts to solve the problem by referring to the supposed cause of strength -- for example, by saying that strength is just a measure of  muscle mass -- do not help. A person with a large amount of muscle mass may be quite weak on any of the conventional measures of strength if, for example, they cannot lift objects due to injuries or illness. 

 

 

2. The concept of strength has an ugly history. Indeed, strength is implicated in both sexism and racism. Women have long been held to be the "weaker sex," consequently needing protection from the "stronger" males, resulting in centuries of structural oppression. Myths about racialist differences in strength have informed pernicious stereotypes and buttressed inequality.

 

3. There is no consistent way of grouping people into strong and weak. Indeed, what are we to make of the fact that some people are good at running but bad at lifting and vice versa? 

 

One might think that we can talk about different strengths - the strength in one's arms and one's legs for example. But what, then, should we make of the person who is good at arm-wrestling but poor at lifting? Arms can move in many ways; what will we make of someone who can move arms one way with great force, but not another? It is not hard to see that potential concepts such as "arm strength" or "leg strength" are problematic as well. 

 

4. When people are grouped into strong and weak according to any number of criteria, the amount of variation within each group is far larger than the amount of variation between groups. 

 

5. Strength is a social construct. Thus no one is inherently weak or strong. Scientifically, anthropologically, we are only human.

 

6. Scientists are rapidly starting to understand the illusory nature of strength, and one needs only to glance at any of the popular scientific periodicals to encounter refutations of this notion. 

 

In one experiment, respondents from two different cultures were asked to lift a heavy object as much as they could. In one of the cultures, the respondents lifted the object higher. Furthermore, the manner in which the respondents attempted to lift the object depended on the culture. This shows that tests of strength cannot be considered culture-free and that there may be no such thing as a universal test of strength.

 

7. Indeed, to even ask "what is strength?" is to assume that there is a quality, or essence, of humans with essential, immutable qualities. Asking the question begins the process of reifying strength... (see page 22 here).

 

---------------------------------------

 

For a serious statement of what the point of this was supposed to be, see this comment

 

In praise of gullibility?

23 ahbwramc 18 June 2015 04:52AM

I was recently re-reading a piece by Yvain/Scott Alexander called Epistemic Learned Helplessness. It's a very insightful post, as is typical for Scott, and I recommend giving it a read if you haven't already. In it he writes:

When I was young I used to read pseudohistory books; Immanuel Velikovsky's Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn't believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable.

He goes on to conclude that the skill of taking ideas seriously - often considered one of the most important traits a rationalist can have - is a dangerous one. After all, it's very easy for arguments to sound convincing even when they're not, and if you're too easily swayed by argument you can end up with some very absurd beliefs (like that Venus is a comet, say).

This post really resonated with me. I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth. And my reaction in those situations was similar to his: eventually, after going through the endless chain of rebuttals and counter-rebuttals, changing my mind at each turn, I was forced to throw up my hands and admit that I probably wasn't going to be able to determine the truth of the matter - at least, not without spending a lot more time investigating the different claims than I was willing to. And so in many cases I ended up adopting a sort of semi-principled stance of agnosticism: unless it was a really really important question (in which case I was sort of obligated to do the hard work of investigating the matter to actually figure out the truth), I would just say I don't know when asked for my opinion.

[Non-exhaustive list of areas in which I am currently epistemically helpless: geopolitics (in particular the Israel/Palestine situation), anthropics, nutrition science, population ethics]

All of which is to say: I think Scott is basically right here, in many cases we shouldn't have too strong of an opinion on complicated matters. But when I re-read the piece recently I was struck by the fact that his whole argument could be summed up much more succinctly (albeit much more pithily) as:

"Don't be gullible."

Huh. Sounds a lot more obvious that way.

Now, don't get me wrong: this is still good advice. I think people should endeavour to not be gullible if at all possible. But it makes you wonder: why did Scott feel the need to write a post denouncing gullibility? After all, most people kind of already think being gullible is bad - who exactly is he arguing against here?

Well, recall that he wrote the post in response to the notion that people should believe arguments and take ideas seriously. These sound like good, LW-approved ideas, but note that unless you're already exceptionally smart or exceptionally well-informed, believing arguments and taking ideas seriously is tantamount to...well, to being gullible. In fact, you could probably think of gullibility as a kind of extreme and pathological form of lightness; a willingness to be swept away by the winds of evidence, no matter how strong (or weak) they may be.

There seems to be some tension here. On the one hand we have an intuitive belief that gullibility is bad; that the proper response to any new claim should be skepticism. But on the other hand we also have some epistemic norms here at LW that are - well, maybe they don't endorse being gullible, but they don't exactly not endorse it either. I'd say the LW memeplex is at least mildly friendly towards the notion that one should believe conclusions that come from convincing-sounding arguments, even if they seem absurd. A core tenet of LW is that we change our mind too little, not too much, and we're certainly all in favour of lightness as a virtue.

Anyway, I thought about this tension for a while and came to the conclusion that I had probably just lost sight of my purpose. The goal of (epistemic) rationality isn't to not be gullible or not be skeptical - the goal is to form correct beliefs, full stop. Terms like gullibility and skepticism are useful to the extent that people tend to be systematically overly accepting or dismissive of new arguments - individual beliefs themselves are simply either right or wrong. So, for example, if we do studies and find out that people tend to accept new ideas too easily on average, then we can write posts explaining why we should all be less gullible, and give tips on how to accomplish this. And if on the other hand it turns out that people actually accept far too few new ideas on average, then we can start talking about how we're all much too skeptical and how we can combat that. But in the end, in terms of becoming less wrong, there's no sense in which gullibility would be intrinsically better or worse than skepticism - they're both just words we use to describe deviations from the ideal, which is accepting only true ideas and rejecting only false ones.

This answer basically wrapped the matter up to my satisfaction, and resolved the sense of tension I was feeling. But afterwards I was left with an additional interesting thought: might gullibility be, if not a desirable end point, then an easier starting point on the path to rationality?

That is: no one should aspire to be gullible, obviously. That would be aspiring towards imperfection. But if you were setting out on a journey to become more rational, and you were forced to choose between starting off too gullible or too skeptical, could gullibility be an easier initial condition?

I think it might be. It strikes me that if you start off too gullible you begin with an important skill: you already know how to change your mind. In fact, changing your mind is in some ways your default setting if you're gullible. And considering that like half the freakin sequences were devoted to learning how to actually change your mind, starting off with some practice in that department could be a very good thing.

I consider myself to be...well, maybe not more gullible than average in absolute terms - I don't get sucked into pyramid scams or send money to Nigerian princes or anything like that. But I'm probably more gullible than average for my intelligence level. There's an old discussion post I wrote a few years back that serves as a perfect demonstration of this (I won't link to it out of embarrassment, but I'm sure you could find it if you looked). And again, this isn't a good thing - to the extent that I'm overly gullible, I aspire to become less gullible (Tsuyoku Naritai!). I'm not trying to excuse any of my past behaviour. But when I look back on my still-ongoing journey towards rationality, I can see that my ability to abandon old ideas at the (relative) drop of a hat has been tremendously useful so far, and I do attribute that ability in part to years of practice at...well, at believing things that people told me, and sometimes gullibly believing things that people told me. Call it epistemic deferentiality, or something - the tacit belief that other people know better than you (especially if they're speaking confidently) and that you should listen to them. It's certainly not a character trait you're going to want to keep as a rationalist, and I'm still trying to do what I can to get rid of it - but as a starting point? You could do worse I think.

Now, I don't pretend that the above is anything more than a plausibility argument, and maybe not a strong one at that. For one I'm not sure how well this idea carves reality at its joints - after all, gullibility isn't quite the same thing as lightness, even if they're closely related. For another, if the above were true, you would probably expect LWers to be more gullible than average. But that doesn't seem quite right - while LW is admirably willing to engage with new ideas, no matter how absurd they might seem, the default attitude towards a new idea on this site is still one of intense skepticism. Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way - but it doesn't really sound like the behaviour of a website full of gullible people.

(Of course, on the other hand it could be that LWers really are more gullible than average, but they're just smart enough to compensate for it)

Anyway, I'm not sure what to make of this idea, but it seemed interesting and worth a discussion post at least. I'm curious to hear what people think: does any of the above ring true to you? How helpful do you think gullibility is, if it is at all? Can you be "light" without being gullible? And for the sake of collecting information: do you consider yourself to be more or less gullible than average for someone of your intelligence level?

A Proposal for Defeating Moloch in the Prison Industrial Complex

23 lululu 02 June 2015 10:03PM

Summary

I'd like to increase the well-being of those in the justice system while simultaneously reducing crime. I'm missing something here but I'm not sure what. I'm thinking this may be a worse idea than I originally thought based on comment feedback, though I'm still not 100% sure why this is the case.

Current State

While the prison system may not constitute an existential threat, at this moment more than 2,266,000 adults are incarcerated in the US alone, and I expect that being in prison greatly decreases QALYs for those incarcerated, that further QALYs are lost to victims of crime, family members of the incarcerated, and through the continuing effects of institutionalization and PTSD from sentences served in the current system, not to mention the brainpower and man-hours lost to any productive use.


If you haven't read these Meditations on Moloch, I highly recommend it. It’s long though, so the executive summary is: Moloch is the personification of the forces of competition which create perverse incentives, a "race to the bottom" type situation where all human values are discarded in an effort to survive. This can be solved with better coordination, but it is very hard to coordinate when perverse incentives also penalize the coordinators and reward dissenters. The prison industrial complex is an example of these perverse incentives. No one thinks that the current system is ideal but incentives prevent positive change and increase absolute unhappiness.

 

  • Politicians compete for electability. Convicts can’t vote, prisons make campaign contributions and jobs, and appearing “tough on crime” appeals to a large portion of the voter base.
  • Jails compete for money: the more prisoners they house, the more they are paid and the longer they can continue to exist. This incentive is strong for public prisons and doubly strong for private prisons.
  • Police compete for bonuses and promotions, both of which are given as rewards to cops who bring in and convict more criminals.
  • Many of the inmates themselves are motivated to commit criminal acts by the small number of non-criminal opportunities available to them for financial success. After becoming a criminal, this number of opportunities is further narrowed by background checks.

 

The incentives have come far out of line with human values. What can be done to bring incentives back in alignment with the common good?

My Proposal

Using a model that predicts recidivism at sixty days, one year, three years, and five years, predict the expected recidivism rate for the inmates at each individual prison given average recidivism. Sixty days after release, if recidivism is below the predicted rate, the prison gets a small sum of money equaling 25% of the predicted cost to the state of dealing with the predicted recidivism (including lawyer fees, court fees, and jailing costs). This is repeated at one year, three years, and five years.


The statistical models would be readjusted with current data every year, so if this model causes recidivism to drop across the board, jails would be competing against an ever higher standard, competing to create the most innovative and groundbreaking counseling, job-skills, and restorative methods so that they don’t lose their edge against other prisons competing for the same money. As it becomes harder and harder to edge out the competition’s advanced methods, and as the prison population is reduced, additional incentives could come by ending state contracts with the bottom 10% of prisons, or with any prisons who have recidivism rates larger than expected for multiple years in a row.
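
To make the incentive concrete, here is a minimal sketch of the bonus calculation at a single milestone, as I understand the proposal. Every number below (the predicted rate, the observed rate, and the per-recidivist cost to the state) is a made-up placeholder:

    # A sketch of the proposed milestone bonus, with entirely hypothetical numbers.
    released_inmates = 400
    predicted_one_year_rate = 0.30   # from the statistical model (hypothetical)
    actual_one_year_rate = 0.24      # observed at the one-year check (hypothetical)
    cost_per_recidivist = 40_000     # dollars: lawyer, court, and jailing costs (hypothetical)

    predicted_cost = released_inmates * predicted_one_year_rate * cost_per_recidivist
    if actual_one_year_rate < predicted_one_year_rate:
        bonus = 0.25 * predicted_cost   # paid at this milestone, per the proposal
    else:
        bonus = 0.0

    print(f"predicted cost of recidivism: ${predicted_cost:,.0f}")
    print(f"bonus paid to prison:         ${bonus:,.0f}")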

 

Note that this proposal makes no policy recommendations or value judgement besides changing the incentive structure. I have opinions on the sanity of certain laws and policies and the private prison system itself, but this specific proposal does not. Ideally, this will reduce some amount of partisan bickering.


Using this added success incentive, here are the modified motivations of each of the major actors.

 

  • Politicians compete for electability. Convicts still can’t vote, prisons make campaign contributions, and appearing “tough on crime” still appeals to a large portion of the voter base. The politician can promise a reduction in crime without making any specific policy or program recommendations, thus shielding themselves from criticism of being soft on crime that might come from endorsing restorative justice or psychological counselling, for instance. They get to claim success for programs that other people are in charge of administrating and designing. Further, they are saving 75% of the money predicted to have been spent administrating criminals. Prisons love getting more money for doing the same amount of work, so campaign contributions would stay stable or go up for politicians who support reduced recidivism bonuses.
  • Prisons compete for money. It costs the state a huge amount of money to house prisoners, and the net profit from housing a prisoner is small after paying for food, clothing, supervision, space, repairs, entertainment, etc. An additional 25% of that cost, with no additional expenditures, is very attractive. I predict that some amount of book-cooking will happen, but that the gains possible with book cooking are small compared to gains from actual improvements in their prison program. Small differences in prisons have potential to make large differences in post-prison behavior. I expect having an on-staff CBT psychiatrist would make a big difference; an addiction specialist would as well. A new career field is born: expert consultants who travel from private prison to private prison and make recommendations for what changes would reduce recidivism at the lowest possible cost.
  • Police and judges retain the same incentives as before, for bonuses, prestige, and promotions. This is good for the system, because if their incentives were not running counter to those of the prisons and jails, then there would be a lot of pressure to cook the books by looking the other way on criminals until after the 60 day/1 year/5 year mark. I predict that there will be a couple scandals of cops found to be in league with prisons for a cut of the bonus, but that this method isn’t very profitable. For one thing, an entire police force would have to be corrupt and for another, criminals are mobile and can commit crimes in other precincts. Police are also motivated to work in safer areas, so the general program of rewarding reduced recidivism is to their advantage.

 

Roadmap

If it could be shown that a model for predicting recidivism is highly predictive, we will need to create another model to predict how much the government could save by switching to a bonus system, and what reduction of crime could be expected.


Halfway houses in Pennsylvania are already receiving non-recidivism bonuses. Is a pilot project using this pricing structure feasible?

Giving What We Can needs your help!

23 RobertWiblin 29 May 2015 04:30PM

As you probably know, Giving What We Can exists to move donations to the charities that can most effectively help others. Our members take a pledge to give 10% of their incomes for the rest of their life to the most impactful charities. Along with other extensive resources for donors such as GiveWell and OpenPhil, we produce and communicate, in an accessible way, research to help members determine where their money will do the most good. We also impress upon members and the general public the vast differences between the best charities and the rest.

Many LessWrongers are members or supporters, including of course the author of Slate Star Codex. We also recently changed our pledge so that people could give to whichever cause they felt best helped others, such as existential risk reduction or life extension, depending on their views. Many new members now choose to do this.

What you might not know is that 2014 was a fantastic year for us - our rate of membership growth more than tripled! Amazingly, our 1066 members have now pledged over $422 million, and already given over $2 million to our top rated charities. We've accomplished this on a total budget of just $400,000 since we were founded. This new rapid growth is thanks to the many lessons we have learned by trial and error, and the hard work of our team of staff and volunteers.

To make it to the end of the year we need to raise just another £110,000. Most charities have a budget in the millions or tens of millions of pounds and we do what we do with a fraction of that.

We want to raise the money as quickly as possible, so that our staff can stop focusing on fundraising (which takes up a considerable amount of energy), and get back to the job of growing our membership.

Some of our supporters are willing to sweeten the deal as well: if you haven't given us more than £1,000 before, then they'll match 1:1 a gift between £1,000 and £5,000.

You can give now or email me (robert dot wiblin at centreforeffectivealtruism dot org) for our bank details. Info on tax-deductible giving from the USA and non-UK Europe is also available on our website.

What we are doing this year

The second half of this year is looking like it will be a very exciting one for us. Four books about effective altruism are being released this year, including one by our own trustee William MacAskill, which will be heavily promoted in the US and UK. The Effective Altruism Summit is also turning into 'EA Global', with events at Google Headquarters in San Francisco, Oxford University and Melbourne, headlined by Elon Musk.

Tens, if not hundreds, of thousands of people will be finding out about our philosophy of effective giving for the first time.

To do these opportunities justice, Giving What We Can needs to expand its staff to support its rapidly growing membership and local chapters, and to ensure we properly follow up with all prospective members. We want to take people who are starting to think about how they can best make the world a better place, encourage them to make a serious long-term commitment to effective giving, and help them discover where their money can do the most good.

Looking back at our experience over the last five years, we estimate that each $1 given to Giving What We Can has already moved $6, and will likely end up moving between $60 and $100, to the most effective charities in the world. (These are time-discounted, counterfactual donations, only to charities we regard very highly. Check out this report for more details.)

This represents a great return on investment, and I would be very sad if we couldn't take these opportunities just because we lacked the necessary funding.

Our marginal hire

If we don't raise this money we will not have the resources to keep on our current Director of Communications. He has invaluable experience as a Communications Director for several high-profile Australian politicians, which has given him skills in web development, public relations, graphic design, public speaking and social media. Among the things he has already achieved in his three months here are: automating the book-keeping on our Trust (saving huge amounts of time and minimising errors), greatly improving our published materials, including our fundraising prospectus, and writing a press release and planning a media push to capitalise on our reaching 1,000 members and Peter Singer’s book release in the UK.

His wide variety of skills means that there are a large number of projects he would be capable of doing that would increase our member growth, and we are keen for him to test a number of these. His first project would be to optimise our website to make the most of the increased attention effective altruism will be generating over the summer, and turn that attention into people actually donating 10% of their incomes to the most effective causes. In the past we have had trouble finding someone with such a broad set of crucial skills; combined with how swiftly and well he has integrated into our team, it would be a massive loss to have to let him go and then need to recruit a replacement later down the line.

As I wrote earlier you can give now or email me (robert dot wiblin at centreforeffectivealtruism dot org) for bank details or personalised advice on how to give best. If you need tax deductibility in another country check these pages on the USA and non-UK Europe.

I'm happy to take questions here or by email!

Examples of AIs behaving badly

22 Stuart_Armstrong 16 July 2015 10:01AM

Some past examples to motivate thought on how AIs could misbehave:

An algorithm pauses the game to never lose at Tetris.

In "Learning to Drive a Bicycle using Reinforcement Learning and Shaping", Randlov and Alstrom, describes a system that learns to ride a simulated bicycle to a particular location. To speed up learning, they provided positive rewards whenever the agent made progress towards the goal. The agent learned to ride in tiny circles near the start state because no penalty was incurred from riding away from the goal.

A similar problem occurred with a soccer-playing robot being trained by David Andre and Astro Teller (personal communication to Stuart Russell). Because possession in soccer is important, they provided a reward for touching the ball. The agent learned a policy whereby it remained next to the ball and “vibrated,” touching the ball as frequently as possible. 

Algorithms claiming credit in Eurisko: Sometimes a "mutant" heuristic appears that does little more than continually cause itself to be triggered, creating an infinite loop within the program. During one run, Lenat noticed that the number in the Worth slot of one newly discovered heuristic kept rising, indicating that it had made a particularly valuable find. As it turned out, the heuristic performed no useful function. It simply examined the pool of new concepts, located those with the highest Worth values, and inserted its name in their My Creator slots.

A Federal Judge on Biases in the Criminal Justice System.

22 Costanza 03 July 2015 03:17AM

A well-known American federal appellate judge, Alex Kozinski, has written a commentary on systemic biases and institutional myths in the criminal justice system.

The basic thrust of his criticism will be familiar to readers of the sequences and rationalists generally: lots about cognitive biases, but some specific criticisms of fingerprint and DNA evidence as well. Still, it's interesting that a prominent federal judge -- the youngest when appointed, and later chief judge of the Ninth Circuit -- would treat some sacred cows of the judiciary so ruthlessly.

This is specifically a criticism of U.S. criminal justice, but, ceteris paribus, much of it applies not only to other areas of U.S. law, but to legal practices throughout the world as well.

Six Ways To Get Along With People Who Are Totally Wrong*

22 RobertWiblin 27 May 2015 12:37PM

This is a re-post of something I wrote for the Effective Altruism Forum. Though most of the ideas have been raised here before, perhaps many times, I thought it might still be of interest as a brief presentation of them all!

--

* The people you think are totally wrong may not actually be totally wrong.

Effective altruism is a ‘broad tent’

As is obvious to anyone who has looked around here, effective altruism is based more on a shared interest in the question 'how can you do the most good?' than on a shared view of the answer. We all have friends who support:

  • A wide range of different cause areas.
  • A wide range of different approaches to those causes.
  • Different values and moral philosophies regarding what it means to 'help others'.
  • Different political views on how best to achieve even shared goals. On economic policy for example, we have people covering the full range from far left to far right. In the CEA offices we have voters for every major political party, and some smaller ones too.

Looking beyond just stated beliefs, we also have people with a wide range of temperaments, from highly argumentative, confident and outspoken to cautious, idiosyncratic and humble.

Our wide range of views could cause problems

There is a popular saying that 'opposites attract'. But unfortunately, social scientists have found precisely the opposite to be true: birds of a feather do in fact flock together.

One of the drivers of this phenomenon is that people who are different are more likely to get into conflicts with one another. If my partner and I liked to keep the house exactly the same way, we certainly wouldn't have as many arguments about cleaning (I'll leave you to speculate about who is the untidy one!). People who are different from you may initially strike you as merely amusing, peculiar or mistaken, but when you talk to them at length and they don't see reason, you may start to see them as stupid, biased, rude, impossible to deal with, unkind, and perhaps even outright bad people.

A movement brought together by a shared interest in the question ‘what should we do?’ will inevitably have a greater diversity of priorities, and justifications for those priorities, than a movement united by a shared answer. This is in many ways our core strength. Maintaining a diversity of views means we are less likely to get permanently stuck on the wrong track, because we can learn from one another's scholarship and experiences, and correct course if necessary.

However, it also means we are necessarily committed to ideological pluralism. While it is possible to maintain 'Big Tent' social movements, they face some challenges. The more people hold opinions that others dislike, the more possible points of friction there are that can cause us to form negative opinions of one another. There have already been strongly worded exchanges online demonstrating the risk.

When a minority holds an unpopular view they can feel set upon and bullied, while the majority feels mystified and frustrated that a small group of people can't see the obvious truth that so many accept.

My first goal with this post is to make us aware of this phenomenon, and offer my support for a culture of peaceful coexistence between people who, even after they share all their reasons and reflect, still disagree.

My second goal is to offer a few specific actions that can help us avoid interpersonal conflicts that don't contribute to making the world a better place:

1. Remember that you might be wrong

Hard as it is to keep in mind when you're talking to someone who strongly disagrees with you, it is always possible that they have good points to make that would change your mind, at least a bit. Most claims are only ‘partially true or false’, and there is almost always something valuable you can learn from someone who disagrees with you, even if it is just an understanding of how they think.

If the other person seems generally as intelligent and informed about the topic as you, it's not even clear why you should give more weight to your own opinion than theirs.

2. Be polite, doubly so if your partner is not

Being polite will make both the person you are talking to, and onlookers, more likely to come around to your view. It also means that you're less likely to get into a fight that will hurt others and absorb your precious time and emotional energy.

Politeness has many components, some notable ones being: not criticising someone personally; interpreting their behaviour and statements in a fairly charitable way; not being a show-off, or patronising and publicly embarrassing others; respecting others as your equals, even if you think they are not; conceding when they have made a good point; and finally keeping the conversation focussed on information that can be shared, confirmed, and might actually prove persuasive.

3. Don't infer bad motivations

While humans often make mistakes in their thinking, it's uncommon for them to be straight out uninterested in the welfare of others or what is right, especially so in this movement. Even if they are, they are probably not aware that that is the case. And even if they are aware, you won't come across well to onlookers by addressing them as though they have bad motivations.

If you really do become convinced the person you are talking to is speaking in bad faith, it's time to walk away. As they say: don't feed the trolls.

4. Stay cool

Even when people say things that warrant anger and outrage, expressing anger or outrage publicly will rarely make the world a better place. Anger being understandable or natural is very different from it being useful, especially if the other person is likely to retaliate with anger of their own.

Being angry does not improve the quality of your thinking, persuade others that you're right, make you happier or more productive, or make for a more harmonious community.

In its defence, anger can be highly motivating. Unfortunately it is indiscriminate: it will motivate you to do very valuable, ineffective, and even harmful things alike.

Any technique that can keep you calm is therefore useful. If something is making you unavoidably angry, it's typically best to walk away and let other people deal with it.

5. Pick your battles

Not all things are equally important to reach a consensus about. For good or ill, most things we spend our days talking about just aren't that 'action relevant'. If you find yourself edging towards interpersonal conflict on a question that i) isn't going to change anyone's actions much; ii) isn't going to make the world a much better place, even if it does change their actions; or iii) is very hard to persuade others about, maybe it isn't worth the cost of interpersonal tension to explore in detail.

So if someone in the community says something unrelated or peripheral to effective altruism that you disagree with, which could develop into a conflict, you always have the option of not taking the bait. In a week, you and they may not even remember it was mentioned, let alone consider it worth damaging your relationship over.

6. Let it go

The most important advice of all.

Perhaps you are discussing something important. Perhaps you've made great arguments. Perhaps everyone you know agrees with you. You've been polite, and charitable, and kept your cool. But the person you're talking to still holds a view you strongly disagree with and believe is harmful.

If that's the case, it's probably time for you both to walk away before your opinions of one another fall too far, or the disagreement spirals into sectarianism. If someone can't be persuaded, you can at least avoid creating ill-will between you that ensures they never come around. You've done what you can for now, and that is enough.

Hopefully time will show which of you is right, or space away from a public debate will give one of you the chance to change your mind in private without losing face. In the meantime maybe you can't work closely together, but you can at least remain friendly and respectful.

It isn't likely or even desirable for us to end up agreeing with one another on everything. The world is a horribly complex place; if the questions we are asking had easy answers the research we are doing wouldn't be necessary in the first place.

The cost of being part of a community that accepts and takes an interest in your views, even though many think you are pulling in the wrong direction, is to be tolerant of others in the same way even when you think their views are harmful.

So, sometimes, you just have to let it go.

--

PS

If you agree with me about the above, you might be tempted to post or send it to people every time they aren’t playing by these rules. Unfortunately, this is likely to be counterproductive and lead to more conflict rather than less. It’s useful to share this post in general, but not trot it out as a way of policing others. The most effective way to promote this style of interaction is to exemplify it in the way you treat others, and not get into long conversations with people who have less productive ways of talking to others.

Thanks to Amanda, Will, Diana, Michelle, Catriona, Marek, Niel, Tonja, Sam and George for feedback on drafts of this post.

Crazy Ideas Thread

21 Gunnar_Zarncke 07 July 2015 09:40PM

This thread is intended to provide a space for 'crazy' ideas: ideas that spontaneously come to mind (and feel great), ideas you have long wanted to share but never found the place and time for, and also ideas you think should be obvious and simple - but which nobody ever mentions.

This thread itself is such an idea. Or rather, it is a tangent of such an idea, which I post below as a seed for this thread.

 

Rules for this thread:

  1. Each crazy idea goes into its own top level comment and may be commented there.
  2. Voting should be based primarily on how original the idea is.
  3. Meta discussion of the thread should go to the top level comment intended for that purpose. 

 


If this becomes a regular thread, I suggest the following:

  • Use "Crazy Ideas Thread" in the title.
  • Copy the rules.
  • Add the tag "crazy_idea".
  • Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be ideas or similar'
  • Add a second top-level comment with an initial crazy idea to start participation.

[Link] Persistence of Long-Term Memory in Vitrified and Revived C. elegans worms

21 Rangi 24 May 2015 03:43AM

http://online.liebertpub.com/doi/pdf/10.1089/rej.2014.1636

This is a paper published in 2014 by Natasha Vita-More and Daniel Barranco, both associated with the Alcor Research Center (ARC).

The abstract:

Can memory be retained after cryopreservation? Our research has attempted to answer this long-standing question by using the nematode worm Caenorhabditis elegans (C. elegans), a well-known model organism for biological research that has generated revolutionary findings but has not been tested for memory retention after cryopreservation. Our study’s goal was to test C. elegans’ memory recall after vitrification and reviving. Using a method of sensory imprinting in the young C. elegans we establish that learning acquired through olfactory cues shapes the animal’s behavior and the learning is retained at the adult stage after vitrification. Our research method included olfactory imprinting with the chemical benzaldehyde (C₆H₅CHO) for phase-sense olfactory imprinting at the L1 stage, the fast cooling SafeSpeed method for vitrification at the L2 stage, reviving, and a chemotaxis assay for testing memory retention of learning at the adult stage. Our results in testing memory retention after cryopreservation show that the mechanisms that regulate the odorant imprinting (a form of long-term memory) in C. elegans have not been modified by the process of vitrification or by slow freezing.

Calling all Nigerian rationalists and effective altruists

21 oge 03 May 2015 10:31PM

I'm in Lagos, Nigeria till the end of May and I'd like to hold a LessWrong/EA meetup while I'm here. If you'll ever be in the country in the future (or in the subcontinent), please get in touch so we can coordinate a meetup. I'd also appreciate being put in contact with any Nigerians who may not regularly read this list.

My e-mail address is oge@nnadi.org. I hope to hear from you.

European Community Weekend 2015 Impressions Thread

19 Gunnar_Zarncke 14 June 2015 08:21PM

The European Community Weekend in Berlin is over and was plain awesome.

This is not a complete report of the event, but a place where you can, for example, comment on the event, link to photos, or share whatever else you want.

I'm not the organizer of the Meetup, but I was there, and for me it was the grandest experience since last year's European Community Weekend. Meeting so many energetic, compassionate and in general awesome people - some from last year, many new. Great presentations and workshops. And such a positive and open atmosphere.

Cheers to all participants!

See also the Facebook Group for the Community Event

Surprising examples of non-human optimization

19 Jan_Rzymkowski 14 June 2015 05:05PM

I am very much interested in examples of non-human optimization processes producing working but surprising solutions. What is most fascinating is how they show that the human approach is often not the only one, and that much more alien solutions can be found which humans are simply not capable of conceiving. It is very probable that more and more such solutions will arise, slowly making a big part of technology incomprehensible to humans.

I present the following examples, and ask you to link more in the comments:

1. Nick Bostrom describes efforts to evolve circuits that would act as an oscillator and a frequency discriminator, which yielded very unorthodox designs:
http://www.damninteresting.com/on-the-origin-of-circuits/
http://homepage.ntlworld.com/r.stow1/jb/publications/Bird_CEC2002.pdf (IV. B. Oscillator Experiments; also C. and D. in that section)

2. An algorithm learns to play NES games with some eerie strategies:
https://youtu.be/qXXZLoq2zFc?t=361 (description by Vsauce)
http://hackaday.com/2013/04/14/teaching-a-computer-to-play-mario-seemingly-through-voodoo/ (more info)

3. Eurisko finding an unexpected way of winning the Traveller TCS strategy game:
http://aliciapatterson.org/stories/eurisko-computer-mind-its-own
http://www.therpgsite.com/showthread.php?t=14095

[Link] Nate Soares is answering questions about MIRI at the EA Forum

19 RobbBB 11 June 2015 12:27AM

Nate Soares, MIRI's new Executive Director, is going to be answering questions tomorrow at the EA Forum (link). You can post your questions there now; he'll start replying Thursday, 15:00-18:00 US Pacific time.

Quoting Nate:

Last week Monday, I took the reins as executive director of the Machine Intelligence Research Institute. MIRI focuses on studying technical problems of long-term AI safety. I'm happy to chat about what that means, why it's important, why we think we can make a difference now, what the open technical problems are, how we approach them, and some of my plans for the future.

I'm also happy to answer questions about my personal history and how I got here, or about personal growth and mindhacking (a subject I touch upon frequently in my blog, Minding Our Way), or about whatever else piques your curiosity.

Nate is a regular poster on LessWrong under the name So8res -- you can find stuff he's written in the past here.


 

Update: Question-answering is live!

Update #2: Looks like Nate's wrapping up now. Feel free to discuss the questions and answers, here or at the EA Forum.

Update #3: Here are some interesting snippets from the AMA:

 


Alex Altair: What are some of the most neglected sub-tasks of reducing existential risk? That is, what is no one working on which someone really, really should be?

Nate Soares: Policy work / international coordination. Figuring out how to build an aligned AI is only part of the problem. You also need to ensure that an aligned AI is built, and that’s a lot harder to do during an international arms race. (A race to the finish would be pretty bad, I think.)

I’d like to see a lot more people figuring out how to ensure global stability & coordination as we enter a time period that may be fairly dangerous.


Diego Caleiro: 1) Which are the implicit assumptions, within MIRI's research agenda, of things that "currently we have absolutely no idea of how to do that, but we are taking this assumption for the time being, and hoping that in the future either a more practical version of this idea will be feasible, or that this version will be a guiding star for practical implementations"? [...]

2) How do these assumptions diverge from how FLI, FHI, or non-MIRI people publishing on the AGI 2014 book conceive of AGI research?

3) Optional: Justify the differences in 2 and why MIRI is taking the path it is taking.

Nate Soares: 1) The things we have no idea how to do aren't the implicit assumptions in the technical agenda, they're the explicit subject headings: decision theory, logical uncertainty, Vingean reflection, corrigibility, etc :-)

We've tried to make it very clear in various papers that we're dealing with very limited toy models that capture only a small part of the problem (see, e.g., basically all of section 6 in the corrigibility paper).

Right now, we basically have a bunch of big gaps in our knowledge, and we're trying to make mathematical models that capture at least part of the actual problem -- simplifying assumptions are the norm, not the exception. All I can easily say that common simplifying assumptions include: you have lots of computing power, there is lots of time between actions, you know the action set, you're trying to maximize a given utility function, etc. Assumptions tend to be listed in the paper where the model is described.

2) The FLI folks aren't doing any research; rather, they're administering a grant program. Most FHI folks are focused more on high-level strategic questions (What might the path to AI look like? What methods might be used to mitigate xrisk? etc.) rather than object-level AI alignment research. And remember that they look at a bunch of other X-risks as well, and that they're also thinking about policy interventions and so on. Thus, the comparison can't easily be made. (Eric Drexler's been doing some thinking about the object-level FAI questions recently, but I'll let his latest tech report fill you in on the details there. Stuart Armstrong is doing AI alignment work in the same vein as ours. Owain Evans might also be doing object-level AI alignment work, but he's new there, and I haven't spoken to him recently enough to know.)

Insofar as FHI folks would say we're making assumptions, I doubt they'd be pointing to assumptions like "UDT knows the policy set" or "assume we have lots of computing power" (which are obviously simplifying assumptions on toy models), but rather assumptions like "doing research on logical uncertainty now will actually improve our odds of having a working theory of logical uncertainty before it's needed."

3) I think most of the FHI folks & FLI folks would agree that it's important to have someone hacking away at the technical problems, but just to make the arguments more explicit, I think that there are a number of problems that it's hard to even see unless you have your "try to solve FAI" goggles on. [...]

We're still in the preformal stage, and if we can get this theory to the formal stage, I expect we may be able to get a lot more eyes on the problem, because the ever-crawling feelers of academia seem to be much better at exploring formalized problems than they are at formalizing preformal problems.

Then of course there's the heuristic of "it's fine to shout 'model uncertainty!' and hover on the sidelines, but it wasn't the armchair philosophers who did away with the epicycles, it was Kepler, who was up to his elbows in epicycle data." One of the big ways that you identify the things that need working on is by trying to solve the problem yourself. By asking how to actually build an aligned superintelligence, MIRI has generated a whole host of open technical problems, and I predict that that host will be a very valuable asset now that more and more people are turning their gaze towards AI alignment.


Buck Shlegeris: What's your response to Peter Hurford's arguments in his article Why I'm Skeptical Of Unproven Causes...?

Nate Soares: (1) One of Peter's first (implicit) points is that AI alignment is a speculative cause. I tend to disagree.

Imagine it's 1942. The Manhattan project is well under way, Leo Szilard has shown that it's possible to get a neutron chain reaction, and physicists are hard at work figuring out how to make an atom bomb. You suggest that this might be a fine time to start working on nuclear containment, so that, once humans are done bombing the everloving breath out of each other, they can harness nuclear energy for fun and profit. In this scenario, would nuclear containment be a "speculative cause"?

There are currently thousands of person-hours and billions of dollars going towards increasing AI capabilities every year. To call AI alignment a "speculative cause" in an environment such as this one seems fairly silly to me. In what sense is it speculative to work on improving the safety of the tools that other people are currently building as fast as they can? Now, I suppose you could argue that either (a) AI will never work or (b) it will be safe by default, but both those arguments seem pretty flimsy to me.

You might argue that it's a bit weird for people to claim that the most effective place to put charitable dollars is towards some field of scientific study. Aren't charitable dollars supposed to go to starving children? Isn't the NSF supposed to handle scientific funding? And I'd like to agree, but society has kinda been dropping the ball on this one.

If we had strong reason to believe that humans could build strangelets, and society were pouring billions of dollars and thousands of human-years into making strangelets, and almost no money or effort was going towards strangelet containment, and it looked like humanity was likely to create a strangelet sometime in the next hundred years, then yeah, I'd say that "strangelet safety" would be an extremely worthy cause.

How worthy? Hard to say. I agree with Peter that it's hard to figure out how to trade off "safety of potentially-very-highly-impactful technology that is currently under furious development" against "children are dying of malaria", but the only way I know how to trade those things off is to do my best to run the numbers, and my back-of-the-envelope calculations currently say that AI alignment is further behind than the globe is poor.

Now that the EA movement is starting to look more seriously into high-impact interventions on the frontiers of science & mathematics, we're going to need to come up with more sophisticated ways to assess the impacts and tradeoffs. I agree it's hard, but I don't think throwing out everything that doesn't visibly pay off in the extremely short term is the answer.

(2) Alternatively, you could argue that MIRI's approach is unlikely to work. That's one of Peter's explicit arguments: it's very hard to find interventions that reliably affect the future far in advance, especially when there aren't hard objective metrics. I have three disagreements with Peter on this point.

First, I think he picks the wrong reference class: yes, humans have a really hard time generating big social shifts on purpose. But that doesn't necessarily mean humans have a really hard time generating math -- in fact, humans have a surprisingly good track record when it comes to generating math!

Humans actually seem to be pretty good at putting theoretical foundations underneath various fields when they try, and various people have demonstrably succeeded at this task (Church & Turing did this for computing, Shannon did this for information theory, Kolmogorov did a fair bit of this for probability theory, etc.). This suggests to me that humans are much better at producing technical progress in an unexplored field than they are at generating social outcomes in a complex economic environment. (I'd be interested in any attempt to quantitatively evaluate this claim.)

Second, I agree in general that any one individual team isn't all that likely to solve the AI alignment problem on their own. But the correct response to that isn't "stop funding AI alignment teams" -- it's "fund more AI alignment teams"! If you're trying to ensure that nuclear power can be harnessed for the betterment of humankind, and you assign low odds to any particular research group solving the containment problem, then the answer isn't "don't fund any containment groups at all," the answer is "you'd better fund a few different containment groups, then!"

Third, I object to the whole "there's no feedback" claim. Did Kolmogorov have tight feedback when he was developing an early formalization of probability theory? It seems to me like the answer is "yes" -- figuring out what was & wasn't a mathematical model of the properties he was trying to capture served as a very tight feedback loop (mathematical theorems tend to be unambiguous), and indeed, it was sufficiently good feedback that Kolmogorov was successful in putting formal foundations underneath probability theory.


Interstice: What is your AI arrival timeline?

Nate Soares: Eventually. Predicting the future is hard. My 90% confidence interval conditioned on no global catastrophes is maybe 5 to 80 years. That is to say, I don't know.


Tarn Somervell Fletcher: What are MIRI's plans for publication over the next few years, whether peer-reviewed or arxiv-style publications?

More specifically, what are the a) long-term intentions and b) short-term actual plans for the publication of workshop results, and what kind of priority does that have?

Nate Soares: Great question! The short version is, writing more & publishing more (and generally engaging with the academic mainstream more) are very high on my priority list.

Mainstream publications have historically been fairly difficult for us, as until last year, AI alignment research was seen as fairly kooky. (We've had a number of papers rejected from various journals due to the "weird AI motivation.") Going forward, it looks like that will be less of an issue.

That said, writing capability is a huge bottleneck right now. Our researchers are currently trying to (a) run workshops, (b) engage with & evaluate promising potential researchers, (c) attend conferences, (d) produce new research, (e) write it up, and (f) get it published. That's a lot of things for a three-person research team to juggle! Priority number 1 is to grow the research team (because otherwise nothing will ever be unblocked), and we're aiming to hire a few new researchers before the year is through. After that, increasing our writing output is likely the next highest priority.

Expect our writing output this year to be similar to last year's (i.e., a small handful of peer reviewed papers and a larger handful of technical reports that might make it onto the arXiv), and then hopefully we'll have more & higher quality publications starting in 2016 (the publishing pipeline isn't particularly fast).


Tor Barstad: Among recruiting new talent and having funding for new positions, what is the greatest bottleneck?

Nate Soares: Right now we’re talent-constrained, but we’re also fairly well-positioned to solve that problem over the next six months. Jessica Taylor is joining us in August. We have another researcher or two pretty far along in the pipeline, and we’re running four or five more research workshops this summer, and CFAR is running a summer fellows program in July. It’s quite plausible that we’ll hire a handful of new researchers before the end of 2015, in which case our runway would start looking pretty short, and it’s pretty likely that we’ll be funding-constrained again by the end of the year.


Diego Caleiro: I see a trend in the way new EAs concerned about the far future think about where to donate money that seems dangerous, it goes:

I am an EA and care about impactfulness and neglectedness -> Existential risk dominates my considerations -> AI is the most important risk -> Donate to MIRI.

The last step frequently involves very little thought, it borders on a cached thought.

Nate Soares: Huh, that hasn't been my experience. We have a number of potential donors who ring us up and ask who in AI alignment needs money the most at the moment. (In fact, last year, we directed a number of donors to FHI, who had much more of a funding gap than MIRI did at that time.)


Joshua Fox:

1. What are your plans for taking MIRI to the next level? What is the next level?

2. Now that MIRI is focused on math research (a good move) and not on outreach, there is less of a role for volunteers and supporters. With the donation from Elon Musk, some of which will presumably get to MIRI, the marginal value of small donations has gone down. How do you plan to keep your supporters engaged and donating? (The alternative, which is perhaps feasible, could be for MIRI to be an independent research institution, without a lot of public engagement, funded by a few big donors.)

Nate Soares:

1. (a) grow the research team, (b) engage more with mainstream academia. I'd also like to spend some time experimenting to figure out how to structure the research team so as to make it more effective (we have a lot of flexibility here that mainstream academic institutes don't have). Once we have the first team growing steadily and running smoothly, it's not entirely clear whether the next step will be (c.1) grow it faster or (c.2) spin up a second team inside MIRI taking a different approach to AI alignment. I'll punt that question to future-Nate.

2. So first of all, I'm not convinced that there's less of a role for supporters. If we had just ten people earning-to-give at the (amazing!) level of Ethan Dickinson, Jesse Liptrap, Mike Blume, or Alexei Andreev (note: Alexei recently stopped earning-to-give in order to found a startup), that would bring in as much money per year as the Thiel Foundation. (I think people often vastly overestimate how many people are earning-to-give to MIRI, and underestimate how useful it is: the small donors taken together make a pretty big difference!)

Furthermore, if we successfully execute on (a) above, then we're going to be burning through money quite a bit faster than before. An FLI grant (if we get one) will certainly help, but I expect it's going to be a little while before MIRI can support itself on large donations & grants alone.


Magnetic rings (the most mediocre superpower): A review

18 Elo 30 July 2015 01:23PM

Following on from a few threads about superpowers and extra senses that humans can try to get, I have always been interested in the idea of putting a magnet in my finger for the benefits of extra-sensory perception.

Stories (and occasional news articles) imply that having a magnet implanted in a finger, in a place surrounded by nerves, imparts a power of electric-sensation: the ability to feel when there are electric fields around.  So that's pretty neat.  Only I don't really like the idea of cutting into myself (even if it's done by a professional piercing artist).

Only recently did I come across the suggestion that a magnetic ring could impart similar abilities and properties.  I was delighted at the idea of a similar and non-invasive version of the magnetic-implant (people with magnetic implants are commonly known as grinders within the community).  I was so keen on trying it that I went out and purchased a few magnetic rings of different styles and different properties.

Interestingly, the magnetisation imparted to a ring-shaped object can be oriented in 2 general ways: across the diameter, or across the height of the cylinder shape.  (There is a 3rd type, which is a ring consisting of 4 outwardly magnetised 1/4 arcs of magnetic metal suspended in a ring-casing, and a few orientations of that system.)

I have now been wearing a Neodymium ND50 magnetic ring from supermagnetman.com for around two months.  The following is a description of my experiences with it.


When I first got the rings, I tried wearing more than one ring on each hand, and very quickly found out what happens when you wear two magnets close to each other: they attract.  Within a day I was wearing one magnet on each hand.  What is interesting is what happens when you move two very strong magnets within each other's magnetic field.  You get the ability to feel a magnetic field, and roll it around in your hands.  I found myself taking typing breaks to play with the magnetic field between my fingers.  It was an interesting experience to be able to do that.  I also found I liked the snap as the two magnets pulled towards each other, and I would regularly play with them by moving them near each other.  Based on my experience, I would encourage others to use magnets as a socially acceptable way to hide an ADHD twitch - or just as a way to keep yourself amused if you don't have a phone to pull out and you ever need a reason to move.  I have previously used elastic bands around my wrist for a similar purpose.

The next thing that is interesting to note is what is or is not ferrous.  Fridges are made of ferrous metal, but not on the inside.  Door handles are not usually ferrous, but the tongue and groove of the latch is.  Metal railings are common, as are metal nails in wood.  Elevators and escalators have some metallic parts.  Light switches are often plastic, but there is a metal screw holding them into the wall.  Tennis fencing is ferrous; the ends of USB cables are sometimes ferrous and sometimes not.  The cables themselves are not ferrous, except one I found (they are probably made of copper).

 

Breaking technology

I had a concern that I would break my technology.  That would be bad.  Overall I found zero broken pieces of technology.  In theory, if you take a speaker, which consists of a magnet and an electric coil, and you mess around with its magnetic field, it will be unhappy and maybe break.  That has not happened yet.  The same can be said for hard drives, magnetic memory devices, phone technology and other things that rely on electricity.  So far nothing has broken.  What I did notice is that my phone has a magnetic-sleep function on the top left, i.e. it turns the screen off when I hold the ring near that point.  This has been both a benefit and a detriment, depending on where I am wearing the ring.

Metal shards

I spend some of my time in workshops that have metal shards lying around.  Sometimes they are sharp, sometimes they are more like dust.  They end up coating the magnetic ring.  The sharp ones end up jabbing you, and the dust just looks like dirt on your skin.  Within a few hours they tend to go away anyway, but it is something I have noticed.

Magnetic strength

Over the time I have been wearing the magnets, their strength has dropped off significantly.  I am considering building a remagnetisation jig, but have not started any work on it.  Obviously, every time I ding them against something, and every time I drop them, the magnetisation decreases a bit as the magnetic dipoles reorganise.

Knives

I cook a lot, which means I find myself holding sharp knives fairly often.  The most dangerous thing that I noticed about these rings is that when I hold a ferrous knife in the normal way, the magnet has a tendency to shift the knife slightly, sometimes at a moment when I don't want it to.  That sucks.  Don't wear them while playing with sharp objects like knives; the last thing you want is for your carrot-cutting to accidentally turn into a finger-cutting event.  What is interesting as well is that some cutlery is made of ferrous metal and some is not.  Also, sometimes parts of a piece of cutlery are ferrous and some are non-ferrous, i.e. my normal food-eating knife set has a ferrous blade part and a non-ferrous handle part.  I always figured they were the same, but the magnet says they are different materials.  Which is pretty neat.  I have found the same thing with spoons sometimes: the scoop is ferrous and the handle is not.  I assume it is because the scoop/blade parts need extra forming steps, so need to be a more workable metal.  Cheaper cutlery is not like this.

The same applies to hot pieces of metal: ovens, stoves, kettles, soldering irons...  When they accidentally move towards your fingers, or your fingers are compelled to be attracted to them, that's a slightly unsafe experience.

Electric-sense

You know how when you run a microwave it buzzes, in a *vibrating* sort of way?  If you put your hand against the outside of a microwave you will feel the motor going.  Yea, cool.  So having a magnetic ring means you can feel that without touching the microwave, from about 20cm away.  There is variability to it: better microwaves have more shielding on their motors and leak less.  I tried to feel the electric field around power tools like a drill press, handheld tools like an orbital sander, computers, cars, and appliances, which pretty much covers everything.  I also tried servers, and the only thing that really had a buzzing field was a UPS machine (uninterruptible power supply).  Which was cool.  However, other people had reported that any transformer - i.e. a computer charger - would make that buzz.  I also carry a battery block with me, and that had no interesting fields.  Totally not exciting.  As for moving electrical charge: can't feel it.  Whether power points are receiving power - nope.  Not dying by electrocution - no change.

Boring superpower

There is a reason I call magnetic rings a boring superpower.  The only real superpower I have been granted is the power to pick up my keys without using my fingers, and also maybe to hold my keys without trying to.  As superpowers go, that's pretty lame.  But kinda nifty.  I don't know.  I wouldn't insist people do it for life-changing purposes.

 

Did I find a human-superpower?  No.  But I am glad I tried it.

 

Any questions?  Any experimenting I should try?

Philosophical differences

18 ahbwramc 13 June 2015 01:16AM

[Many people have been complaining about the lack of new content on LessWrong lately, so I thought I'd cross-post my latest blog post here in discussion. Feel free to critique the content as much as you like, but please do keep in mind that I wrote this for my personal blog and not with LW in mind specifically, so some parts might not be up to LW standards, whereas others might be obvious to everyone here. In other words...well, be gentle]

---------------------------

You know what’s scarier than having enemy soldiers at your border?

Having sleeper agents within your borders.

Enemy soldiers are malevolent, but they are at least visibly malevolent. You can see what they’re doing; you can fight back against them or set up defenses to stop them. Sleeper agents on the other hand are malevolent and invisible. They are a threat and you don’t know that they’re a threat. So when a sleeper agent decides that it’s time to wake up and smell the gunpowder, not only will you be unable to stop them, but they’ll be in a position to do far more damage than a lone soldier ever could. A single well-placed sleeper agent can take down an entire power grid, or bring a key supply route to a grinding halt, or – in the worst case – kill thousands with an act of terrorism, all without the slightest warning.

Okay, so imagine that your country is in wartime, and that a small group of vigilant citizens has uncovered an enemy sleeper cell in your city. They’ve shown you convincing evidence for the existence of the cell, and demonstrated that the cell is actively planning to commit some large-scale act of violence – perhaps not imminently, but certainly in the near-to-mid-future. Worse, the cell seems to have even more nefarious plots in the offing, possibly involving nuclear or biological weapons.

Now imagine that when you go to investigate further, you find to your surprise and frustration that no one seems to be particularly concerned about any of this. Oh sure, they acknowledge that in theory a sleeper cell could do some damage, and that the whole matter is probably worthy of further study. But by and large they just hear you out and then shrug and go about their day. And when you, alarmed, point out that this is not just a theory – that you have proof that a real sleeper cell is actually operating and making plans right now – they still remain remarkably blase. You show them the evidence, but they either don’t find it convincing, or simply misunderstand it at a very basic level (“A wiretap? But sleeper agents use cellphones, and cellphones are wireless!”). Some people listen but dismiss the idea out of hand, claiming that sleeper cell attacks are “something that only happen in the movies”. Strangest of all, at least to your mind, are the people who acknowledge that the evidence is convincing, but say they still aren’t concerned because the cell isn’t planning to commit any acts of violence imminently, and therefore won’t be a threat for a while. In the end, all of your attempts to raise the alarm are to no avail, and you’re left feeling kind of doubly scared – scared first because you know the sleeper cell is out there, plotting some heinous act, and scared second because you know you won’t be able to convince anyone of that fact before it’s too late to do anything about it.

This is roughly how I feel about AI risk.

You see, I think artificial intelligence is probably the most significant existential threat facing humanity right now. This, to put it mildly, is something of a fringe position in most intellectual circles (although that’s becoming less and less true as time goes on), and I’ll grant that it sounds kind of absurd. But regardless of whether or not you think I’m right to be scared of AI, you can imagine how the fact that AI risk is really hard to explain would make me even more scared about it. Threats like nuclear war or an asteroid impact, while terrifying, at least have the virtue of being simple to understand – it’s not exactly hard to sell people on the notion that a 2km hunk of rock colliding with the planet might be a bad thing. As a result people are aware of these threats and take them (sort of) seriously, and various organizations are (sort of) taking steps to stop them.

AI is different, though. AI is more like the sleeper agents I described above – frighteningly invisible. The idea that AI could be a significant risk is not really on many people’s radar at the moment, and worse, it’s an idea that resists attempts to put it on more people’s radar, because it’s so bloody confusing a topic even at the best of times. Our civilization is effectively blind to this threat, and meanwhile AI research is making progress all the time. We’re on the Titanic steaming through the North Atlantic, unaware that there’s an iceberg out there with our name on it – and the captain is ordering full-speed ahead.

(That’s right, not one but two ominous metaphors. Can you see that I’m serious?)

But I’m getting ahead of myself. I should probably back up a bit and explain where I’m coming from.

Artificial intelligence has been in the news lately. In particular, various big names like Elon Musk, Bill Gates, and Stephen Hawking have all been sounding the alarm in regards to AI, describing it as the greatest threat that our species faces in the 21st century. They (and others) think it could spell the end of humanity – Musk said, “If I had to guess what our biggest existential threat is, it’s probably [AI]”, and Gates said, “I…don’t understand why some people are not concerned [about AI]”.

Of course, others are not so convinced – machine learning expert Andrew Ng said that “I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars”.

In this case I happen to agree with the Musks and Gates of the world – I think AI is a tremendous threat, one that we need to focus much of our attention on in the future. In fact I’ve thought this for several years, and I’m kind of glad that the big-name intellectuals are finally catching up.

Why do I think this? Well, that’s a complicated subject. It’s a topic I could probably spend a dozen blog posts on and still not get to the bottom of. And maybe I should spend those dozen-or-so blog posts on it at some point – it could be worth it. But for now I’m kind of left with this big inferential gap that I can’t easily cross. It would take a lot of explaining to explain my position in detail. So instead of talking about AI risk per se in this post, I thought I’d go off in a more meta-direction – as I so often do – and talk about philosophical differences in general. I figured if I couldn’t make the case for AI being a threat, I could at least make the case for making the case for AI being a threat.

(If you’re still confused, and still wondering what the whole deal is with this AI risk thing, you can read a not-too-terrible popular introduction to the subject here, or check out Nick Bostrom’s TED Talk on the topic. Bostrom also has a bestselling book out called Superintelligence. The one sentence summary of the problem would be: how do we get a superintelligent entity to want what we want it to want?)

(Trust me, this is much much harder than it sounds)

So: why then am I so meta-concerned about AI risk? After all, based on the previous couple paragraphs it seems like the topic actually has pretty decent awareness: there are popular internet articles and TED talks and celebrity intellectual endorsements and even bestselling books! And it’s true, there’s no doubt that a ton of progress has been made lately. But we still have a very long way to go. If you had seen the same number of online discussions about AI that I’ve seen, you might share my despair. Such discussions are filled with replies that betray a fundamental misunderstanding of the problem at a very basic level. I constantly see people saying things like “Won’t the AI just figure out what we want?”, or “If the AI gets dangerous why can’t we just unplug it?”, or “The AI can’t have free will like humans, it just follows its programming”, or “lol so you’re scared of Skynet?”, or “Why not just program it to maximize happiness?”.

Having read a lot about AI, these misunderstandings are frustrating to me. This is not that unusual, of course – pretty much any complex topic is going to have people misunderstanding it, and misunderstandings often frustrate me. But there is something unique about the confusions that surround AI, and that’s the extent to which the confusions are philosophical in nature.

Why philosophical? Well, artificial intelligence and philosophy might seem very distinct at first glance, but look closer and you’ll see that they’re connected to one another at a very deep level. Take almost any topic of interest to philosophers – free will, consciousness, epistemology, decision theory, metaethics – and you’ll find an AI researcher looking into the same questions. In fact I would go further and say that those AI researchers are usually doing a better job of approaching the questions. Daniel Dennett said that “AI makes philosophy honest”, and I think there’s a lot of truth to that idea. You can’t write fuzzy, ill-defined concepts into computer code. Thinking in terms of having to program something that actually works takes your head out of the philosophical clouds, and puts you in a mindset of actually answering questions.

All of which is well and good. But the problem with looking at philosophy through the lens of AI is that it’s a two-way street – it means that when you try to introduce someone to the concepts of AI and AI risk, they’re going to be hauling all of their philosophical baggage along with them.

And make no mistake, there’s a lot of baggage. Philosophy is a discipline that’s notorious for many things, but probably first among them is a lack of consensus (I wouldn’t be surprised if there’s not even a consensus among philosophers about how much consensus there is among philosophers). And the result of this lack of consensus has been a kind of grab-bag approach to philosophy among the general public – people see that even the experts are divided, and think that that means they can just choose whatever philosophical position they want.

Want. That’s the key word here. People treat philosophical beliefs not as things that are either true or false, but as choices – things to be selected based on their personal preferences, like picking out a new set of curtains. They say “I prefer to believe in a soul”, or “I don’t like the idea that we’re all just atoms moving around”. And why shouldn’t they say things like that? There’s no one to contradict them, no philosopher out there who can say “actually, we settled this question a while ago and here’s the answer”, because philosophy doesn’t settle things. It’s just not set up to do that. Of course, to be fair people seem to treat a lot of their non-philosophical beliefs as choices as well (which frustrates me to no end) but the problem is particularly pronounced in philosophy. And the result is that people wind up running around with a lot of bad philosophy in their heads.

(Oh, and if that last sentence bothered you, if you’d rather I said something less judgmental like “philosophy I disagree with” or “philosophy I don’t personally happen to hold”, well – the notion that there’s no such thing as bad philosophy is exactly the kind of bad philosophy I’m talking about)

(he said, only 80% seriously)

Anyway, I find this whole situation pretty concerning. Because if you had said to me that in order to convince people of the significance of the AI threat, all we had to do was explain to them some science, I would say: no problem. We can do that. Our society has gotten pretty good at explaining science; so far the Great Didactic Project has been far more successful than it had any right to be. We may not have gotten explaining science down to a science, but we’re at least making progress. I myself have been known to explain scientific concepts to people every now and again, and fancy myself not half-bad at it.

Philosophy, though? Different story. Explaining philosophy is really, really hard. It's hard enough that when I encounter someone who has philosophical views I consider to be utterly wrong or deeply confused, I usually don't even bother trying to explain myself – even if it's someone I otherwise have a great deal of respect for! Instead I just disengage from the conversation. The times I've done otherwise, with a few notable exceptions, have only ended in frustration – there's just too much of a gap to cross in one conversation. And up until now that hasn't really bothered me. After all, if we're being honest, most philosophical views that people hold aren't that important in the grand scheme of things. People don't really use their philosophical views to inform their actions – in fact, probably the main thing that people use philosophy for is to sound impressive at parties.

AI risk, though, has impressed upon me an urgency regarding philosophy that I've never felt before. All of a sudden it's important that everyone have sensible notions of free will or consciousness; all of a sudden I can't let people get away with being utterly confused about metaethics.

All of a sudden, in other words, philosophy matters.

I'm not sure what to do about this. I mean, I guess I could just quit complaining, buckle down, and do the hard work of getting better at explaining philosophy. It's difficult, sure, but it's not infinitely difficult. I could write blog posts and talk to people at parties, and see what works and what doesn't, and maybe gradually start changing a few people's minds. But this would be a long and difficult process, and in the end I'd probably only be able to affect – what, a few dozen people? A hundred?

And it would be frustrating. Arguments about philosophy are so hard precisely because the questions being debated are foundational. Philosophical beliefs form the bedrock upon which all other beliefs are built; they are the premises from which all arguments start. As such it’s hard enough to even notice that they’re there, let alone begin to question them. And when you do notice them, they often seem too self-evident to be worth stating.

Take math, for example – do you think the number 5 exists, as a number?

Yes? Okay, how about 700? 3 billion? Do you think it’s obvious that numbers just keep existing, even when they get really big?

Well, guess what – some philosophers debate this!

It's actually surprisingly hard to find an uncontroversial position in philosophy. Pretty much everything is debated. And of course this usually doesn't matter – you don't need philosophy to fill out a tax return or drive the kids to school, after all. But when you hold some foundational beliefs that seem self-evident, and you're in a discussion with someone else who holds different foundational beliefs, which they also think are self-evident, problems start to arise. Philosophical debates usually consist of little more than two people talking past one another, with each wondering how the other could be so stupid as to not understand the sheer obviousness of what they're saying. And the annoying thing is, both participants are correct – in their own framework, their positions probably are obvious. The problem is, we don't all share the same framework, and in a setting like that frustration is the default, not the exception.

This is not to say that all efforts to discuss philosophy are doomed, of course. People do sometimes have productive philosophical discussions, and the odd person even manages to change their mind, occasionally. But to do this takes a lot of effort. And when I say a lot of effort, I mean a lot of effort. To make progress philosophically you have to be willing to adopt a kind of extreme epistemic humility, where your intuitions count for very little. In fact, far from treating your intuitions as unquestionable givens, as most people do, you need to be treating them as things to be carefully examined and scrutinized with acute skepticism and even wariness. Your reaction to someone having a differing intuition from you should not be “I’m right and they’re wrong”, but rather “Huh, where does my intuition come from? Is it just a featureless feeling or can I break it down further and explain it to other people? Does it accord with my other intuitions? Why does person X have a different intuition, anyway?” And most importantly, you should be asking “Do I endorse or reject this intuition?”. In fact, you could probably say that the whole history of philosophy has been little more than an attempt by people to attain reflective equilibrium among their different intuitions – which of course can’t happen without the willingness to discard certain intuitions along the way when they conflict with others.

I guess what I'm trying to say is: when you're discussing philosophy with someone and you have a disagreement, your foremost goal should be to try to find out exactly where your intuitions differ. And once you identify that, the immediate next step should be to zoom in on your intuitions – to figure out the source and content of the intuition as much as possible. Intuitions aren't blank structureless feelings, as much as it might seem like they are. With enough introspection intuitions can be explicated and elaborated upon, and described in some detail. They can even be passed on to other people, assuming at least some kind of basic common epistemological framework, which I do think all humans share (yes, even objective-reality-denying postmodernists).

Anyway, this whole concept of zooming in on intuitions seems like an important one to me, and one that hasn’t been emphasized enough in the intellectual circles I travel in. When someone doesn’t agree with some basic foundational belief that you have, you can’t just throw up your hands in despair – you have to persevere and figure out why they don’t agree. And this takes effort, which most people aren’t willing to expend when they already see their debate opponent as someone who’s being willfully stupid anyway. But – needless to say – no one thinks of their positions as being a result of willful stupidity. Pretty much everyone holds beliefs that seem obvious within the framework of their own worldview. So if you want to change someone’s mind with respect to some philosophical question or another, you’re going to have to dig deep and engage with their worldview. And this is a difficult thing to do.

Hence, the philosophical quagmire that we find our society to be in.

It strikes me that improving our ability to explain and discuss philosophy amongst one another should be of paramount importance to most intellectually serious people. This applies to AI risk, of course, but also to many everyday topics that we all discuss: feminism, geopolitics, environmentalism, what have you – pretty much everything we talk about grounds out to philosophy eventually, if you go deep enough or meta enough. And to the extent that we can’t discuss philosophy productively right now, we can’t make progress on many of these important issues.

I think philosophers should – to some extent – be ashamed of the state of their field right now. When you compare philosophy to science it's clear that science has made great strides in explaining the contents of its findings to the general public, whereas philosophy has not. Philosophers seem to treat their field as being almost inconsequential, as if whatever they conclude won't, at some level, matter. But this clearly isn't true – we need vastly improved discussion norms when it comes to philosophy, and we need far greater effort on the part of philosophers when it comes to explaining philosophy, and we need these things right now. Regardless of what you think about AI, the 21st century will clearly be fraught with difficult philosophical problems – from genetic engineering to the ethical treatment of animals to the problem of what to do about global poverty, it's obvious that we will soon need philosophical answers, not just philosophical questions. Improvements in technology mean improvements in capability, and that means that things which were once merely thought experiments will be lifted into the realm of real experiments.

I think the problem that humanity faces in the 21st century is an unprecedented one. We're faced with the task of actually solving philosophy, not just doing philosophy. And if I'm right about AI, then we have exactly one try to get it right. If we don't, well...

Well, then the fate of humanity may literally hang in the balance.

Confession Thread: Mistakes as an aspiring rationalist

18 diegocaleiro 02 June 2015 06:10PM

We looked at the cloudy night sky and thought it would be interesting to share the ways in which, in the past, we made mistakes we would have been able to overcome, if only we had been stronger as rationalists. The experience felt valuable and humbling. So why not do some more of it on Lesswrong?

An antithesis to the Bragging Thread, this is a thread to share where we made mistakes. Where we knew we could, but didn't. Where we felt we were wrong, but carried on anyway.

As with the recent group bragging thread, anything you've done wrong since the comet killed the dinosaurs is fair game; and if it happens to be a systematic mistake that curtailed your potential over a long period of time, one that others can try to learn to avoid, so much the better.

This thread is an attempt to see if there are exceptions to the cached thought that life experience cannot be learned but has to be lived. Let's test this belief together!

LW survey: Effective Altruists and donations

18 gwern 14 May 2015 12:44AM

(Markdown source)

“Portrait of EAs I know”, su3su2u1:

But I note from googling for surveys that the median charitable donation for an EA in the Less Wrong survey was 0.

Yvain:

Two years ago I got a paying residency, and since then I’ve been donating 10% of my salary, which works out to about $5,000 a year. In two years I’ll graduate residency, start making doctor money, and then I hope to be able to donate maybe eventually as much as $25,000 - $50,000 per year. But if you’d caught me five years ago, I would have been one of those people who wrote a lot about it and was very excited about it but put down $0 in donations on the survey.

Data preparation:

set.seed(2015-05-13)
survey2013 <- read.csv("http://www.gwern.net/docs/lwsurvey/2013.csv", header=TRUE)
survey2013$EffectiveAltruism2 <- NA
s2013 <- subset(survey2013, select=c(Charity,Effective.Altruism,EffectiveAltruism2,Work.Status,
Profession,Degree,Age,Income))
colnames(s2013) <- c("Charity","EffectiveAltruism","EffectiveAltruism2","WorkStatus","Profession",
"Degree","Age","Income")
s2013$Year <- 2013
survey2014 <- read.csv("http://www.gwern.net/docs/lwsurvey/2014.csv", header=TRUE)
s2014 <- subset(survey2014, PreviousSurveys!="Yes", select=c(Charity,EffectiveAltruism,EffectiveAltruism2,
WorkStatus,Profession,Degree,Age,Income))
s2014$Year <- 2014
survey <- rbind(s2013, s2014)
# replace empty fields with NAs:
survey[survey==""] <- NA; survey[survey==" "] <- NA
# convert money amounts from string to number:
survey$Charity <- as.numeric(as.character(survey$Charity))
survey$Income <- as.numeric(as.character(survey$Income))
# both Charity & Income are skewed, like most monetary amounts, so log transform as well:
survey$CharityLog <- log1p(survey$Charity)
survey$IncomeLog <- log1p(survey$Income)
# age:
survey$Age <- as.integer(as.character(survey$Age))
# prodigy or no, I disbelieve any LW readers are <10yo (bad data? malicious responses?):
survey$Age <- ifelse(survey$Age >= 10, survey$Age, NA)
# convert Yes/No to boolean TRUE/FALSE:
survey$EffectiveAltruism <- (survey$EffectiveAltruism == "Yes")
survey$EffectiveAltruism2 <- (survey$EffectiveAltruism2 == "Yes")
summary(survey)
## Charity EffectiveAltruism EffectiveAltruism2 WorkStatus
## Min. : 0.000 Mode :logical Mode :logical Student :905
## 1st Qu.: 0.000 FALSE:1202 FALSE:450 For-profit work :736
## Median : 50.000 TRUE :564 TRUE :45 Self-employed :154
## Mean : 1070.931 NA's :487 NA's :1758 Unemployed :149
## 3rd Qu.: 400.000 Academics (on the teaching side):104
## Max. :110000.000 (Other) :179
## NA's :654 NA's : 26
## Profession Degree Age
## Computers (practical: IT programming etc.) :478 Bachelor's :774 Min. :13.00000
## Other :222 High school:597 1st Qu.:21.00000
## Computers (practical: IT, programming, etc.):201 Master's :419 Median :25.00000
## Mathematics :185 None :125 Mean :27.32494
## Engineering :170 Ph D. :125 3rd Qu.:31.00000
## (Other) :947 (Other) :189 Max. :72.00000
## NA's : 50 NA's : 24 NA's :28
## Income Year CharityLog IncomeLog
## Min. : 0.00 2013:1547 Min. : 0.000000 Min. : 0.000000
## 1st Qu.: 10000.00 2014: 706 1st Qu.: 0.000000 1st Qu.: 9.210440
## Median : 33000.00 Median : 3.931826 Median :10.404293
## Mean : 75355.69 Mean : 3.591102 Mean : 9.196442
## 3rd Qu.: 80000.00 3rd Qu.: 5.993961 3rd Qu.:11.289794
## Max. :10000000.00 Max. :11.608245 Max. :16.118096
## NA's :993 NA's :654 NA's :993
# lavaan doesn't like categorical variables and doesn't automatically expand out into dummies like lm/glm,
# so have to create the dummies myself:
survey$Degree <- gsub("2","two",survey$Degree)
survey$Degree <- gsub("'","",survey$Degree)
survey$Degree <- gsub("/","",survey$Degree)
survey$WorkStatus <- gsub("-","", gsub("\\(","",gsub("\\)","",survey$WorkStatus)))
library(qdapTools)
survey <- cbind(survey, mtabulate(strsplit(gsub(" ", "", as.character(survey$Degree)), ",")),
mtabulate(strsplit(gsub(" ", "", as.character(survey$WorkStatus)), ",")))
write.csv(survey, file="2013-2014-lw-ea.csv", row.names=FALSE)

Analysis:

survey <- read.csv("http://www.gwern.net/docs/lwsurvey/2013-2014-lw-ea.csv")
# treat year as factor for fixed effect:
survey$Year <- as.factor(survey$Year)
median(survey[survey$EffectiveAltruism,]$Charity, na.rm=TRUE)
## [1] 100
median(survey[!survey$EffectiveAltruism,]$Charity, na.rm=TRUE)
## [1] 42.5
# t-tests are inappropriate due to non-normal distribution of donations:
wilcox.test(Charity ~ EffectiveAltruism, conf.int=TRUE, data=survey)
## Wilcoxon rank sum test with continuity correction
##
## data: Charity by EffectiveAltruism
## W = 214215, p-value = 4.811186e-08
## alternative hypothesis: true location shift is not equal to 0
## 95% confidence interval:
## -4.999992987e+01 -1.275881408e-05
## sample estimates:
## difference in location
## -19.99996543
library(ggplot2)
qplot(Age, CharityLog, color=EffectiveAltruism, data=survey) + geom_point(size=I(3))
## https://i.imgur.com/wd5blg8.png
qplot(Age, CharityLog, color=EffectiveAltruism,
data=na.omit(subset(survey, select=c(Age, CharityLog, EffectiveAltruism)))) +
 geom_point(size=I(3)) + stat_smooth()
## https://i.imgur.com/UGqf8wn.png
# you might think that we can't treat Age linearly because this looks like a quadratic or
# logarithm, but when I fitted some curves, charity donations did not seem to flatten out
# appropriately, and the GAM/loess wiggly-but-increasing line seems like a better summary.
# Try looking at the asymptotes & quadratics split by group as follows:
#
## n1 <- nls(CharityLog ~ SSasymp(as.integer(Age), Asym, r0, lrc),
## data=survey[survey$EffectiveAltruism,], start=list(Asym=6.88, r0=-4, lrc=-3))
## n2 <- nls(CharityLog ~ SSasymp(as.integer(Age), Asym, r0, lrc),
## data=survey[!survey$EffectiveAltruism,], start=list(Asym=6.88, r0=-4, lrc=-3))
## with(survey, plot(Age, CharityLog))
## points(predict(n1, newdata=data.frame(Age=0:70)), col="blue")
## points(predict(n2, newdata=data.frame(Age=0:70)), col="red")
##
## l1 <- lm(CharityLog ~ Age + I(Age^2), data=survey[survey$EffectiveAltruism,])
## l2 <- lm(CharityLog ~ Age + I(Age^2), data=survey[!survey$EffectiveAltruism,])
## with(survey, plot(Age, CharityLog));
## points(predict(l1, newdata=data.frame(Age=0:70)), col="blue")
## points(predict(l2, newdata=data.frame(Age=0:70)), col="red")
#
# So I will treat Age as a linear additive sort of thing.

2013-2014 LW survey respondents: self-reported charity donation vs self-reported age, split by self-identifying as EA or not.

Likewise, but with GAM-smoothed curves for EA vs non-EA.

# for the regression, we want to combine EffectiveAltruism/EffectiveAltruism2 into a single measure, EA, so
# a latent variable in a SEM; then we use EA plus the other covariates to estimate the CharityLog.
library(lavaan)
model1 <- " # estimate EA latent variable:
 EA =~ EffectiveAltruism + EffectiveAltruism2
 CharityLog ~ EA + Age + IncomeLog + Year +
 # Degree dummies:
 None + Highschool + twoyeardegree + Bachelors + Masters + Other +
 MDJDotherprofessionaldegree + PhD. +
 # WorkStatus dummies:
 Independentlywealthy + Governmentwork + Forprofitwork +
 Selfemployed + Nonprofitwork + Academicsontheteachingside +
 Student + Homemaker + Unemployed
 "
fit1 <- sem(model = model1, missing="fiml", data = survey); summary(fit1)
## lavaan (0.5-16) converged normally after 197 iterations
##
## Number of observations 2253
##
## Number of missing patterns 22
##
## Estimator ML
## Minimum Function Test Statistic 90.659
## Degrees of freedom 40
## P-value (Chi-square) 0.000
##
## Parameter estimates:
##
## Information Observed
## Standard Errors Standard
##
## Estimate Std.err Z-value P(>|z|)
## Latent variables:
## EA =~
## EffectvAltrsm 1.000
## EffctvAltrsm2 0.355 0.123 2.878 0.004
##
## Regressions:
## CharityLog ~
## EA 1.807 0.621 2.910 0.004
## Age 0.085 0.009 9.527 0.000
## IncomeLog 0.241 0.023 10.468 0.000
## Year 0.319 0.157 2.024 0.043
## None -1.688 2.079 -0.812 0.417
## Highschool -1.923 2.059 -0.934 0.350
## twoyeardegree -1.686 2.081 -0.810 0.418
## Bachelors -1.784 2.050 -0.870 0.384
## Masters -2.007 2.060 -0.974 0.330
## Other -2.219 2.142 -1.036 0.300
## MDJDthrprfssn -1.298 2.095 -0.619 0.536
## PhD. -1.977 2.079 -0.951 0.341
## Indpndntlywlt 1.175 2.119 0.555 0.579
## Governmentwrk 1.183 1.969 0.601 0.548
## Forprofitwork 0.677 1.940 0.349 0.727
## Selfemployed 0.603 1.955 0.309 0.758
## Nonprofitwork 0.765 1.973 0.388 0.698
## Acdmcsnthtchn 1.087 1.970 0.551 0.581
## Student 0.879 1.941 0.453 0.650
## Homemaker 1.071 2.498 0.429 0.668
## Unemployed 0.606 1.956 0.310 0.757
##
## Intercepts:
## EffectvAltrsm 0.319 0.011 28.788 0.000
## EffctvAltrsm2 0.109 0.012 8.852 0.000
## CharityLog -0.284 0.737 -0.385 0.700
## EA 0.000
##
## Variances:
## EffectvAltrsm 0.050 0.056
## EffctvAltrsm2 0.064 0.008
## CharityLog 7.058 0.314
## EA 0.168 0.056
# simplify:
model2 <- " # estimate EA latent variable:
 EA =~ EffectiveAltruism + EffectiveAltruism2
 CharityLog ~ EA + Age + IncomeLog + Year
 "
fit2 <- sem(model = model2, missing="fiml", data = survey); summary(fit2)
## lavaan (0.5-16) converged normally after 55 iterations
##
## Number of observations 2253
##
## Number of missing patterns 22
##
## Estimator ML
## Minimum Function Test Statistic 70.134
## Degrees of freedom 6
## P-value (Chi-square) 0.000
##
## Parameter estimates:
##
## Information Observed
## Standard Errors Standard
##
## Estimate Std.err Z-value P(>|z|)
## Latent variables:
## EA =~
## EffectvAltrsm 1.000
## EffctvAltrsm2 0.353 0.125 2.832 0.005
##
## Regressions:
## CharityLog ~
## EA 1.770 0.619 2.858 0.004
## Age 0.085 0.009 9.513 0.000
## IncomeLog 0.241 0.023 10.550 0.000
## Year 0.329 0.156 2.114 0.035
##
## Intercepts:
## EffectvAltrsm 0.319 0.011 28.788 0.000
## EffctvAltrsm2 0.109 0.012 8.854 0.000
## CharityLog -1.331 0.317 -4.201 0.000
## EA 0.000
##
## Variances:
## EffectvAltrsm 0.049 0.057
## EffctvAltrsm2 0.064 0.008
## CharityLog 7.111 0.314
## EA 0.169 0.058
# simplify even further:
summary(lm(CharityLog ~ EffectiveAltruism + EffectiveAltruism2 + Age + IncomeLog, data=survey))
## ...Residuals:
## Min 1Q Median 3Q Max
## -7.6813410 -1.7922422 0.3325694 1.8440610 6.5913961
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -2.06062203 0.57659518 -3.57378 0.00040242
## EffectiveAltruismTRUE 1.26761425 0.37515124 3.37894 0.00081163
## EffectiveAltruism2TRUE 0.03596335 0.54563991 0.06591 0.94748766
## Age 0.09411164 0.01869218 5.03481 7.7527e-07
## IncomeLog 0.32140793 0.04598392 6.98957 1.4511e-11
##
## Residual standard error: 2.652323 on 342 degrees of freedom
## (1906 observations deleted due to missingness)
## Multiple R-squared: 0.2569577, Adjusted R-squared: 0.2482672
## F-statistic: 29.56748 on 4 and 342 DF, p-value: < 2.2204e-16

Note these increases are on a log-dollars scale.
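To make that a bit more concrete, here is a rough back-of-the-envelope reading of the coefficients from the final lm fit (a sketch only, not part of the original analysis): since CharityLog is log1p(Charity), exponentiating a coefficient gives an approximate multiplicative effect on (donation + 1) rather than an additive dollar amount.

# Sketch: back-transform the log1p-scale lm coefficients above into rough
# multiplicative effects on (donation + 1); values are approximate.
exp(1.26761425)   # EA self-identification: ~3.55x, holding age and income fixed
exp(0.09411164)   # each additional year of age: ~1.10x, i.e. roughly +10% per year
exp(0.32140793)   # each additional unit of log-income: ~1.38x, i.e. roughly +38%

On this reading, self-identified EAs report donations several times larger than non-EAs of similar age and income, though the skew of the data and the log1p transform mean these multipliers are rough orders of magnitude rather than precise estimates.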

[link] FLI's recommended project grants for AI safety research announced

17 Kaj_Sotala 01 July 2015 03:27PM

http://futureoflife.org/misc/2015awardees

You may recognize several familiar names there, such as Paul Christiano, Benja Fallenstein, Katja Grace, Nick Bostrom, Anna Salamon, Jacob Steinhardt, Stuart Russell... and me. (the $20,000 for my project was the smallest grant that they gave out, but hey, I'm definitely not complaining. ^^)

Two Zendo-inspired games

17 StephenBarnes 22 June 2015 03:47PM

LW has often discussed the inductive logic game Zendo, as a possible way of training rationality. But I couldn't find any computer implementations of Zendo online.

So I built two (fairly similar) games inspired by Zendo; they generate rules and play as sensei. The code is on GitHub, along with some more explanation. To run the games you'll need to install Python 3, and Scikit-Learn for the second game; see the readme.

All bugfixes and improvements are welcome. For instance, more rule classes or features would improve the game and be pretty easy to code. Also, if anyone has a website and wants to host this playable online (with CGI, say), that would be awesome.

Seeking geeks interested in bioinformatics

17 bokov 22 June 2015 01:44PM

I work on a small but feisty research team whose focus is biomedical informatics, i.e. mining biomedical data, especially anonymized hospital records pooled over multiple healthcare networks. My personal interest is ultimately life extension, and my colleagues are warming up to the idea as well. But the short-term goal, which will be useful to many different research areas, is building infrastructure to massively accelerate hypothesis testing on and modelling of retrospective human data.

 

We have a job posting here (permanent, non-faculty, full-time, benefits):

https://www.uthscsajobs.com/postings/3113

 

If you can program, want to work in an academic research setting, and can relocate to San Antonio, TX, I invite you to apply. Thanks.

Note: The first step of the recruitment process will be a coding challenge, which will include an arithmetical or string-manipulation problem to solve in real-time using a language and developer tools of your choice.

edit: If you tried applying and were unable to access the posting, it's because the link has changed; our HR has an automated process that periodically expires the links for some reason. I have now updated the job post link.

Short Story: Quarantine

17 Dias 10 June 2015 01:21AM

June 2nd, 42 After Fall
Somewhere in the Colorado Mountains

They first caught sight of the man walking a few miles from the compound. At least it looked like a man. Faded jeans, white t-shirt, light jacket, rucksack. White skin, light brown hair. No obvious disabilities. No logos.

They kept him under surveillance as he approached. In other times they might have shot him on sight, but not now. They were painfully aware of the bounds of sustainable genetic diversity, so instead they drove over in a battered van, rifles loaded, industrial earmuffs in place. Once he was on his knees, they sent Javid the Unhearing over to bind and gag him, then bundled him into the van. No reason to risk exposure.

Javid had not always been deaf, but it was an honor. Some must sacrifice for the good of the others, and he was proud to defend the Sanctum at Rogers Ford.

Once back at the complex, they moved the man to a sound-proofed holding room and unbound him. An ancient PC sat on the desk, marked “Imp Association”. The people did not know who the Imp Association were, but they were grateful for it. Perhaps it was a gift from Olson. Praise be to Olson.

With little else to do, the man sat down and read the instructions on the screen. A series of words showed, and he was commanded to select left or right based on various different criteria. It was very confusing.

In a different room, watchers huddled around a tiny screen, looking at a series of numbers.

REP/DEM 0.0012 0.39 0.003

Good. That was a very good start.

FEM/MRA -0.0082 0.28 -0.029

SJW/NRX 0.0065 0.54 0.012

Eventually they passed the lines the catechism denoted “purge with fire and never speak thereof”, on to those merely marked as “highly dangerous”.

KO/PEP 0.1781 0.6 0.297

Not as good, but still within the prescribed tolerances. They would run the supplemental.

T_JCB/T_EWD -0.0008 1.2 -0.001

The test continued for some time, until eventually the cleric intoned, “The Trial by Fish is complete. He has passed the Snedecor Fish.” The people nodded as if they understood, then proceeded to the next stage.

This was more dangerous. This required a sacrifice.

She was young – just 15 years old. Fresh faced with long blond hair tied back, Sophia had a cute smile: she was perfect for the duty. Her family were told it was an honor to have their daughter selected.

Sophia entered the room, trepidation in her head, a smile on her face. Casually, she offered him a drink, “Hey, sorry you have to go through all this testin’. You must be hot! Would you like a co cuh?” Her relaxed intonation disguised the fact that these words were the proscribed words, passed down through generations, memorized and cherished as a ward against evil. He accepted the bottle of dark liquid and drank, before tossing the recyclable container in the bin.

In the other room, a box marked ‘ECO’ was ticked off.

“Oh, I’m sorry! I made a mistake – that’s pep-see. I’m so sorry!” she gushed in apology. He assured her it was fine.

In the other room, the cleric satisfied himself that the loyalty brand was burning at zero.

She moved on to the next proscribed question, with the ordained level of casualness, “Say, I know this is a silly question, but do you ever get a song stuck in your head?”

“Errr, what?”

“You know, like you just can’t stop singing it to yourself? Yeah?” Of course, she had no idea what this was like. She was alive.

“Ummm, sorry, no.”

She turned and left the room, relief filling her eyes.

After three more days of testing, the man was allowed into the compound. Despite the ravages of an evolution with a generational frequency a hundred times that of humanity, he had somehow preserved himself. He was clean of viral memetic payload. He was alive.

 

------

Cross-posted on my blog

Wild Moral Dilemmas

17 sixes_and_sevens 12 May 2015 12:56PM

[CW: This post talks about personal experience of moral dilemmas. I can see how some people might be distressed by thinking about this.]

Have you ever had to decide between pushing a fat person onto some train tracks or letting five other people get hit by a train? Maybe you have a more exciting commute than I do, but for me it's just never come up.

In spite of this, I'm unusually prepared for a trolley problem, in a way I'm not prepared for, say, being offered a high-paying job at an unquantifiably-evil company. Similarly, if a friend asked me to lie to another friend about something important to them, I probably wouldn't carry out a utilitarian cost-benefit analysis. It seems that I'm happy to adopt consequentialist policy, but when it comes to personal quandaries where I have to decide for myself, I start asking myself about what sort of person this decision makes me. What's more, I'm not sure this is necessarily a bad heuristic in a social context.

It's also noteworthy (to me, at least) that I rarely experience moral dilemmas. They just don't happen all that often. I like to think I have a reasonably coherent moral framework, but do I really need one? Do I just lead a very morally-inert life? Or have abstruse thought experiments in moral philosophy equipped me with broader principles under which would-be moral dilemmas are resolved before they reach my conscious deliberation?

To make sure I'm not giving too much weight to my own experiences, I thought I'd put a few questions to a wider audience:

- What kind of moral dilemmas do you actually encounter?

- Do you have any thoughts on how much moral judgement you have to exercise in your daily life? Do you think this is a typical amount?

- Do you have any examples of pedestrian moral dilemmas to which you've applied abstract moral reasoning? How did that work out?

- Do you have any examples of personal moral dilemmas on a Trolley Problem scale that nonetheless happened?

The Username/password anonymous account is, as always, available.

Thoughts on minimizing designer baby drama

17 John_Maxwell_IV 12 May 2015 11:22AM

I previously wrote a post hypothesizing that inter-group conflict is more common when most humans belong to readily identifiable, discrete factions.

This seems relevant to the recent human gene editing advance.  Full human gene editing capability probably won't come soon, but this got me thinking anyway.  Consider the following two scenarios:

1. Designer babies become socially acceptable and widespread some time in the near future.  Because our knowledge of the human genome is still maturing, they initially aren't that much different than regular humans.  As our knowledge matures, they get better and better.  Fortunately, there's a large population of "semi-enhanced" humans from the early days of designer babies to keep the peace between the "fully enhanced" and "not at all enhanced" factions.

2. Designer babies are considered socially unacceptable in many parts of the world.  Meanwhile, the technology needed to produce them continues to advance.  At a certain point people start having them anyway.  By this point the technology has advanced to the point where designer babies clearly outclass regular babies at everything, and there's a schism between "fully enhanced" and "not at all enhanced" humans.

Of course, there's another scenario where designer babies just never become widespread.  But that seems like an unstable equilibrium given the 100+ sovereign countries in the world, each with their own set of laws, and the desire of parents everywhere to give birth to the best kids possible.

We already see tons of drama related to the current inequalities between individuals, especially inequality that's allegedly genetic in origin.  Designer babies might shape up to be the greatest internet flame war of this century.  This flame war could spill over into real-world violence.  But since one of the parties has not arrived at the flame war yet, maybe we can prepare.

One way to prepare might be differential technological development.  In particular, maybe it's possible to decrease the cost of gene editing/selection technologies while retarding advances in our knowledge of which genes contribute to intelligence.  This could allow designer baby technology to become socially acceptable and widespread before "fully enhanced" humans were possible.  Just as with emulations, a slow societal transition seems preferable to a fast one.

Other ideas (edit: speculative!): extend the benefits of designer babies to everyone for free regardless of their social class.  Push for mandatory birth control technology so unwanted and therefore unenhanced babies are no longer a thing.  (Imagine how lousy it would be to be born as an unwanted child in a world where everyone was enhanced except you.)  Require designer babies to possess genes for compassion, benevolence, and reflectiveness by law, and try to discover those genes before we discover genes for intelligence.  (Edit: leaning towards reflectiveness being the most important of these.)  (Researching the genetic basis of psychopathy to prevent enhanced psychopaths also seems like a good idea... although I guess this would also create the knowledge necessary to deliberately create psychopaths?)  Regulate the modification of genes like height if game theory suggests allowing arbitrary modifications to them would be a bad idea.

I don't know very much about the details of these technologies, and I'm open to radically revising my views if I'm missing something important.  Please tell me if there's anything I got wrong in the comments.

Philosophy professors fail on basic philosophy problems

16 shminux 15 July 2015 06:41PM

Imagine someone finding out that "Physics professors fail on basic physics problems". This, of course, would never happen. To become a physicist in academia, one has to (among a million other things) demonstrate proficiency on far harder problems than that.

Philosophy professors, however, are a different story. Cosmologist Sean Carroll tweeted a link to a paper from the Harvard Moral Psychology Research Lab, which found that professional moral philosophers are no less subject to the effects of framing and order of presentation on the Trolley Problem than non-philosophers. This seems as basic an error as, say, confusing energy with momentum, or mixing up units on a physics test.

Abstract:

We examined the effects of framing and order of presentation on professional philosophers’ judgments about a moral puzzle case (the “trolley problem”) and a version of the Tversky & Kahneman “Asian disease” scenario. Professional philosophers exhibited substantial framing effects and order effects, and were no less subject to such effects than was a comparison group of non-philosopher academic participants. Framing and order effects were not reduced by a forced delay during which participants were encouraged to consider “different variants of the scenario or different ways of describing the case”. Nor were framing and order effects lower among participants reporting familiarity with the trolley problem or with loss-aversion framing effects, nor among those reporting having had a stable opinion on the issues before participating the experiment, nor among those reporting expertise on the very issues in question. Thus, for these scenario types, neither framing effects nor order effects appear to be reduced even by high levels of academic expertise.

Some quotes (emphasis mine):

When scenario pairs were presented in order AB, participants responded differently than when the same scenario pairs were presented in order BA, and the philosophers showed no less of a shift than did the comparison groups, across several types of scenario.

[...] we could find no level of philosophical expertise that reduced the size of the order effects or the framing effects on judgments of specific cases. Across the board, professional philosophers (94% with PhD’s) showed about the same size order and framing effects as similarly educated non-philosophers. Nor were order effects and framing effects reduced by assignment to a condition enforcing a delay before responding and encouraging participants to reflect on “different variants of the scenario or different ways of describing the case”. Nor were order effects any smaller for the majority of philosopher participants reporting antecedent familiarity with the issues. Nor were order effects any smaller for the minority of philosopher participants reporting expertise on the very issues under investigation. Nor were order effects any smaller for the minority of philosopher participants reporting that before participating in our experiment they had stable views about the issues under investigation.

I am confused... I assumed that an expert in moral philosophy would not fall prey to the relevant biases so easily... What is going on?

 
