LWers living in Boulder/Denver area: any interest in an AI-philosophy reading group?
I'd like to put together an AI-philosophy reading group. Ideally we would dig right into the technical and philosophical topics, not glossing over the details (meaning we might read technical reports, textbook chapters, theses, etc.), and meet as often as is convenient. It'd be best to keep the group as small as possible, and I'm willing to take a leadership role in organizing and presenting (though by no means do I insist on being the main voice).
Potential discussion topics: machine ethics, theory of superintelligence, friendliness in artificial agents, philosophy of logic, really anything related to the building of a mind.
Luck II: Expecting White Swans
When we left off last time, we had discussed two of the four principles which Dr. Richard Wiseman believes account for the differences between lucky and unlucky people: do more to maximize the number of chance opportunities you have and take steps to improve your intuition. In this post we will discuss the third principle: expect good luck. We will talk a little about principle four, but it's going to get its own post.
But before all that, I want to spend the first bit of this post clearing up some of the confusion which resulted from my less-than-perfect presentation. My most persistent critic, Lumifer, made a number of claims which I think are worth addressing. As I understand it, here is the meat of the criticism:
1) Luck should be defined as the benefit one receives from events they have no control over, not the result of systematic differences between lucky and unlucky people.
2) The luck being discussed in the post was entirely the result of self-perception. More neurotic people are of course going to think themselves less lucky and are going to be less open to novel experiences. Whether or not this is true is another matter.
3) Related to point 2), perhaps lucky people are simply the people located in the thin strip on the right of a certain bell curve, and so they (naturally) consider themselves lucky. In other words, maybe the arrow of causality is pointing: [objectively lucky] --> [acts differently], rather than the reverse. This point will crop up repeatedly.
We could argue definitions all day, but let it be known that when I talk about luck I mean the differences between lucky and unlucky people which are a result of differences in their behavior. When I talk about lucky people I mean people who self-describe as lucky and for whom there is weak anecdotal evidence of their luck. After reading Dr. Wiseman's book I have a high confidence level that such behavioral differences exist, a high confidence level that they matter, and a slightly less high confidence level that they can be taught.
Of course, there isn't much a person can do to prevent their being killed by a meteorite, and nothing at all a person can do to stop themselves being born with Down Syndrome. But taken to an extreme, points 2) and 3) seem equivalent to saying that there is simply nothing a person can do to increase the likelihood that they will not be made a fool of by Lady Luck. That seems unwarranted to me, not least because Dr. Wiseman appears to have been able to teach some people the skill of luck.
There are better and worse ways of improving your bench press, better and worse ways of learning a foreign language...why wouldn't there be better and worse ways of improving the odds that you'll be exposed to positive random events (or, alternatively, decreasing the odds that you'll be exposed to negative randomness)? I think this case is bolstered by the fact that, at least according to the testimony of lucky people, their good fortune is spread out among many different areas of their life. It would be one thing if these 'lucky' people had gotten a break in their career or lucked out in their choice of marriage partners, but many of them seem to be lucky almost across the board. This still doesn't rule out pure, unadulterated chance, but I think it makes person-specific causes more plausible.
Granted, Dr. Wiseman's evidence comes mostly in the form of anecdotes, which is not particularly strong evidence. But it's more than no evidence. Establishing that people who think they are lucky really are objectively lucky at anything like p < .05 would require a monumental longitudinal study which, to my knowledge, no one has even come close to doing. Nevertheless, it's my impression that Dr. Wiseman made an honest effort at epistemic cleanliness, utilizing numerous questionnaires, tests, interviews, and actual experiments to tease apart causal threads, establishing that there may well be behaviors which lead to more luck.
It's not a mathematical proof, but I think there is a good dose of truth to it, and I think it's useful.
With that out of the way, you'll recall that the four principles and twelve sub-principles are:
Principle One: Maximize the number of chance opportunities you have in life.
sub-principle one: lucky people maintain a network of contacts with other people.
sub-principle two: lucky people are more relaxed and less neurotic than unlucky people.
sub-principle three: lucky people have a strong drive towards novelty, and strive to introduce variety into their routines.
Principle Two: Use your intuition to make important decisions.
sub-principle one: pay attention to your hunches.
sub-principle two: try and make your intuition more accurate.
Principle Three: Expect good fortune.
sub-principle one: lucky people believe their luck will continue.
sub-principle two: lucky people attempt to achieve their goals and persist through difficulty.
sub-principle three: lucky people think their interactions will be positive and successful.
Principle Four: Turn bad luck into good.
sub-principle one: lucky people see the silver lining in bad situations.
sub-principle two: lucky people believe that things will work out for them in the long run.
sub-principle three: lucky people spend less time brooding over bad luck.
sub-principle four: lucky people are more proactive in learning from their mistakes and preventing further bad luck.
Great Expectations
If you look at principles three and four, you'll see that most of the sub-principles have to do with what lucky people think will happen in the future. When given a set of questionnaires testing respondents' beliefs that they would experience positive and negative events in the future, we again find stark differences between lucky and unlucky people. Overwhelmingly, lucky people were more likely than unlucky people to believe they would have a good time on vacation, be admired for their accomplishments, develop good relationships with their families, etc. Conversely, unlucky people were more likely to believe that they would become overweight later in life, decide that they’d chosen the wrong career, be mugged, etc.
Maybe this is straightforward inductive inference: if you've mostly had bad or good luck in the past, it makes sense to believe that this will continue into the future. But if psychology were this crisp and simple, life would be a lot easier. Besides all the heuristics and biases that cloud thinking, our expectations about the future feed back into the causal matrix which determines our behaviors, influencing both what actually happens to us as well as how we interpret what happens to us. Each of these will be important to our discussion.
Making self-fulfillment work for you (?)
So, if we grant that expectations exert some influence (however small) on what happens to people, what results when two groups of people vary in terms of their expectations for the future?
Dr. Wiseman believes that lucky people's positive expectations account for the fact that they are often very persistent in the face of adversity, and that this leads to self-fulfilling prophecies of success. When he gave three lucky and three unlucky people a very difficult puzzle to solve, two of the lucky people spent significantly longer working on the puzzle than the unlucky people did (over an hour, versus around 20 minutes). (One of the lucky people miscounted the number of puzzle pieces and, believing one to be missing and the puzzle therefore impossible, didn't even begin!) Quotes from interviews with lucky and unlucky people offer evidence that lucky people often spend more time chasing their ambitions while unlucky people have in some cases stopped even trying.
But are lucky people more persistent because of their beliefs that the future is bright, or could it be the case that lucky people were simply more persistent as a matter of their personal psychology?
A more clear-cut example comes from the realm of interpersonal interaction. Here, it turns out, we have good evidence for the power of self-fulfilling prophecies. Several famous studies have demonstrated that the beliefs you have when you enter into an interaction can profoundly shape the course of that interaction. Dougherty et al. (1994) found that when people interviewing candidates for a job had high expectations for the candidates, the interviewers were friendlier, and the candidates thus made a better impression. Still more powerfully, Snyder et al. (1977) demonstrated that when men thought they were talking to an attractive woman, not only did they act more warmly towards the woman, and not only did she respond more sociably, but other people listening to only the woman's part of the conversation also thought she was more attractive.
Did Dr. Wiseman's research yield any new insights into this area? Anecdotes included in the book paint a picture of lucky people's ability to quickly form warm and close relationships with people, allegedly on the basis of their expectations that other people will be interesting, funny, etc.
Perhaps lucky people's beliefs that their interactions will be positive actually lead to positive interactions, and independent research indicates that there is something to this. But recall from my last post that lucky people also smile, make better eye contact, and have friendlier body language than unlucky people, and maybe this accounts for their good experiences with people. Or they could have just always been lucky with respect to their interactions and thus believe this state of affairs will continue. Unfortunately, I feel that Dr. Wiseman's work did little to clarify these underlying issues.
That said, I do think that there are two valuable things to learn here: 1) don't give up hope too early, and 2) people's expectations of others powerfully influence how their interactions unfold.
Remember how during the last essay I said that some people may worry that principle one ('maximize the number of chance opportunities you have') might also expose you to a lot of black swans? Well, persistence is one reason why this isn't such a big problem. With enough hard work, a gray or even black swan encounter can be made into a white swan (though I freely admit any rational person has to know when to give up). There is a bigger reason than this, but it'll have to wait for the next post, because this one has gotten long enough.
Suggested Exercises
As with the first two principles, Dr. Wiseman recommends the following exercises:
-Begin each day with positive affirmations, of the "I know that I will be lucky in the future" variety.
-Make a list of your short, medium, and long-term goals, reviewing the list periodically. This helps establish high expectations for the future.
-To maintain motivation, write down the costs and benefits associated with achieving a goal. Having a concrete analysis to look at should help you persist, assuming that the benefits actually do outweigh the costs.
-With a potentially difficult situation on the horizon, like a date or job interview, spend a few minutes visualizing yourself confidently and successfully navigating it.
Criticisms and open questions
I'll come right out and say it: I thought this section was weaker than the others, and less useful to readers of this blog. There's so much mushy-headed nonsense out there about how 'perception is reality' and you should 'visualize your way into wealth' that when I read the title of principle three ('expect good luck') my eyes glazed over a bit.
Still. Goals, emotions, expectations. These are as much a part of the fabric of the world as chairs are, and we can no more ignore them than we can any of the other threads in that tapestry. If it is true that what I think will happen affects what actually happens, even if those expectations aren't based on anything particularly rational, then I want to believe that that is the case, and plan my life accordingly.
So I ask:
1) Might there be domains where there is a slight negative expected utility for accuracy of belief, at least at the levels of rationality attainable by humans now (see: discussions of the valley of bad rationality)? For a true master of the mature art of human rationality, a person who has a detailed self-model and very accurate probability estimates, there would presumably be no reason to fiddle with expectations; these would flow naturally from their beliefs about the world. But since I don't yet have anything like that, maybe it's a good idea for me to purposefully try to make myself believe that the future will be good.
2) Can a person have a belief in self-fulfilling belief? If you know about self-fulfilling prophecies, does that make you better or worse at making them happen?
3) Let's say I'm an objectively, physically unattractive person, but because of positive attention I received during childhood I believe myself to be attractive and thus have moderate success in dating. Is my belief in my own attractiveness warranted? Does the answer change if, instead of being based on childhood experiences, I believe I'm attractive because I chanted "I am attractive and deserving of love" ten times before I left the house every morning?
4) Is it ethical to exploit this knowledge, even if you're doing it to make another person more successful? When, if ever, is it appropriate to put down the mantle of rationality and let people believe silly things (or even actively encourage them)? One possible example: when giving a pep talk to beleaguered troops in the minutes before a battle.
Luck I: Finding White Swans
Quoth the Master, great in Wisdom, to the Novice: "Ye, carry with thee all thy days a cheque folded up in your wallet. For there may be many situations in which thou shalt have need of it."
And the Novice, of high intelligence but lesser wisdom, replied, saying unto the Master: "Of what situations dost thou speak?"
To which the Master replied: "imagine that thou dost come upon a nice piece of land, and wish to make a down payment on it. The real estate market moveth quickly in these troubled economic times, and you may soon find your opportunity dried up like dead leaves in summer. What would you do?" The Master, you see, did dabble in real estate development a little, and his knowledge was deep in these matters.
The Novice thought for a moment, saying: "But always I carry with me a credit card. Surely this is sufficient for my purposes."
And the Master replied: "Thou knoweth not the ways of commerce. Thinketh thee that all dealings are conducted within feet of a machine that can read credit cards?!"
The Novice knew the ways of Traditional Rationality and Skepticism, and felt it his duty to take the opposite stance to the Master, lest he unthinkingly obey an authority figure. Undeterred, he replied, saying unto the Master: "But always I carry with me cash. Surely this is sufficient for my purposes."
Upon hearing this, the Master did reply, incredulously: "Would thee carry with thee always an amount of cash equal to the reasonable asking price of a down payment for a piece of land?!"
And lo, the Novice did understand, though he could not put it into these words, that the Master did speak of a certain stance with respect to the unknown. The swirling chaos of reality may be impossible to predict, but there are things an aspiring empirimancer can do to make it more likely that ve will have good fortune.
Verily, know that that which people call 'luck' is not the smile of a beneficent god, but the outcome of how some people interact with chance.
______________________________________________________________________________________________________________
Consider for a moment two real people, whom we will call "Martin" and "Brenda", who consider themselves lucky and unlucky, respectively. Both are part of the group of exceptionally lucky/unlucky people which psychologist Dr. Richard Wiseman has assembled to try and scientifically study the phenomenon of luck.
(The following is taken from his book "The Luck Factor", and interested parties should go there for more information.)
As part of the research, both people were placed in identical, fortuitous circumstances, but each handled the situation very differently. The setting: a small coffee shop, arranged so that there were four tables with a confederate (someone who knows about the experiment) sitting at each table. One of these confederates was a wealthy businessman, the kind of person that, should you happen to meet him in real life and make a good impression, could set you up with a well-paying job. All the confederates were told to act the same way for both Brenda and Martin. On the street right outside the coffee shop, the researchers placed a £5 note.
Brenda and Martin were told to go to the coffee shop at different times, and their behavior was covertly filmed. Martin noticed the money sitting on the street and picked it up. When he went into the coffee shop he sat down next to the businessman and struck up a conversation, even offering to buy him a coffee. Brenda walked past the money, never noticing it, and sat quietly in the shop without talking to anyone.
Fortune favors the...?
There are obvious differences in Brenda and Martin's behavior, but are they indicative of more far-reaching differences in how lucky and unlucky people live their lives? First, let's discuss what doesn't differentiate lucky from unlucky people. Wiseman, having assembled his initial group of subjects, tested them on two traits which could have an impact on luck: intelligence and psychic ability. Determining that intelligence wasn't a factor was as easy as administering an intelligence test. Psychic ability was ruled out by having both lucky and unlucky people pick lottery numbers, with the result being that neither group was more successful than the other.
Wiseman further tested for differences in personality using the Five Factor Model of Personality, which you will recall breaks personality up into Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (the acronym OCEAN makes for easy recall). Lucky and unlucky people showed no differences in Conscientiousness or Agreeableness, but did show differences in Openness, Extraversion, and Neuroticism. It is here that an interesting picture began to emerge.
Ultimately, Wiseman was able to break luck down into four overarching principles and twelve subprinciples, summarized here:
Principle One: Maximize the number of chance opportunities you have in life.
sub-principle one: lucky people maintain a network of contacts with other people.
sub-principle two: lucky people are more relaxed and less neurotic than unlucky people.
sub-principle three: lucky people have a strong drive towards novelty, and strive to introduce variety into their routines.
Principle Two: Use your intuition to make important decisions.
sub-principle one: pay attention to your hunches.
sub-principle two: try and make your intuition more accurate.
Principle Three: Expect good fortune.
sub-principle one: lucky people believe their luck will continue.
sub-principle two: lucky people attempt to achieve their goals and persist through difficulty.
sub-principle three: lucky people think their interactions will be positive and successful.
Principle Four: Turn bad luck into good.
sub-principle one: lucky people see the silver lining in bad situations.
sub-principle two: lucky people believe that things will work out for them in the long run.
sub-principle three: lucky people spend less time brooding over bad luck.
sub-principle four: lucky people are more proactive in learning from their mistakes and preventing further bad luck.
I suspect that LWers will have a unique set of reactions to and problems with each of these principles, so let's take them one at a time. In this essay, I will examine the first two.
Facing up to randomness
First, how would you go about increasing the likelihood of positive chance encounters? Well, you could start spending more time talking to strangers and making friends with people. Indeed, one of the important differences between unlucky and lucky people is that lucky people are more outgoing, more friendly and open in their body language (lucky people smiled and made eye contact far, far more often), and keep in touch with people they meet longer. The age-old adage 'it's not what you know, but who you know' has more than a grain of truth in it, and a great way to get to know the right people is by simply getting to know more people, period. The chances of any given person being the contact you need are pretty slim, but the odds improve with every person you get to know.
This actually works on several levels. Since the complexity of the world greatly exceeds the cognitive abilities of any one person, cultivating a strong social network positions you to take advantage of the knowledge and experience of others. Even if you are so much smarter than person X that they can't compete with you along any dimension, they may still have information you don't, or they may know somebody who knows somebody who can help you out.
Moreover, I'm sure everyone is familiar with the experience of struggling with a problem, only to have a random conversation (with a stranger or a friend) shake loose a key insight. This can happen locally inside your own head when you have the necessary raw material lying around but haven't seen a certain connection. In this situation you would have eventually hit upon the insight, but the process has been expedited. More valuable still is when two or more people enter a conversation that produces an insight that nobody had the necessary components to produce for themselves; I think this is part of what Matt Ridley means when he talks about ideas having sex.
So you're doing your best to meet more people and flex your extroversion muscles. Next, you might try and be more spontaneous and random in your life. Wiseman notes that many lucky people have a strong orientation towards variety and novel experiences. Some of them, facing an important decision like which car to buy, will do something like list their options on a piece of paper and then roll a die.
You don't need to go quite this far; it's also acceptable to shop different places, take different routes to work, or pick a new part of the city to explore every month. The takeaway here is that it's difficult to have positive chance encounters if you always do the same thing.
One of my favorite examples of someone positioning themselves to benefit from chance comes from HPMoR, when Harry and Hermione first read all the titles of the books in the library and then read all the tables of contents. From their point of view the books in the library are a vast store of unknown information, any bit of which they might need at a given time. Since reading every single book isn't an option, familiarizing themselves with the information in a systematic way means creating many potential sources of insight while simultaneously reducing the cost of doing future research. Hacker Eric Raymond made a related point in the context of winning table-top board games:
I made chance work for me. Pay attention, because I am about to reveal why there is a large class of games (notably pick-up-and-carry games like Empire Builder, network-building games like Power Grid, and more generally games with a large variety of paths to the win condition) at which I am extremely difficult to beat. The technique is replicable.
I have a rule: when in doubt, play to maximize the breadth of your option tree. Actually, you should often choose option-maximizing moves over moves with a slightly higher immediate payoff, especially early in the game and most especially if the effect of investing in options is cumulative.
What's the common thread between extroversion, skimming the library shelves, and beating your friends at board games? Certain actions and certain states of mind make it more likely you'll benefit from white swans.
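Raymond's option-maximizing heuristic can be sketched in a few lines of code. This is a toy model, not anything from his post: the move names, payoffs, option counts, and the `payoff_tolerance` parameter are all hypothetical, chosen just to show the tie-breaking idea.

```python
def choose_move(moves, payoff_tolerance=1):
    """Pick a move from a list of (name, payoff, options_opened) tuples.

    Heuristic: among moves whose immediate payoff is close to the best
    available, prefer the one that leaves the most follow-up options open.
    """
    best_payoff = max(payoff for _, payoff, _ in moves)
    # Keep only moves whose payoff is within tolerance of the best...
    contenders = [m for m in moves if best_payoff - m[1] <= payoff_tolerance]
    # ...then break the tie by breadth of the option tree.
    return max(contenders, key=lambda m: m[2])

moves = [
    ("grab cargo now",   5, 2),  # highest payoff, but dead-ends quickly
    ("extend rail line", 4, 7),  # slightly lower payoff, many follow-ups
    ("pass",             0, 3),
]
print(choose_move(moves))  # -> ('extend rail line', 4, 7)
```

The interesting behavior is in the second step: "grab cargo now" wins on immediate payoff, but since "extend rail line" is within tolerance and keeps far more options alive, the heuristic prefers it.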
(Clever readers may be saying to themselves: "okay, but doesn't all this also make the chances of encountering black swans higher as well?" We will address these concerns when we talk about principles three and four.)
Attitude matters
We've covered extraversion and openness, but the lucky people Dr. Wiseman interviewed were also more relaxed and less neurotic than the unlucky ones. This has obvious consequences for when you are trying to meet new people, but research also hints that being less anxious may make you more likely to notice things you aren't specifically looking for. This is probably why several of Dr. Wiseman's lucky participants remarked on how often they found money on the street, found great opportunities while listening to the radio or reading the newspaper, and in general stumbled over opportunities in places where other people simply failed to notice them.
This attitude undergirds and complements much of what I discussed in the previous section; while you are trying to maximize your pathways to victory, don't forget that constantly worrying and mentally spinning your wheels will make you less likely to see a chance opportunity.
Pump your intuition
Lucky people tend to have strong intuitions, and they have a habit of paying careful attention to them. I'm sure you're skeptical of this advice, as I was when I first started reading this section. Given present company I don't think I need to reiterate all the billion ways intuition can be derailed and misleading. That said, placing intuition and rationality as orthogonal to one another is a good example of the straw vulcan of rationality. Intuitions are of course not always wrong, and in some cases may be the only source of information a person has to go off of.
Two things put a little nuance on the proposition that you should listen to your intuitions. The first is that, as far as I can tell, lucky people don't trust their intuitions immediately and absolutely. They don't stand at a busy intersection, blindfolded, and trust their gut to tell them when it's safe to cross. Rather, their hunches act more like yellow traffic lights, telling them that they should proceed with caution here or do a bit more research there. In other words, it sounds to me like lucky people treat their intuitions in a pretty rational manner, as data points, to be used but not relied upon in isolation unless there is just nothing else available.
The other thing is that many lucky people take steps to sharpen their intuitions, utilizing quiet solitude or meditation. Dr. Wiseman goes into precious little detail about this, including just a few anecdotal descriptions of people's efforts to clear their minds. The rationalist community will be familiar with more quantitative methods like PredictionBook, and googling for 'improving your intuitions' turned up about as much garbage as you'd probably expect. If anyone has leads to legitimate research on improving intuition, I'd be happy to add an addendum.
Suggested exercises
Throughout the book Dr. Wiseman includes exercises which are meant to help people utilize the principles uncovered in his research to become luckier. Here are the suggested exercises for the topics discussed in this post:
-To enhance your extraversion, strike up a conversation with four people you either don't know or don't know well. Do this each week for a month. Additionally, every week make contact with a person you haven't spoken to in a while.
-To relax, find a quiet place and picture yourself in a beautiful, calming scene. Make sure to visualize each and every detail of the location, including whatever sounds and smells are around you. When you've got the scene in place, visualize the tension leaving your body in the form of a liquid flowing out of you, starting with your head. Once you feel sufficiently relaxed, slowly open your eyes.
-Inject some randomness into your life by making a list of 6 new experiences. These can be anything from trying a new type of food to taking a class on a subject you've always been interested in. Number them 1 to 6, roll a die, and then do whatever corresponds to the number you rolled.
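The die-roll exercise above is simple enough to automate. Here is a minimal sketch; the six experiences listed are only placeholder examples, and you'd substitute your own.

```python
import random

# Six candidate experiences, numbered 1-6 by their list position.
experiences = [
    "try an unfamiliar cuisine",
    "take a class in a subject you've been curious about",
    "explore a neighborhood you've never visited",
    "attend a local meetup",
    "read a book from a genre you usually avoid",
    "learn the basics of a new game",
]

roll = random.randint(1, 6)  # roll the die
print(f"You rolled a {roll}: {experiences[roll - 1]}")
```

The point of the die, of course, is precommitment: you do whatever comes up, rather than quietly re-rolling until you get the option you already wanted.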
This essay can also be found at Rulers To the Sky.
Existential Risk II
Meta
-This is not a duplicate of the original Less Wrong x-risk primer. I like lukeprog's article just fine, but it works mostly as a punch in the gut for anyone who needs a wake-up call. Very little of the actual research on x-risk is discussed in that article, so the gap that existed before it was published largely remained afterward. My article and his would work well read together.
-This was originally written to accompany a presentation I gave, hence the random inclusion of both hyperlinks and citations. It also lives, with minor differences, here.
-Summary: For various reasons the future is scarier than a lot of people realize. All sorts of things could lead to the destruction of the human species, ranging from asteroid impacts to runaway AIs, and these things are united by the fact that any one of them could destroy the value of the future from a human perspective. The dangers can be separated into bangs (very sudden extinction), crunches (not fatal but crippling), shrieks (mostly curse with a little blessing), and whimpers (a long, slow fading), though there is nothing sacred about these categories. Some humans are trying to prevent this, though their methods are still in their infancy. Much more should be done to support them.
In the beginning
I want to start this off with a quote, which nicely captures both how I used to feel about the idea of human extinction and how I feel about it now:
I think many atheists still trust in God. They say there is no God, but …[a]sk them how they think the future will go, especially with regards to Moral Progress, Human Evolution, Technological Progress, etc. There are a few different answers you will get: Some people just don’t know or don’t care. Some people will tell you stories of glorious progress… The ones who tell stories are the ones who haven’t quite internalized that there is no god. The people who don’t care aren’t paying attention. The correct answer is not nervous excitement, or world-weary cynicism, it is fear. -Nyan Sandwich
Back when I was a Christian I gave some thought to the rapture, which is not entirely unlike extinction as far as most ten-year-olds can tell. Sometime during this period I found a slim little book of fiction which portrayed a damned soul's experience of burning in hell forever, and that did scare me. Such torment, as luck would have it, is easy enough to avoid if you just call god the right name and ask forgiveness often enough.
When I was old enough to contemplate possible secular origins of the apocalypse, I was both an atheist and one of the people who tell glorious stories about the future. The potential fruits of technological development, from the end of aging to the creation of a benevolent super-human AI, excited me, and still excite me now. No doubt I would've admitted the possibility of human extinction; I don't really remember. But there wasn't the kind of internal siren that should go off when you start thinking seriously about one of the Worst Possible Outcomes. That I would remember.
But as I've gotten older I've come to appreciate that most of us are not afraid enough of the future. Those who are afraid are often afraid for the wrong reasons.
What is an Existential Risk?
An existential risk or x-risk (to use a common abbreviation) is "...one that threatens to annihilate Earth-originating intelligent life or permanently and drastically to curtail its potential" (Bostrom 2006). The definition contains some subtlety, as not all x-risks involve the outright death of every human. Some could take eons to unfold, and some are even survivable. Positioning x-risks within the broader landscape of risks yields something like this chart:
At the top right extreme is where Cthulhu sleeps: risks that carry the potential to drastically and negatively affect this and every subsequent human generation. So as not to keep everyone in suspense, let's use this chart to put a face on the shadows.
Four Types of Existential Risks
Philosopher Nick Bostrom has outlined four broad categories of x-risk. In more recent papers he hasn't used the terminology that I'm using here, so maybe he thinks the names are obsolete. I find them evocative and useful, however, so I'll stick with them until I have a reason to change.
Bangs are probably the easiest risks to conceptualize. Any event which causes the sudden and complete extinction of humanity would count as a Bang. Think asteroid impacts, supervolcanic eruptions, or intentionally misused nanoweapons.
Crunches are risks which humans survive but which leave us permanently unable to navigate to a more valuable future. An example might be depleting our planetary resources before we manage to build the infrastructure needed to mine asteroids or colonize other planets. After all the die-offs and fighting, some remnant of humanity could probably survive indefinitely, but it wouldn't be a world you'd want to wake up in.
Shrieks occur when a post-human civilization develops but only manages to realize a small amount of its potential. Shrieks are very difficult to effectively categorize, and I'm going to leave examples until the discussion below.
Whimpers are really long-term existential risks. The most straightforward is the heat death of the universe; within our current understanding of physics, no matter how advanced we get we will eventually be unable to escape the ravages of entropy. Another could be if we encounter a hostile alien civilization that decides to conquer us after we've already colonized the galaxy. Such a process could take a long time, and thus would count as a whimper.
The fact that whimpers are so much less immediate than the other categories of x-risk doesn't mean we can simply ignore them; it has been argued that shaping the far future is one of the most important projects facing humanity, and that we should take the time to do it right.
Sharp readers will no doubt have noticed that there is quite a bit of fuzziness to these classifications. Where, for example, should we put all-out nuclear war, the establishment of an oppressive global dictatorship, or the development of a dangerous and uncontrollable superintelligent AI? If everyone dies in the war it counts as a bang, but if it makes a nightmare of the biosphere while leaving a good fraction of humanity intact it would be a crunch. A global dictatorship wouldn't be an x-risk unless it used some (probably technological) means to achieve near-total control and long-term stability, in which case it would be a crunch. But it isn't hard to imagine such a situation in which some parts of life did get better, like if a violently oppressive government continued to develop advanced medicines so that citizens were universally healthier and longer-lived than people today. If that happened, it would be a Shriek. A similar analysis applies to the AI, with the possible outcomes being Bang, Crunch, and Shriek depending on just how badly we misprogrammed it.
What Ties These Threads Together?
Even if you think existential threats deserve more attention, the rationale for treating them as a diverse but unified phenomenon may not be obvious. In addition to the crucial but (relatively) straightforward work of, say, tracking Near-Earth Objects (NEOs), existential risk researchers also think seriously about alien invasions and rogue AIs. With such a range of speculativeness, why group x-risks together at all?
It turns out that they share a cluster of features which gives them some cohesion and makes them worth studying under a single label, not all of which I discuss here. First and most obvious is that should any of them occur, the consequences would be truly vast relative to any other kind of risk. To see why, think about the difference between a catastrophe that kills 99% of humanity and one that kills 100%. As big a tragedy as the former would be, there's a chance humans could recover and build a post-human civilization. But if every person dies, then the entire value of our future is lost (Bostrom 2013).
Second, these are not risks which admit of a trial-and-error approach. Pretty much by definition, a collision with an x-risk will spell doom for humanity, and so we must be more proactive in our strategies for reducing them. Relatedly, we as a species have neither the cultural nor the biological instincts needed to prepare us for the possibility of extinction. A group of people might live through several droughts and thus develop strong collective norms around planning ahead and keeping generous food reserves. But a group cannot have gone extinct multiple times, and thus it can't rely on shared experience and cultural memory to guide it in the future. I certainly hope we can develop a set of norms and institutions which makes us all safer, but we can't wait to learn from history. We're going to have to start well in advance, or we won't survive.
A final commonality I'll mention is that the solutions to quite a number of x-risks are themselves x-risks. A powerful enough government could effectively halt research into dangerous pathogens or nano-replicators. But given how States have generally comported themselves in the past, one would do well to be cautious before investing them with that kind of power. Ditto for a superhuman AI, which could set up an infrastructure to protect us from asteroids, nuclear war, or even other less Friendly AI. Get the coding just a little wrong, though, and it might reuse your carbon to make paperclips.
It is indeed a knife edge along which we creep towards the future.
Measuring the Monsters
A first step is getting straight about how likely survival is. The reader may have encountered predictions of the "we have only a 50% chance of surviving the next hundred years" variety. Examining the validity of such estimates is worth doing, but I won't be taking up that challenge here; I tend to agree that these figures involve a lot of subjective judgement, but that even if the chances were very, very small the risk would still be worth taking seriously (Bostrom 2006). At any rate, it seems to me that trying to calculate an overall likelihood of human extinction is premature before we've nailed down probabilities for some of the different possible extinction scenarios. It is to the techniques which x-risk researchers rely on to do this that I now turn.
X-risk assessments rely on both direct and indirect methods (Bostrom 2002). Using a direct method involves building a detailed causal model of the phenomenon and using that to generate a risk probability, while indirect methods include arguments, thought experiments, and information that we use to constrain and refine our guesses.
As far as I know, for some x-risks we could use direct methods if we just had a way to gather the relevant information. If we knew where all the NEOs were, we could use settled physics to predict whether any of them posed a threat and then prioritize accordingly. But we don't know where they all are, so we might instead examine the frequency of impacts throughout the history of the Earth and then reason about whether or not we think an impact will happen soon. It would be nice to exclusively use direct methods, but we supplement with indirect methods when we can't, and of course for x-risks like AI we are in an even more uncertain position than we are for NEOs.
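To make the frequency-based style of reasoning concrete, here's a minimal sketch. It treats impacts as a Poisson process, which converts an assumed average time between events into a probability over some horizon. The 100-million-year interval below is a placeholder for illustration, not a real impact rate:

```python
import math

def prob_at_least_one(mean_interval_years, horizon_years):
    """P(at least one event within the horizon), modeling events as a
    Poisson process with the given mean time between occurrences."""
    rate = 1.0 / mean_interval_years  # expected events per year
    return 1.0 - math.exp(-rate * horizon_years)

# Placeholder figure: suppose extinction-scale impacts average one per
# 100 million years. The chance of at least one in the next century:
print(prob_at_least_one(100_000_000, 100))  # ~1e-6
```

Real assessments are far more involved, but even this sketch shows what better data buys us: a complete NEO catalog would replace the assumed base rate with a direct, physics-based prediction.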
The Fermi Paradox
Applying indirect methods can lead to some strange and counter-intuitive territory, an example of which is the mysteries surrounding the Fermi Paradox. The central question is: in a universe with so many potential hotbeds of life, why is it that when we listen for stirring in the void all we hear is silence? Many feel that the universe must be teeming with life, some of it intelligent, so why haven't we seen any sign of it yet?
Musing about possible solutions to the Fermi Paradox can be a lot of fun, and it's worth pointing out that we haven't been looking that long or that hard for signals yet. Nevertheless I think the argument has some meat to it.
Observing this state of affairs, some have postulated the existence of at least one Great Filter, a step in the chain of development from the first organisms to space-faring civilizations that must be extremely hard to achieve.
This is cause for concern because the Great Filter could be in front of us or behind us. Let me explain: imagine a continuum with the simplest self-replicating molecules on one side and the Star Trek Enterprise on the other. From our position on the continuum we want to know whether or not we have already passed one of the hardest steps, but we have only our own planet to look at. So imagine that we send out probes to thousands of different worlds in the hopes that we will learn something.
If we find lots of simple eukaryotes that means that the Great Filter is probably not before the development of membrane-bound organelles. The list of possible places on the continuum the Great Filter could be shrinks just a little bit. If instead we find lots of mammals and reptiles (or creatures that are very different but about as advanced), that means the Great Filter is probably not before the rise of complex organisms, so the places the Great Filter might be hiding shrinks again. Worst of all would be if we find the dead ruins of many different advanced civilizations. This would imply that the real killer is yet to come, and we will almost certainly not survive it.
As happy as many people would be to discover evidence of life in the universe, a case has been made that we should hope to find only barren rocks waiting for us in the final frontier. If not even simple bacteria evolve on most worlds, then there is still a chance that the Great Filter is behind us, and we can worry only about the new challenges ahead, which may or may not be Filters as great as the ones in the past.
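This "barren rocks are good news" logic can be sketched as a toy Bayes'-rule update. The priors and likelihoods below are entirely made up; the point is only the direction of the shift:

```python
def posterior_filter_ahead(prior_ahead, p_life_if_ahead, p_life_if_behind,
                           life_found):
    """Update P(the Great Filter is ahead of us) on whether simple life
    turns out to be common on other worlds, via Bayes' rule."""
    prior_behind = 1.0 - prior_ahead
    if life_found:
        like_ahead, like_behind = p_life_if_ahead, p_life_if_behind
    else:
        like_ahead, like_behind = 1.0 - p_life_if_ahead, 1.0 - p_life_if_behind
    evidence = like_ahead * prior_ahead + like_behind * prior_behind
    return like_ahead * prior_ahead / evidence

# Made-up numbers: if the Filter is ahead of us, simple life should be
# common elsewhere (0.9); if abiogenesis itself was the Filter, it
# shouldn't be (0.1). Start from a 50/50 prior.
print(round(posterior_filter_ahead(0.5, 0.9, 0.1, life_found=True), 3))   # 0.9
print(round(posterior_filter_ahead(0.5, 0.9, 0.1, life_found=False), 3))  # 0.1
```

Finding widespread life pushes our credence toward the Filter lying ahead of us (bad news); finding barren rocks pushes it behind us (good news), which is exactly the argument above.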
If all this seems really abstract and out there, that's because it is. But I hope it is clear how this sort of thinking can help us interpret new data, make better guesses, form new hypotheses, etc. When dealing with stakes this high and information this limited, one must do the best one can with what's available.
Mitigation
What priority should we place on reducing existential risk, and how can we do it? I don't know of anyone who thinks all our effort should go towards mitigating x-risks; there are lots of pressing issues which are not x-risks that are worth our attention, like abject poverty or geopolitical instability. But I feel comfortable saying we aren't doing nearly as much as we should be. Given the stakes, and the fact that there probably won't be a second chance, we are going to have to meet x-risks head on and be aggressively proactive in mitigating them.
Suppose we taboo 'aggressively proactive'; what's left? Well, the first step, as it so often is, will be just to get the right people to be aware of the problem (Bostrom 2002). Thankfully this is starting to happen as more funding and brain power go into existential risk reduction. We have to get to a point where we are spending at least as much time, energy, and effort making new technology safe as we do making it more powerful. More international cooperation on these matters will be necessary, and there should be some sort of mechanism by which efforts to develop existentially threatening technologies like super-virulent pathogens can be stopped. I don't like recommending this at all, but almost anything is preferable to extinction.
In the meantime both research that directly reduces x-risk (like NEO detection), as well as research that will help elucidate deep and foundational issues in x-risk (FHI and MIRI) should be encouraged. It's a stereotype that research papers always end with a call for more research, but as was pointed out by lukeprog in a talk he gave, there's more research done on lipstick than on friendly AI. This generalizes to x-risk more broadly, and represents the truly worrying state of our priorities.
Conclusion
Though I maintain we should be more fearful of what's to come, that should not obscure the fact that the human potential is vast and truly exciting. If the right steps are taken, we and our descendants will have a future better than most can even dream of. Life spans measured in eons could be spent learning and loving in ways our terrestrial languages don't even have words for yet. The vision of a post-human civilization flinging its trillions of descendants into the universe to light up the dark is tremendously inspiring. It's worth fighting for.
But we have much work ahead of us.
[LINKS] Killer Robots and Theories of Truth
Peter at the Conscious Entities blog wrote an essay on the problems with using autonomous robots for combat, and attempts to articulate some general principles which allow them to be used ethically. He says:
In essence I think there are four broad reasons why hypothetically we might think it right to be wary of killer robots: first, because they work well; second because in other ways they don’t work well, third because they open up new scope for crime, and fourth because they might be inherently unethical.
Unpacking this a little: autonomous robots will change the character of war and make it easier for many to wage, can be expected to malfunction in serious ways in especially complex and open-ended situations, might be re-purposed for crime, and for various reasons make the ethics surrounding war even more dubious.
He even takes a stab at laying out restrictive principles which will help mitigate some of the danger in utilizing autonomous robots:
P1. Killer Robots should not be produced or used in a way that allows them to fall into the hands of people who will use them unethically.
P2. Killer Robots should not be used for any mission which you would not be prepared to assign to a human soldier if a human soldier were capable of executing it.
P3. Killer Robots should not be used for any mission in unpredictable circumstances or where the application of background understanding may be required.
P4. Killer Robots should not be equipped with capacities that go beyond the immediate mission; they should be subject to built-in time limits and capable of being shut down remotely.
He is a non-expert in the field, but I (also a non-expert) find his analysis capable and thorough, though I spotted some possible flaws. I mention it here at LessWrong because, while we may be decades away from superintelligent AI, work in AI risk and machine ethics is going to become especially important very soon as drones, robots, and other non-human combatants become more prevalent on battlefields all over the world.
Switching gears a bit, Massimo Pigliucci of Rationally Speaking fame lays out some common theories of truth and problems facing each one. If you've never heard of Charles Sanders Peirce and wouldn't know a verificationist account of truth if it hit you in the face, Massimo's article could be a good place to start getting some familiarity. It seems relevant because there has been some work on epistemology in these parts recently. And, as Massimo says:
...it turns out that it is not exactly straightforward to claim that science makes progress toward the truth about the natural world, because it is not clear that we have a good theory of truth to rely on; moreover, there are different conceptions of truth, some of which likely represent the best we can do to justify our intuitive sense that science does indeed make progress, but others that may constitute a better basis to judge progress (understood in a different fashion) in other fields — such as mathematics, logic, and of course, philosophy.
This matters for anyone who wants to know how things are, but is even more urgent for one who would create a truth-seeking artificial mind.
How to Have Space Correctly
[NOTE: This post has undergone substantial revisions following feedback in the comments section. The basic complaint was that it was too airy and light on concrete examples and recommendations. So I've said oops, applied the virtue of narrowness, gotten specific, and hopefully made this what it should've been the first time.]
Take a moment and picture a master surgeon about to begin an operation. Visualize the room (white, bright overhead lights), his clothes (green scrubs, white mask and gloves), the patient, under anesthesia and awaiting the first incision. There are several other people, maybe three or four, strategically placed and preparing for the task ahead. Visualize his tools - it's okay if you don't actually know what tools a surgeon uses, but imagine how they might be arranged. Do you picture them in a giant heap which the surgeon must dig through every time he wants something, or would they be arranged neatly (possibly in the order they'll be used) and where they can be identified instantly by sight? Visualize their working area. Would it be conducive to have random machines and equipment all over the place, or would every single item within arm's reach be put there on purpose because it is relevant, with nothing left over to distract the team from their job for even a moment?
Space is important. You are a spatially extended being interacting with spatially extended objects which can and must be arranged spatially. In the same way it may not have occurred to you that there is a correct way to have things, it may not have occurred to you that space is something you can use poorly or well. The stakes aren't always as high as they are for a surgeon, and I'm sure there are plenty of productive people who don't do a single one of the things I'm going to talk about. But there are also skinny people who eat lots of cheesecake, and that doesn't mean cheesecake is good for you. Improving how you use the scarce resource of space can reduce task completion time, help in getting organized, make you less error-prone and forgetful, and free up some internal computational resources, among other things.
What Does Using Space Well Mean?
It means consciously manipulating the arrangement, visibility, prominence, etc. of objects in your environment to change how they affect cognition (yours or other people's). The Intelligent Use of Space (Kirsh, "The Intelligent Use of Space", 1995) is a great place to start if you're skeptical that there is anything here worth considering. It's my primary source for this post because it is thorough but not overly technical, contains lots of clear examples, and many of the related papers I read were about deeper theoretical issues.
The abstract of the paper reads:
How we manage the spatial arrangement of items around us is not an afterthought: it is an integral part of the way we think, plan, and behave. The proposed classification has three main categories: spatial arrangements that simplify choice; spatial arrangements that simplify perception; and spatial dynamics that simplify internal computation. The data for such a classification is drawn from videos of cooking, assembly and packing, everyday observations in supermarkets, workshops and playrooms, and experimental studies of subjects playing Tetris, the computer game. This study, therefore, focuses on interactive processes in the medium and short term: on how agents set up their workplace for particular tasks, and how they continuously manage that workplace.
The 'three main categories' of simplifying choice, perception, and internal computation can be further subdivided:
simplifying choice
reducing or emphasizing options.
creating the potential for useful new choices.
simplifying perception
clustering like objects.
marking an object.
enhancing perceptual ability.
simplifying internal computation
doing more outside of your head.
These sub-categories are easier to picture and thus more useful when trying to apply the concept of using space correctly, and I've provided more illustrations below. It's worth pointing out that (Kirsh, "The Intelligent Use of Space", 1995) only considered the behavior of experts. Perhaps effective space management partially explains experts' ability to do more of their processing offline and without much conscious planning. An obvious follow-up would be to examine how novices utilize space and look for discrepancies.
What Does Using Space Well Look Like?
The paper walks the reader through a variety of examples of good utilization of space. Consider an expert cook going through the process of making a salad with many different ingredients, and ask how you would accomplish the same task differently:
...one subject we videotaped, cut each vegetable into thin slices and laid them out in tidy rows. There was a row of tomatoes, of mushrooms, and of red peppers, each of different length...To understand why lining up the ingredients in well ordered, neatly separated rows is clever, requires understanding a fact about human psychophysics: estimation of length is easier and more reliable than estimation of area or volume. By using length to encode number she created a cue or signal in the world which she could accurately track. Laying out slices in lines allows more precise judgment of the property relative number remaining than clustering the slices into groups, or piling them up into heaps. Hence because of the way the human perceptual system works, lining up the slices creates an observable property that facilitates execution.
Here, the cook used clustering and clever arrangement to make better use of her eyes and to reduce the load on her working memory, techniques I use myself in my day job. As of this writing (2013) I'm teaching English in Korea. I have a desk, a bunch of books, pencils, erasers, the works. All the folders are together, the books are separated by level, and all ungraded homework is kept in its own place. At the start of the work day I take out all the books and folders I'll need for that day and arrange them in the same order as my classes. When I get done with a class the book goes back on the day's pile but rotated 90 degrees so that I can tell it's been used. When I'm totally done with a book and I've entered homework scores and such, it goes back in the main book stack where all my books are. I can tell at a glance which classes I've had, which ones I'll have, what order I'm in, which classes are finished but unprocessed, and which ones are finished and processed. Cthulhu only knows how much time I save and how many errors I prevent all by utilizing space well.
These examples show how space can help you keep track of temporal order and make quick, accurate estimates, but it may not be clear how space can simplify choice. Recall that simplifying choice usually breaks down into either taking some choices away or making good choices more obvious. Taking choices away may sound like a bad thing, but each choice requires you to spend time evaluating options, and if you are juggling many different tasks the chance of making the wrong choice goes up. Similarly, looking for good options soaks up time, unless you can find a way to make yourself trip over them.
An example of removing bad choices is factory workers placing a rag on hot pipes so they know not to touch them (Kirsh, "The Intelligent Use of Space", 1995). And here is how some carpenters structure their work space so as to make good uses of odds and ends easier to see:
In the course of making a piece of furniture one periodically tidies up. But not completely. Small pieces of wood are pushed into a corner or left about; tools, screw drivers and mallets are kept nearby. The reason most often reported is that 'they come in handy'. Scraps of wood can serve to protect surfaces from marring when clamped, hammered or put under pressure. They can elevate a piece when being lacquered to prevent sticking. The list goes on.
By symbolically marking a dangerous object the workers are shutting down the class of actions which involves touching the pipe. It is all too easy in the course of juggling multiple aspects of a task to forget something like this and injure yourself. The strategically placed and obvious visual marker means that the environment keeps track of the danger for you. Likewise poisonous substances have clear warning labels and are kept away from anything you might eat; both precautions count as good use of space.
My copy of Steven Johnson's Where Good Ideas Come From is on another continent, but the carpenter example reminded me of his recommendation to keep messy notebooks. Doing so makes it more likely you'll see unusual and interesting connections between things you're thinking about. He goes so far as to use a tool called DevonThink which speeds this process up for him.
And while I'm at it, this also points to one advantage of having physical books over PDFs. My books take up space and are easier to see than their equivalent 1's and 0's on a hard drive, so I'm always reminded of what I have left to read. More than once I've gone on a useful tangent because the book title or cover image caught my attention, and more than one interesting conversation got started when a visitor was looking over my book collection. Scanning the shelves at a good university library is even better, kind of like 17th-century StumbleUpon, and English-language libraries are something I've sorely missed while I've been in Asia.
All this usefulness derives from the spatial properties and arrangement of books, and I have no idea how it can be replicated with the Kindle.
Specific Recommendations
You can see from the list of examples I've provided that there are a billion ways of incorporating these insights into work, life, and recreation. By discussing the concept I hope to have drawn your attention to the ways in which space is a resource, and I suspect just doing this is enough to get a lot of people to see how they can improve their use of space. Here are some more ideas, in no particular order:
-I put my alarm clock far enough away from my bed that I have to actually get up to turn it off. This is so amazingly effective at ensuring I get up in the morning that I often hate my previous night's self. Most of the time I can't go back to sleep even when I try.
-There's reason to suspect that a few extra monitors or a bigger display will make your life easier [Thanks Qiaochu_Yuan].
-When doing research for an article like this one, open up all the tabs you'll need for the project in a separate window and close each tab as you're done with it. You'll be less distracted by something irrelevant and you won't have to remember what you did or didn't read.
-Having a separate space to do something seems to greatly increase the chances I'll get it done. I tried not going to the gym for a while and just doing push ups in my house, managing to keep that up for all of a week or so. Recently, I switched gyms, and despite now having to take a bus all the way across town I make it to the gym 3-5 times a week, pretty much without fail. If your studying/hacking/meditation isn't going well, try going somewhere which exists only to give people a place to do that thing.
-Put whatever you can't afford to forget when you leave the house right by the door.
-If something is really distracting you, completely remove it from the environment temporarily. During one particularly strenuous finals week in college I not only turned off the xbox, I completely unplugged it and put it in a drawer. Problem. Solved.
-Alternatively, anything you're wanting to do more of should be out in the open. Put your guitar stand or chess board or whatever where you're going to see it frequently, and you'll engage with it more often. This doubles as a signal to other people, giving you an opportunity to manage their impression of you, learn more about them, and identify those with similar interests to yours.
-Make use of complementary strategies (Kirsh, "Complementary Strategies", 1995). If you're having trouble comprehending something, make a diagram, or write a list. The linked paper describes a simple pilot study which involved two groups tasked with counting coins, one which could use their hands and one which could not. The 'no hands' group was more likely to make errors and to take longer to complete the task. Granted, this was a pilot study with sample size = 5, and the difference wasn't that stark. But it's worth thinking about next time you're stuck on a problem.
-Complementary strategies can also include things you do with your body, which after all is just space you wear with you everywhere. Talk out loud to yourself if you're alone, give a mock presentation in which you summarize a position you're trying to understand, keep track of arguments and counterarguments with your fingers. I've always found the combination of explaining something out loud to an imaginary person while walking or pacing to be especially potent. Some of my best ideas come to me while I'm hiking.
-Try some of these embodied cognition hacks.
Summary and Conclusion
Space is a resource which, like all others, can be used effectively or not. When used effectively, it acts to simplify choices, simplify perception, and simplify internal computation. I've provided many examples of good space usage from all sorts of real-life domains in the hopes that you can apply some of these insights to live and work more effectively.
Further Reading
[In the original post these references contained no links. Sincere thanks to user Pablo_Stafforini for tracking them down]
Kirsh, D. (1995) The Intelligent Use of Space
Kirsh, D. (1999) Distributed Cognition, Coordination and Environment Design
Kirsh, D. (1998) Adaptive Rooms, Virtual Collaboration, and Cognitive Workflow
Kirsh, D. (1996) Adapting the Environment Instead of Oneself
Kirsh, D. (1995) Complementary Strategies: Why we use our hands when we think
X-Risk Roll Call
I'm working on a substantial research piece concerned with x-risk, and a sub-task of that involves compiling a list of important people in the field along with a brief summary of their education and relevant links. I realized that such a list might be a useful bit of meta-scholarship on its own, so I'm posting an incomplete version of it here in case anyone thinks there are people I should add. I haven't tracked down all the cv's and personal websites yet but I'd like to get the feedback ball rolling. After the LW crowd has given me any criticisms it thinks are relevant, I'll polish the list up.
The focus is on researchers in x-risk and related fields, so I'm not including, say, every machine intelligence researcher, just the ones who, as far as I can tell, show an awareness of the possible existential impact of their work. In practice this means those who are affiliated with x-risk reduction groups like the Future of Humanity Institute or MIRI, or ones who've specifically written on x-risk. No, that's not quite fair, but I needed some heuristic for narrowing down the list, and my mind is open if anyone has a better idea.
And yes, this is mostly information that's available with a little Googling (though a few people were hard to track down). But this list, when completed, will allow any interested person to quickly see the educational pathways taken by a large number of x-risk researchers. I'm compiling this information as opposed to, say, current position or research interests because the former is more relevant to the bigger project I'm working on, the latter is more likely to change, and besides Googling is easy if you're only interested in a handful of people. But if there is demand for a more thorough and comprehensive document, I could also put that together.
I've erred on the side of inclusion, which means I included people even if they were interns or associates as opposed to primary researchers. Of course I intend to finish this on my own, but if anyone just wants to help, let me know.
Who did I miss?
Associated with the Future of Humanity Institute and the Machine Intelligence Research Institute:
Eliezer Yudkowsky
Background:
extremely high mathematical talent with a strong philosophical bent.
Robin Hanson
Background:
BS physics (University of California, Irvine)
MS in physics/philosophy of science (University of Chicago)
PhD in Social Science (California Institute of Technology).
Nick Bostrom
Background:
BA in philosophy, mathematics, mathematical logic, and artificial intelligence (University of Gothenburg)
MA in philosophy, physics (University of Stockholm)
MSc computational neuroscience (King's College, London)
PhD in philosophy (London School of Economics)
Luke Muehlhauser
Background:
studied psychology (University of Minnesota)
Stuart Armstrong
Background:
PhD in mathematics (Oxford)
Anders Sandberg
Background:
MS in computer science (Stockholm University)
PhD in computational neuroscience (Stockholm University)
Toby Ord
Background:
Bachelor's degrees in computer science, mathematics, and philosophy (University of Melbourne)
PhD in philosophy (Balliol College & Christ Church, University of Oxford)
Daniel Dewey
Background:
BS in Computer Science, Philosophy (Carnegie Mellon University)
Ben Goertzel
Background:
BA in Mathematics (Simon's Rock College)
PhD in Mathematics (Temple University)
Carl Shulman
Background:
BA in philosophy (Harvard)
J.D. (New York University School of Law)
Anna Salamon
Background:
Bachelor's in Mathematics (University of California Santa Barbara)
Nick Beckstead
Background:
BA in philosophy and mathematics (University of Minnesota)
PhD philosophy (Rutgers)
Carl Frey
Background:
M.Sc. in Business & Economics
PhD in Economics (Technische Universität Berlin)
Milan Ćirković
Background:
BS in theoretical physics (University of Belgrade)
MS in Earth and Space Sciences (State University of New York, Stony Brook)
PhD in physics (State University of New York, Stony Brook)
Guy Kahane
Background:
Bachelors in philosophy (Oxford)
PhD in philosophy (Oxford)
Vincent Müller
Background:
studied philosophy with cognitive science, linguistics and history (Marburg, Hamburg, London, Oxford)
Eric Drexler
Background:
BS in interdisciplinary science (MIT)
MS in Astro/Aerospace engineering (MIT)
PhD (MIT)
Seán Ó hÉigeartaigh
Background:
B.A. Human Genetics (Trinity College, Dublin)
PhD in molecular genetics (Trinity College, Dublin)
Louie Helm
Background:
MS in Computer Science (University of Texas, Austin)
Malo Bourgon
Background:
MS in engineering (University of Guelph, Ontario)
Alex Altair
Background:
studied physics and mathematics (Maine School of Science and Mathematics)
Mihaly Barasz
Background:
MS in Mathematics (Eötvös Loránd University, Budapest)
Paul Christiano
Background:
Bachelors in Mathematics (MIT)
Benja Fallenstein
Background:
BSc in mathematics (University of Vienna)
working on PhD in mathematics (Bristol University, U.K.)
Joshua Fox
Background:
BA mathematics (Brandeis)
PhD (Harvard)
Anja Heinisch
Background:
MS, major in math, minor in computer science (University of Braunschweig, Germany)
Marcello Herreshoff
Background:
BA in mathematics (Stanford)
High performance in mathematics competitions
Bill Hibbard
Background:
BA in mathematics (University of Wisconsin, Madison)
MS in computer science (University of Wisconsin, Madison)
PhD in computer science (University of Wisconsin, Madison)
Patrick LaVictoire
Background:
AB in mathematics (University of Chicago)
PhD in mathematics (University of California, Berkeley)
Vladimir Nesov
Background:
MS in applied mathematics and physics (Moscow Institute of Physics and Technology)
Steve Rayhawk
Background:
degree in mathematics (UC Santa Barbara College of Creative Studies)
Nisan Stiennon
Background:
BS in mathematics and physics (University of Michigan)
PhD in mathematics (Stanford)
Kaj Sotala
Background:
BA in Cognitive Science with a minor in Computer Science (University of Helsinki)
working on MSc in Computer Science, minor in Mathematics (University of Helsinki)
James Miller
Background:
BA (Wesleyan University)
MA in economics (Yale University)
J.D. (Stanford Law School)
PhD in economics (University of Chicago)
Qiaochu Yuan
Background:
B.Sc. in mathematics (MIT)
Michael Vassar
Background:
B.S. (Penn State)
M.B.A. (Drexel University)
Associated with the Global Catastrophic Risk Institute:
Seth Baum
Background:
BS in applied mathematics and optics (University of Rochester)
MS in electrical engineering (Northeastern University)
PhD in Geography (Pennsylvania State University)
Tony Barrett
Background:
BS in chemical engineering (University of California)
PhD in engineering and public policy (Carnegie Mellon University)
Grant Wilson
Background:
BA in environmental policy (Western Washington University)
J.D. (Lewis & Clark Law School)
U. Tuncay Alparslan
Background:
BS in industrial engineering (University of Ankara, Turkey)
MS in operations research (Cornell)
PhD in operations research (Cornell)
Robert de Neufville
Background:
AB in government (Harvard)
MS in political science (University of California, Berkeley)
Mark Fusco
Background:
BA in religious studies and English literature (University of Toronto)
M.A.R in philosophical theology (Yale)
S.T.L in moral theology (Pontifical Lateran University)
Jacob Haqq-Misra
Background:
B.S. degrees in Astrophysics and Computer Science (University of Minnesota)
M.S. in Meteorology (Pennsylvania State University)
Ph.D. in Meteorology & Astrobiology (Pennsylvania State University)
Arden Rowell
Background:
B.A. in Anthropology/Archaeology (University of Washington)
J.D. (University of Chicago Law School)
Jianhua Xu
Background:
B.S. degree in Chemical Engineering and English (Dalian University of Technology)
M.S. in Environmental Science (Peking University)
Ph.D. in Engineering and Public Policy (Carnegie Mellon University)
Kaitlin Butler
Background:
B.A. in Sociology (Vassar College)
M.A. in Climate and Society (Columbia University)
Tim Maher
Background:
B.S. in Astrophysics (University of Missouri, St. Louis)
Kelly Hostetler
Background:
B.S. in Political Science (Columbia University)
Matt Moretto
Background:
B.A. in History (Columbia University)
Associated with the Center for Applied Rationality:
Julia Galef
Background:
Bachelor's in Statistics (Columbia)
Michael Smith
Background:
Master's in Mathematics (University of Oregon)
PhD in Mathematics and Science Education (University of California, San Diego)
Andrew Critch
Background:
BSc in Mathematics
PhD in Mathematics (University of California, Berkeley)
Yan Zhang
Background:
PhD in Mathematics (MIT)
Leah Libresco
Background:
B.A. in Political Science (Yale)
Dan Keys
Background:
Bachelor's in Mathematics and Statistics (Swarthmore College)
Master's in Social Psychology (Cornell University)
Associated with the Skoll Global Threat Fund:
Larry Brilliant
Background:
Undergraduate degree in philosophy
MD (Wayne Medical School)
Master's in Public Health
Jane Bloch
Background:
B.A. in Political Science (University of Washington)
Scott Field
Background:
M.A in Political Science (University of California, Berkeley)
M.A. in International & Area Studies (University of California, Berkeley)
Ph.D. in Behavioral Ecology (University of Adelaide)
David Kroodsma
Background:
B.S. in Physics (Stanford)
M.S. in Earth Systems (Stanford)
Sylvia Lee
Background:
Bachelor’s in Civil Engineering (McGill University)
Master's in Environmental Engineering (MIT)
Bruce Lowry
Background:
B.A. in International Relations (Pomona College)
M.A. in International Affairs (Johns Hopkins School of Advanced International Studies)
Amy Luers
Background:
B.S. in environmental resources engineering (Humboldt State University)
M.S. in environmental resources engineering (Humboldt State University)
M.S. in international policy studies (Stanford)
Ph.D in environmental science (Stanford)
Annie Maxwell
Background:
B.A. in English, Political Science (University of Michigan)
M.A. in public policy (University of Michigan)
Bessma Mourad
Background:
B.A. in Environmental Studies (University of California, Santa Cruz)
M.S. (energy and resources group, University of California, Berkeley)
Jennifer Olsen
Background:
Bachelor’s in Biomathematics (Rutgers)
Master’s in Public Health (George Washington University)
Certificate in Weapons of Mass Destruction (Uniformed Services University of the Health Sciences)
Ph.D (University of North Carolina, Chapel Hill)
Mark Smolinski
Background:
B.S. (University of Michigan, Ann Arbor)
M.D. (University of Michigan, Ann Arbor)
Master’s in Public Health (University of Arizona)
Lindsay Steele
Background:
B.A. in economics and Spanish (University of California, Santa Barbara)
Misc
Shane Legg
Background:
MSc (University of Auckland)
A Viable Alternative to Typing
I'm thinking about writing a more substantive post about how humans work and how we can work better, a little like this one. As is common with these sorts of things, once I started to do research and pull on various threads, it turned out that the field was pretty deep and would require time to understand. But in the meantime, I just thought I would link to this video of someone programming using only their voice.
As I suffer from symptoms of carpal tunnel syndrome, this is of particular interest to me. Once I watched it, I decided to start looking at different voice recognition software so that I could still get some work done while typing less. I'm happy to say that even the default speech recognition software that comes with Windows is quite capable and accurate. I dictated almost this entire post using that software.
As far as I can tell, Dragon Naturally Speaking is the gold standard in voice recognition software. It does come with a pretty hefty price tag, but it may be worth it if you have serious repetitive stress injuries, or as a preventative measure if you're someone who spends a lot of time at their computer. And if that doesn't work, chances are good your computer has adequate software pre-installed.
Two Weeks of Meditation Can Reduce Mind Wandering and Improve Mental Performance
There are any number of reasons why the Less Wrong crowd might be interested in mindfulness meditation. Cultivating an ability to observe thoughts without being swept away in them could help in noticing when you're confused, looking into the dark, and, if you are skilled enough, actually changing your mind. I've been on a couple of retreats myself, and I value meditation because it's a useful technique with a lot of field testing that can be studied free of the religious context it generally comes packaged in. The results have been positive -- I've learned what a mess my mind really is and my metacognitive awareness has improved noticeably.
Recent research suggests that we can add improved cognitive functioning to the list (Mrazek et al., 2013).
There is no shortage of researchers and individuals interested in better thinking, and perhaps the most effective way to improve it is to "target a cognitive process underlying performance in a variety of contexts". A great example of such a process is "the ability to attend to a task without distraction", as unrelated thoughts compete with the job at hand for limited working memory. Based on this, it makes sense to hypothesize that, if mindfulness training can reduce mind-wandering and distractedness, it ought to boost mental performance.
Psychologists at the University of California, Santa Barbara examined this hypothesis using a test of reading comprehension and a test of working memory capacity. Forty-eight subjects, all undergraduates, were given two tasks: a modified version of the GRE verbal section, and a test of working memory called the operation span task. The verbal section simply had all the vocabulary questions removed, while the operation span task alternates something that must be memorized (like a letter) with something irrelevant (like an equation which must be evaluated as true or false). If you can hold a longer string of memorized letters in your mind than someone else while also accurately evaluating the equations, then you have a better working memory.
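To make the scoring concrete, here is a minimal Python sketch of how an operation span task might be scored. The function name and the all-or-nothing scoring rule are my own illustrative assumptions, not necessarily the exact procedure the study used:

```python
# A sketch of operation-span scoring: a trial counts toward your span only
# if the distractor equations were judged correctly AND the letters were
# recalled in their original order. (Assumed rule, for illustration.)

def operation_span_score(trials):
    """Each trial is (letters_shown, equations_all_correct, letters_recalled).
    Returns the length of the longest perfectly recalled trial."""
    best = 0
    for letters, equations_ok, recalled in trials:
        if equations_ok and recalled == letters:
            best = max(best, len(letters))
    return best

trials = [
    (["F", "K", "Q"], True, ["F", "K", "Q"]),            # perfect 3-letter set
    (["B", "T", "L", "N"], True, ["B", "T", "N", "L"]),  # recall order error
]
print(operation_span_score(trials))  # 3
```

A mind that wanders during the distractor equations is more likely to produce order errors like the one in the second trial, which is why the task is thought to be sensitive to attention.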
Importantly, during these tasks a couple of different techniques were used to assess mind-wandering, including asking subjects to assess themselves after the fact and asking them semi-randomly during the task.
Then the subjects were divided into a group which attended a two-week class on nutrition and a group which attended a two-week class on mindfulness meditation. Meditation instruction was pretty straightforward:
"Each class included 10 to 20 min of mindfulness exercises requiring focused attention to some aspect of sensory experience (e.g., sensations of breathing, tastes of a piece of fruit, or sounds of an audio recording)...Classes focused on (a) sitting in an upright posture with legs crossed and gaze lowered, (b) distinguishing between naturally arising thoughts and elaborated thinking, (c) minimizing the distracting quality of past and future concerns by reframing them as mental projections occurring in the present, (d) using the breath as an anchor for attention during meditation, (e) repeatedly counting up to 21 consecutive exhalations, and (f) allowing the mind to rest naturally rather than trying to suppress the occurrence of thoughts.
Two weeks later, the groups were tested again, and it was found that:
relative to nutrition training, which did not cause changes in performance or mind wandering, the mindfulness training led to an enhancement of performance that was mediated by reduced mind wandering among participants who had been prone to mind wandering at pretesting.
I couldn't help but wonder how much of a positive effect someone could get without actually doing the meditation. An interesting additional experiment would have been to explain (b) and (c) (in the first block quote) to participants, ask them semi-randomly during a task and again afterward how much their minds had wandered, and then test them again two weeks later. Is noticing the problem enough to get a partial solution, or does flexing your attention add something that you can't get any other way?
This is good news for those of us who would like to get the most out of our brains in an age before really high-octane cognitive enhancements are available.
Being Foreign and Being Sane
I've been reading Less Wrong for a while now, and have recently been casting about for suitable topics to write on. I've decided to break the ice now with an essay on what living and working abroad in Korea has taught me which carries over into studying rationality. While more personal than technical, this inaugural post contains generalizable lessons that I think will be of interest to anyone trying to improve their thinking.
You may be skeptical, so let me briefly make my case that traveling offers something to the aspiring rationalist. Many have written about the benefits of traveling, but for our purposes here is what matters:
Being abroad can make certain important concepts in rationality a part of you in ways studying can't match.
It's easy to read -- and to really believe -- that the map is not the territory, say, without it changing how you actually act. Information often gathers dust on the shelves in your frontal lobe without ever making it into the largely unconscious bits of your brain where so much of your deciding takes place.
With this in mind, travel can be seen as one of a class of efforts to learn rationality without directly studying the science, alongside activities like playing Go or poker. I don't know for sure, but such efforts could hold the promise of teaching us to incorporate insights into emotional attachment, statistical probabilities, strategy, maximizing utility, and the like -- things we've known about for a long time -- into our instincts, deep down where they can actually change how we behave.
I say all this because what living in a foreign country has given me is not so much a software update which has remade me into a paragon of rationality, but rather a hearty appreciation for certain facts which might make my thought-improvement efforts more fruitful. No doubt many of you internalized all of this long ago, and for you I won't be saying anything very profound.
Nevertheless, here is what I've learned:
1) You are vastly more complicated than you think you are.
The proposal for the Dartmouth conference of 1956, considered by some to be the birth of the field of AI research, had this to say:
An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
Not to deny that considerable progress has been made in the past half century, but I think we can all agree that this thinking was just a tad bit optimistic.
I'm not an expert on AI research history, but it seems reasonable to assume that these proto-AI researchers perhaps didn't appreciate how complex humans are. You look at a triangle and you see a triangle; you reach for a coffee cup and grasp it; you start speaking a sentence and finish it with only the occasional pause. What could be simpler? We all forget our car keys sometimes, and some of us know a little bit about bizarre neurological problems like aphasia, but still. In general we function so well that it never occurs to us that the things we do might actually be difficult to implement.
The problem runs deeper than this, though, because there don't seem to be many techniques for elucidating this complexity from the inside. If there were, neuroscience might've been discovered a millennium ago in East Asia by Buddhist adepts. Instead, our efforts at aiming the introspective flashlight on the machinery of our minds are thwarted because that machinery sits entirely outside our conscious awareness.
Well, if you ever feel like you're not fully appreciating the intricacies of your wetware, sit in a coffee shop or bus stop in a foreign country while eavesdropping on people whose effortless bantering could not be more inscrutable, and you'll have it impressed upon you. Alternatively, try to explain to someone with little-to-no English knowledge what something like "simple" or "almost all of" means. Even without a bit of neuroscience training you'll start to get a grasp on the vastness of the gears and levers that make every utterance possible.
This insight, at least for me, seems to creep into the rest of your thinking life, though in my case it's hard to tell because I've always pondered things like this. It isn't a far leap from here to see the potential value of research into topics like Friendly AI. If human language and vision are complicated, what are the chances that human value systems are simple? If you didn't manage to notice your retinal blind spot or the mechanisms by which you conjugate verbs in your native tongue, what are the chances that you aren't at least a little mistaken about your true goals and desires and how best to achieve them? Exactly. So maybe it's time to start reading those sequences, eh?
2) Don't be bewitched by words
Obviously if you go to a country where English or a different language you're already fluent in is spoken, this won't apply as much. But my experience has shown me that living in and learning a foreign language bestows several valuable insights on those intrepid enough to stick with it. Simply put, a sufficiently reflective and intelligent person could independently figure out about half of the sequence A Human's Guide to Words just by being in a foreign country and thinking about the experience.
First you'd have to go through the shocking revelation that so much of what you say is a fairly arbitrary set of language conventions, and then you'd begin to relearn how to communicate. You'd come to realize that words are mental paintbrush handles with which you guide the attention of other humans to certain clusters in thingspace, and that they are often disguised queries with hidden connotations. This will be triply reinforced by the fact that you'd often have to resort to empiricism to get your point across, accompanying the word 'red' or 'chair' by actually pointing to red things or chairs. If you're spending time with natives, the inverse will happen, and they will have to point to the parts of the world that words represent in order to communicate. You'll have a head start in replacing the symbol with the substance because you'll be playing taboo with nearly every word you know. Since you'll be doing this with low-level language, it'll require elbow grease to port this into your native tongue when discussing topics like free will. But if you can avoid slipping into cached thoughts, the training you received as a foreigner will likely prove useful.
Beyond this, however, is the tantalizing possibility that we may be more rational when we think in a foreign language, perhaps because it increases reliance on the slow, analytic System 2 at the expense of the rapid-fire, emotional System 1. Psychologists from the University of Chicago tested this idea using English speakers proficient in Japanese, Korean speakers proficient in English, and English speakers proficient in French (Keysar, Hayakawa, & An, 2012) [NOTE: I'm aware this study has been mentioned before on Less Wrong, but I believe this is the first actual discussion of the experiment and its methodology]. In the first few experiments, participants were randomly sorted into two groups, one of which was given a test in their native language and one of which was given a test in the foreign language. These tests were designed to elicit a well-known tendency for humans to differ in their risk preference depending on how the situation is framed.
Here's how it works: imagine that you turn on the news today to find out that an exotic new disease is ravaging Asia, with an expected final death toll of 600,000. The governments of the world decided that the best solution would be to design two separate drugs, and then to randomly select one reader of Less Wrong to decide between the two. Your number came up, and now you have a choice to make.
Drug A is guaranteed to save 200,000 people. Drug B has a one-third chance of saving everyone and a two-thirds chance of saving no one.
This is called the gain-framing, because what's emphasized is how many lives you'll save, or gain. When framed this way, people often prefer to administer Drug A. But studies find that if the same problem is loss-framed -- that is, with Drug A it is guaranteed that 400,000 people die, while with Drug B there is a one-third chance that no one will die and a two-thirds chance that everyone will -- far fewer people prefer Drug A, even though the results of using the drugs are identical.
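Notice that the two framings describe exactly the same gamble. A quick Python calculation, using the numbers from the example above, makes this explicit:

```python
# Expected lives saved under each framing of the drug choice.
total = 600_000

# Gain frame: Drug A saves 200,000 for sure; Drug B saves everyone with
# probability 1/3 and no one with probability 2/3.
ev_a_gain = 200_000
ev_b_gain = (1 / 3) * total + (2 / 3) * 0

# Loss frame: with Drug A, 400,000 die for sure; with Drug B, no one dies
# with probability 1/3 and everyone dies with probability 2/3.
# Converting deaths back into lives saved:
ev_a_loss = total - 400_000
ev_b_loss = (1 / 3) * (total - 0) + (2 / 3) * (total - total)

print(ev_a_gain, ev_b_gain)  # 200000 200000.0
print(ev_a_loss, ev_b_loss)  # 200000 200000.0
```

Every option has an expected 200,000 lives saved; only the description changes, which is why the shift in preferences counts as a bias rather than a defensible difference in taste.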
Besides being sorted by language, participants were also randomly assigned either the gain or the loss framing. Participants tested in their native language showed the predicted bias, but when tested in the foreign language, roughly equal numbers of people preferred Drug A and Drug B.
An additional study found the same effect of foreign language on reasoning, but using a different bias. People tend to be loss averse, preferring to avoid a loss more than they prefer to gain an identical (or slightly better) amount. This means that people will often turn down an even bet which holds the possibility of gaining $12 and the possibility of losing $10, even though this bet has positive expected value. As with the other studies, Korean speakers proficient in English more often showed this tendency when reasoning in their native language than when reasoning in a foreign one, especially for larger bets.
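The positive expected value of that bet is a one-line calculation, which is worth making explicit:

```python
# Expected value of an even-odds bet: win $12 or lose $10,
# each with probability one half.
p_win = 0.5
ev = p_win * 12 + (1 - p_win) * (-10)
print(ev)  # 1.0
```

A bettor who declines is giving up an expected dollar per bet; loss aversion makes the possible $10 loss loom larger than the slightly bigger $12 gain.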
There are a million reasons to learn a foreign language, but it'd be a very costly way to improve rationality. With that said, for anyone willing to invest the time and effort, better thinking could be the outcome. But even if you don't go to the trouble, simply trying to communicate with people who don't speak the same language as you will teach you a lot about how cognition and communication work.
3) The Zen of the Unfamiliar
Living in another culture can make you aware of so many things that you previously failed to notice at all. I remember not long after I got to Korea, I was in my kitchen and noticed that my sink was different from any of the ones I'd seen back in the States. It was a single open pit sunk into the counter, with a strange spinning mechanism where the drain usually is. After investigating for a while, I realized two things: one, the spinning mechanism was actually a multi-part contraption meant to catch food before it went down the drain (no idea why it could spin), and two, I'd just spent 100 times longer thinking about sinks than I had in the rest of my life combined.
To successfully live in a foreign country you'll have to master the art of noticing things fairly quickly. You'll start to watch how people dress, how they talk, how close they stand to each other, the relative frequency of eye contact, how they chew their food, what order people get served drinks. You'll learn to read the environment to learn where to stand in line, where to catch the bus, where and how to buy things, which door is the exit and which one the entrance, whether or not certain places are likely to be safe, etc.
You'll accomplish most of this by gathering evidence, forming hypotheses, using induction and deduction, and updating on new evidence. The things you've been reading about on Less Wrong will be put to use in finding food and shelter, the tools of rationality will be your compass in a world where you can't read what's written on signs or buildings and most people can't understand your questions. So there's a box on your wall with three buttons, two dials, a bunch of lights, and you're pretty sure it can make hot water come out of the shower? Not a word of English anywhere on it, you say? Well then you'll have to change one variable at a time and take note of the results, like any good scientist would.
Being immersed in a set of shared cultural and linguistic norms that you don't understand makes almost every aspect of your life an experiment. It's exhausting, and one of the most informative experiences I've ever had. On an emotional level, it will teach you to be more at ease with partial understanding, frustration, and confusion. With your comfort zone an ocean away, you'll either persevere and think on your feet, or you'll end up sleeping in the rain.
__
As with learning a foreign language, there are many reasons to travel abroad and experience another culture. And of course, a plane ticket alone is not enough to make you a better thinker. But if you know what to look for and are actively seeking to grow from the experience, I can attest that being foreign for a little while is one way to become a bit more sane.