
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

How to write an academic paper, according to me

27 Stuart_Armstrong 15 October 2014 12:29PM

Disclaimer: this is entirely a personal viewpoint, formed by a few years of publication in a few academic fields. EDIT: Many of the comments are very worth reading as well.

Having recently finished a very rushed submission (it turns out you can write a novel paper in a day and a half, if you're willing to sacrifice quality and sanity), I've been thinking about how academic papers are structured - and more importantly, how they should be structured.

It seems to me that the key is to consider the audience. Or, more precisely, to consider the audiences - because different people will read your paper to different depths, and you should cater to all of them. An example of this is the "inverted pyramid" structure of many news articles - start with the salient facts, then the most important details, then fill in the rest. The idea is to ensure that a reader who stops reading at any point (which happens often) will nevertheless have got the most complete impression that it was possible to convey in the part that they did read.

So, with that model in mind, let's consider the different levels of audience for a general academic paper (of course, some papers just can't fit into this mould, but many can):


continue reading »

What false beliefs have you held and why were you wrong?

26 Punoxysm 16 October 2014 05:58PM

What is something you used to believe - preferably something concrete, with direct or implied predictions - that you now know was dead wrong? Was your belief rational given what you knew and could know back then, or was it irrational, and why?


Edit: I feel like some of these are getting a bit glib and political. Please try to explain what false assumptions or biases were underlying your beliefs - be introspective - this is LW after all.

Fixing Moral Hazards In Business Science

24 DavidLS 18 October 2014 09:10PM

I'm a LW reader, two time CFAR alumnus, and rationalist entrepreneur.

Today I want to talk about something insidious: marketing studies.

Until recently I considered studies of this nature merely unfortunate, funny even. However, my recent experiences have caused me to realize the situation is much more serious than this. Product studies are the public's most frequent interaction with science. By tolerating (or worse, expecting) shitty science in commerce, we are undermining the public's perception of science as a whole.

The good news is this appears fixable. I think we can change how startups perform their studies immediately, and use that success to progressively expand.

Product studies have three features that break the assumptions of traditional science: (1) few if any follow up studies will be performed, (2) the scientists are in a position of moral hazard, and (3) the corporation seeking the study is in a position of moral hazard (for example, the filing cabinet bias becomes more of a "filing cabinet exploit" if you have low morals and the budget to perform 20 studies).
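To put a number on the "filing cabinet exploit": if a product has no real effect, the chance that at least one of 20 independent studies comes back significant at p < 0.05 is already about 64%. A quick sketch (the function name and the budget of 20 studies are illustrative, not from the post):

```python
# Probability that at least one of n independent null studies reaches
# significance at level alpha purely by chance.
def prob_false_positive(n_studies: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** n_studies

if __name__ == "__main__":
    for n in (1, 5, 20):
        print(n, round(prob_false_positive(n), 3))
```

With 20 studies in the budget, publishing only the "winner" and filing the rest away gives a roughly two-in-three chance of a marketable result from a product that does nothing.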

I believe we can address points 1 and 2 directly, and overcome point 3 by appealing to greed.

Here's what I'm proposing: we create a webapp that acts as a high quality (though less flexible) alternative to a Contract Research Organization. Since it's a webapp, the cost of doing these less flexible studies will approach the cost of the raw product to be tested. For most web companies, that's $0.

If we spend the time to design the standard protocols well, it's quite plausible any studies done using this webapp will be in the top 1% in terms of scientific rigor.

With the cost low and the quality high, such a system might become the startup equivalent of "citation needed". Once we have a significant number of startups using the system, and as we add support for more experiment types, we will hopefully attract progressively larger corporations.

Is anyone interested in helping? I will personally write the webapp and pay for the security audit if we can reach quorum on the initial protocols.

Companies that have expressed interest in using such a system if we build it:

(I sent out my inquiries at 10pm yesterday, and every one of these companies got back to me by 3am. I don't believe "startups love this idea" is an overstatement.)

So the question is: how do we do this right?

Here are some initial features we should consider:

  • Data will be collected by a webapp controlled by a trusted third party, and will only be editable by study participants.
  • The results will be computed by software decided on before the data is collected.
  • Studies will be published regardless of positive or negative results.
  • Studies will have mandatory general-purpose safety questions. (web-only products likely exempt)
  • Follow up studies will be mandatory for continued use of results in advertisements.
  • All software/contracts/questions used will be open sourced (MIT) and creative commons licensed (CC BY), allowing for easier cross-product comparisons.

Any placebos used in the studies must be available for purchase as long as the results are used in advertising, allowing for trivial study replication.
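As a sketch of what "results computed by software decided on before the data is collected" could look like in practice, here is a minimal pre-registered analysis: a two-sample permutation test that would be published (and hashed) before any data arrives, so the analysis can't be adjusted after seeing the results. The function name, the fixed seed, and the example data are all illustrative assumptions, not part of the proposal:

```python
import random
import statistics

def preregistered_analysis(treatment, control, n_permutations=10_000, seed=0):
    """Two-sided permutation test on the difference in group means.

    Frozen before data collection: the seed, the statistic, and the
    number of permutations are all fixed in advance.
    """
    rng = random.Random(seed)
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # random relabeling of treatment/control
        diff = statistics.mean(pooled[:n_t]) - statistics.mean(pooled[n_t:])
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_permutations
```

Because everything is decided up front, two parties running this script on the same data must get the same p-value, which removes one avenue for the moral hazards described above.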

Significant contributors will receive:

  • Co-authorship on the published paper for the protocol.
  • (Through the paper) an Erdős number of 2.
  • The satisfaction of knowing you personally helped restore science's good name (hopefully).

I'm hoping that if a system like this catches on, we can get an "effective startups" movement going :)

So how do we do this right?

You’re Entitled to Everyone’s Opinion

24 satt 20 September 2014 03:39PM

Over the past year, I've noticed a topic where Less Wrong might have a blind spot: public opinion. Since last September I've had (or butted into) five conversations here where someone's written something which made me think, "you wouldn't be saying that if you'd looked up surveys where people were actually asked about this". The following list includes six findings I've brought up in those LW threads. All of the findings come from surveys of public opinion in the United States, though some of the results are so obvious that polls scarcely seem necessary to establish their truth.

  1. The public's view of the harms and benefits from scientific research has consistently become more pessimistic since the National Science Foundation began its surveys in 1979. (In the wake of repeated misconduct scandals, and controversies like those over vaccination, global warming, fluoridation, animal research, stem cells, and genetic modification, people consider scientists less objective and less trustworthy.)
  2. Most adults identify as neither Republican nor Democrat. (Although the public is far from apolitical, lots of people are unhappy with how politics currently works, and also recognize that their beliefs align imperfectly with the simplistic left-right axis. This dissuades them from identifying with mainstream parties.)
  3. Adults under 30 are less likely to believe that abortion should be illegal than the middle-aged. (Younger adults tend to be more socially liberal in general than their parents' generation.)
  4. In the 1960s, those under 30 were less likely than the middle-aged to think the US made a mistake in sending troops to fight in Vietnam. (The under-30s were more likely to be students and/or highly educated, and more educated people were less likely to think sending troops to Vietnam was a mistake.)
  5. The Harris Survey asked, in November 1969, "as far as their objectives are concerned, do you sympathize with the goals of the people who are demonstrating, marching, and protesting against the war in Vietnam, or do you disagree with their goals?" Most respondents aged 50+ sympathized with the protesters' goals, whereas only 28% of under-35s did. (Despite the specific wording of the question, the younger respondents worried that the protests reflected badly on their demographic, whereas older respondents were more often glad to see their own dissent voiced.)
  6. A 2002 survey found that about 90% of adult smokers agreed with the statement, "If you had to do it over again, you would not have started smoking." (While most smokers derive enjoyment from smoking, many weight smoking's negative consequences strongly enough that they'd rather not smoke; they continue smoking because of habit or addiction.)

continue reading »

Questions on Theism

23 Aiyen 08 October 2014 09:02PM

Long time lurker, but I've barely posted anything. I'd like to ask Less Wrong for help.

Reading various articles by the Rationalist Community over the years, here, on Slate Star Codex and a few other websites, I have found that nearly all of it makes sense. Wonderful sense, in fact, the kind of sense you only really find when the author is actually thinking through the implications of what they're saying, and it's been a breath of fresh air. I generally agree, and when I don't it's clear why we're differing, typically due to a dispute in priors.

Except in theism/atheism.

In my experience, when atheists make their case, they assume a universe without miracles, i.e. a universe that looks like one would expect if there was no God. Given this assumption, atheism is obviously the rational and correct stance to take. And generally, Christian apologists make the same assumption! They assert miracles in the Bible, but do not point to any accounts of contemporary supernatural activity. And given such assumptions, the only way one can make a case for Christianity is with logical fallacies, which is exactly what most apologists do. The thing is though, there are plenty of contemporary miracle accounts.

Near-death experiences. Answers to prayer that seem to violate the laws of physics. I'm comfortable with dismissing Christian claims that an event was "more than coincidence", because given how many people are praying and looking for God's hand in events, and the fact that an unanswered prayer will generally be forgotten while a seemingly-answered one will be remembered, one would expect to see "more than coincidence" in any universe with believers, whether or not there was a God. But there are a LOT of people out there claiming to have seen events that one would expect to never occur in a naturalistic universe. I even recall reading an atheist's account of his deconversion (I believe it was Luke Muehlhauser; apologies if I'm misremembering) in which he states that as a Christian, he witnessed healings he could not explain. Now, one could say that these accounts are the result of people lying, but I expect people to be rather more honest than that, and Luke is hardly going to make up evidence for the Christian God in an article promoting unbelief! One could say that "miracles" are misunderstood natural events, but there are plenty of accounts that seem pretty unlikely without Divine intervention - I've even read claims by Christians that they had seen people raised from the dead by prayer. And so I'd like to know how atheists respond to the evidence of miracles.

This isn't just idle curiosity. I am currently a Christian (or maybe an agnostic terrified of ending up on the wrong side of Pascal's Wager), and when you actually take religion seriously, it can be a HUGE drain on quality of life. I find myself being frightened of hell, feeling guilty when I do things that don't hurt anyone but are still considered sins, and feeling guilty when I try to plan out my life, wondering if I should just put my plans in God's hands. To make matters worse, I grew up in a dysfunctional, very Christian family, and my emotions seem to be convinced that being a true Christian means acting like my parents (who were terrible role models; emulating them means losing at life).

I'm aware of plenty of arguments for non-belief: Occam's Razor giving atheism as one's starting prior in the absence of strong evidence for God, the existence of many contradictory religions proving that humanity tends to generate false gods, claims in Genesis that are simply false (Man created from mud, woman from a rib, etc. have been conclusively debunked by science), commands given by God that seem horrifyingly immoral, no known reason why Christ's death would be needed for human redemption (many apologists try to explain this, but their reasoning never makes sense), no known reason why, if belief in Jesus is so important, God wouldn't make himself blatantly obvious, hell seeming like an infinite injustice, the Bible claiming that any prayer prayed in faith will be answered contrasted with the real world where this isn't the case, a study I read about in which praying for the sick didn't improve results at all (and the group that was told they were being prayed for actually had worse results!), etc. All of this, plus the fact that it seems that nearly everyone who's put real effort into their epistemology doesn't believe and moreover is very confident in their nonbelief (I am reminded of Eliezer's comment that he would be less worried about a machine that destroys the universe if the Christian God exists than one that has a one in a trillion chance of destroying us) makes me wonder if there really isn't a God, and in so realizing this, I can put down burdens that have been hurting for nearly my entire life. But the argument from miracles keeps me in faith, keeps me frightened. If there is a good argument against miracles, learning it could be life changing.

Thank you very much. I do not have words to describe how much this means to me.

Polymath-style attack on the Parliamentary Model for moral uncertainty

21 danieldewey 26 September 2014 01:51PM

Thanks to ESrogs, Stefan_Schubert, and the Effective Altruism summit for the discussion that led to this post!

This post is to test out Polymath-style collaboration on LW. The problem we've chosen to try is formalizing and analyzing Bostrom and Ord's "Parliamentary Model" for dealing with moral uncertainty.

I'll first review the Parliamentary Model, then give some of Polymath's style suggestions, and finally suggest some directions that the conversation could take.

continue reading »

2014 Less Wrong Census/Survey - Call For Critiques/Questions

18 Yvain 11 October 2014 06:39AM

It's that time of year again. Actually, a little earlier than that time of year, but I'm pushing it ahead a little to match when Ozy and I expect to have more free time to process the results.

The first draft of the 2014 Less Wrong Census/Survey is complete (see 2013 results here).

You can see the survey below if you promise not to try to take the survey because it's not done yet and this is just an example!

2014 Less Wrong Census/Survey Draft

I want two things from you.

First, please critique this draft (it's much the same as last year's). Tell me if any questions are unclear, misleading, offensive, confusing, or stupid. Tell me if the survey is so unbearably long that you would never possibly take it. Tell me if anything needs to be rephrased.

Second, I am willing to include any question you want in the Super Extra Bonus Questions section, as long as it is not offensive, super-long-and-involved, or really dumb. Please post any questions you want there. Please be specific - not "Ask something about taxes" but give the exact question you want me to ask as well as all answer choices.

Try not to add more than a few questions per person, unless you're sure yours are really interesting. Please also don't add any questions that aren't very easily sort-able by a computer program like SPSS unless you can commit to sorting the answers yourself.

I will probably post the survey to Main and officially open it for responses sometime early next week.

[Link] Animated Video - The Useful Idea of Truth (Part 1/3)

18 Joshua_Blaine 04 October 2014 11:05PM

I have taken this well-received post by Eliezer and remade the first third of it into a short, quickly paced YouTube video here:

The goals of this post are re-introducing the lessons explored in the original (for anyone not yet familiar with them), as well as asking the question of whether this format is actually suited for the lessons LessWrong tries to teach. What are your thoughts?


Solstice 2014 - Kickstarter and Megameetup

17 Raemon 10 October 2014 05:55PM


  • We're running another Winter Solstice kickstarter - this is to fund the venue, musicians, food, drink and decorations for a big event in NYC on December 20th, as well as to record more music and print a larger run of the Solstice Book of Traditions. 
  • I'd also like to raise additional money so I can focus full time for the next couple months on helping other communities run their own version of the event, tailored to meet their particular needs while still feeling like part of a cohesive, broader movement - and giving the attendees a genuinely powerful experience. 

The Beginning

Four years ago, twenty NYC rationalists gathered in a room to celebrate the Winter Solstice. We sang songs and told stories about things that seemed very important to us. The precariousness of human life. The thousands of years of labor and curiosity that led us from a dangerous stone age to the modern world. The potential to create something even better, if humanity can get our act together and survive long enough.

One of the most important ideas we honored was the importance of facing truths, even when they are uncomfortable or make us feel silly or are outright terrifying. Over the evening, we gradually extinguished candles, acknowledging harsher and harsher elements of reality.

Until we sat in absolute darkness - aware that humanity is flawed, and alone, in an unforgivingly neutral universe. 

But also aware that we sit beside people who care deeply about truth, and about our future. Aware that across the world, people are working to give humanity a bright tomorrow, and that we have the power to help. Aware that across history, people have looked impossible situations in the face, and through ingenuity and perspiration, made the impossible happen.

That seemed worth celebrating. 

The Story So Far

As it turned out, this resonated with people outside the rationality community. When we ran the event again in 2012, non-religious but non-Less Wrong folks attended the event and told me they found it very moving. In 2013, we pushed it much larger - I ran a kickstarter campaign to fund a big event in NYC.

A hundred and fifty people from various communities attended. From Less Wrong in particular, we had groups from Boston, San Francisco, North Carolina, Ottawa, and Ohio among other places. The following day was one of the largest East Coast Megameetups. 

Meanwhile, in the Bay Area, several people put together an event that gathered around 80 attendees. In Boston, Vancouver, and Leipzig, Germany, people ran smaller events. This is shaping up to take root as a legitimate holiday, celebrating human history and our potential future.

This year, we want to do that all again. I also want to dedicate more time to helping other people run their events. Getting people to start celebrating a new holiday is a tricky feat. I've learned a lot about how to go about that and want to help others run polished events that feel connecting and inspirational.

So, what's happening, and how can you help?


  • The Big Solstice itself will be Saturday, December 20th at 7:00 PM. To fund it, we're aiming to raise $7500 on kickstarter. This is enough to fund the aforementioned venue, food, drink, live musicians, record new music, and print a larger run of the Solstice Book of Traditions. It'll also pay some expenses for the Megameetup. Please consider contributing to the kickstarter.
  • If you'd like to host your own Solstice (either a large or a private one) and would like advice, please contact me at and we'll work something out.
  • There will also be Solstices (of varying sizes) run by Less Wrong / EA folk held in the Bay Area, Seattle, Boston and Leipzig. (There will probably be a larger but non-LW-centered Solstice in Los Angeles and Boston as well).
  • In NYC, there will be a Rationality and EA Megameetup running from Friday, Dec 19th through Sunday evening.
    • Friday night and Saturday morning: Arrival, Settling
    • Saturday at 2PM - 4:30PM: Unconference (20 minute talks, workshops or discussions)
    • Saturday at 7PM: Big Solstice
    • Sunday at Noon: Unconference 2
    • Sunday at 2PM: Strategic New Years Resolution Planning
    • Sunday at 3PM: Discussion of creating private ritual for individual communities
  • If you're interested in coming to the Megameetup, please fill out this form saying how many people you're bringing, whether you're interested in giving a talk, and whether you're bringing a vehicle, so we can plan adequately. (We have lots of crash space, but not infinite bedding, so bringing sleeping bags or blankets would be helpful)

Effective Altruism?


Now, at Less Wrong we like to talk about how to spend money effectively, so I should be clear about a few things. I'm raising non-trivial money for this, but this should be coming out of people's Warm Fuzzies Budgets, not their Effective Altruism budgets. This is a big, end of the year community feel-good festival. 

That said, I do think this is an especially important form of Warm Fuzzies. I've had EA-type folk come to me and tell me the Solstice inspired them to work harder, make life changes, or that it gave them an emotional booster charge to keep going even when things were hard. I hope, eventually, to have this measurable in some fashion such that I can point to it and say "yes, this was important, and EA folk should definitely consider it important." 

But I'm not especially betting on that, and there are some failure modes where the Solstice ends up cannibalizing resources that could have gone towards direct impact. So, please consider that this may be especially valuable entertainment, which pushes culture in a direction where EA ideas can go more mainstream and gives hardcore EAs a motivational boost. But I encourage you to support it with dollars that wouldn't have gone towards direct Effective Altruism.

Upcoming CFAR events: Lower-cost bay area intro workshop; EU workshops; and others

17 AnnaSalamon 02 October 2014 12:08AM

For anyone who's interested:

CFAR is trying out an experimental, lower-cost, 1.5-day introductory workshop Oct 25-26 in the bay area.  It is meant to provide an easier point of entry into our rationality training.  If you've been thinking about coming to a CFAR workshop but have had trouble setting aside 4 days and $3900, you might consider trying this out.  (Or, if you have a friend or family member in that situation, you might suggest this to them.)  It's a beta test, so no guarantees as to the outcome -- but I suspect it'll be both useful, and a lot of fun.

We are also finally making it to Europe.  We'll be running two workshops in the UK this November, both of which have both space and financial aid still available.

We're also still running our standard workshops: Jan 16-19 in Berkeley, and April 23-26 in Boston, MA.  (We're experimenting, also, with using alumni "TA's" to increase the amount of 1-on-1 informal instruction while simultaneously increasing workshop size, in an effort to scale our impact.)

Finally, we're actually running a bunch of events lately for alumni of our 4-day workshops (a weekly rationality dojo; a bimonthly colloquium; a yearly alumni reunion; and various for-alumni workshops); which is perhaps less exciting if you aren't yet an alumnus, but which I'm very excited about because it suggests that we'll have a larger community of people doing serious practice, and thereby pushing the boundaries of the art of rationality.

If anyone wishes to discuss any of these events, or CFAR's strategy as a whole, I'd be glad to talk; you can book me here.


Fighting Mosquitoes

15 ChristianKl 16 October 2014 11:53AM

According to Louie Helm, eradicating a species of mosquitoes could be done for as little as a few million dollars.

I don't have a few million dollars lying around, so I can't spend my own money to do it. On the other hand, I think that on average every German citizen would be quite willing to pay 1€ per year to rid Germany of mosquitoes that bite humans.

That makes it a problem of public action. With roughly 80 million citizens paying 1€ each, the German government should spend 80 million euros to rid Germany of mosquitoes. That's an order of magnitude higher than the numbers quoted by Louie Helm.

The same goes for basically every country or state with mosquitoes.

How could we get a government to do this without spending too much money ourselves? The straightforward way is writing a petition. We could host a website and simultaneously post a petition to every relevant parliament on earth.

How do we get attention for the petition? Facebook. People don't like mosquitoes and should be willing to sign an internet petition to get rid of them. I believe this could spread virally. The idea seems interesting enough to get journalists to write articles about it.

Bonus points:

After we have eradicated human-biting mosquitoes from our homelands, it's quite straightforward to export the technology to Africa.

Does anyone see any issues with that plan?

Contrarian LW views and their economic implications

15 Larks 08 October 2014 11:48PM

LW readers have unusual views on many subjects. Efficient Market Hypothesis notwithstanding, many of these are probably alien to most people in finance. So it's plausible they might have implications that are not yet fully integrated into current asset prices. And if you rightly believe something that most people do not believe, you should be able to make money off that.


Here's an example for a different group. Feminists believe that women are paid less than men for no good economic reason. If this is the case, feminists should invest in companies that hire many women, and short those which hire few women, to take advantage of the cheaper labour costs. And I can think of examples for groups like Socialists, Neoreactionaries, etc. - cases where their positive beliefs have strong implications for economic predictions. But I struggle to think of such ones for LessWrong, which is why I am asking you. Can you think of any unusual LW-type beliefs that have strong economic implications (say over the next 1-3 years)?
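The feminist example above translates into a standard dollar-neutral long-short portfolio: go long the companies that score highest on the belief-driven signal, short the lowest. A minimal sketch (the function name, company labels, and figures are invented for illustration):

```python
def long_short_weights(signal, top_k):
    """Equal-weight long the top_k names by signal, short the bottom top_k.

    `signal` maps company -> the belief-driven metric (here, a made-up
    share of women employees). Weights sum to zero: dollar-neutral.
    """
    ranked = sorted(signal, key=signal.get, reverse=True)
    longs, shorts = ranked[:top_k], ranked[-top_k:]
    weights = {name: 0.0 for name in signal}
    for name in longs:
        weights[name] += 1.0 / top_k
    for name in shorts:
        weights[name] -= 1.0 / top_k
    return weights

# Hypothetical share of women employees per company
signal = {"A": 0.60, "B": 0.45, "C": 0.30, "D": 0.15}
weights = long_short_weights(signal, top_k=1)
# long A, short D, flat B and C
```

If the belief is right and the market is wrong, the long leg's cheaper labour costs should show up as outperformance relative to the short leg, regardless of overall market direction.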


Wei Dai has previously commented on a similar phenomenon, but I'm interested in a wider class of phenomena.


edit: formatting

Logical uncertainty reading list

14 alex_zag_al 18 October 2014 07:16PM

This was originally part of a post I wrote on logical uncertainty, but it turned out to be post-sized itself, so I'm splitting it off.

Daniel Garber's article Old Evidence and Logical Omniscience in Bayesian Confirmation Theory. Wonderful framing of the problem - explains the relevance of logical uncertainty to the Bayesian theory of confirmation of hypotheses by evidence.

Articles on using logical uncertainty for Friendly AI theory: qmaurmann's Meditations on Löb's theorem and probabilistic logic. Squark's Overcoming the Loebian obstacle using evidence logic. And Paul Christiano, Eliezer Yudkowsky, Marcello Herreshoff, and Mihaly Barasz's Definability of Truth in Probabilistic Logic. So8res's walkthrough of that paper, and qmaurmann's notes. eli_sennesh just made a post on this: Logics for Mind-Building Should Have Computational Meaning.

Benja's post on using logical uncertainty for updateless decision theory.

cousin_it's Notes on logical priors from the MIRI workshop. Addresses a logical-uncertainty version of Counterfactual Mugging, but in the course of that has, well, notes on logical priors that are more general.

Reasoning with Limited Resources and Assigning Probabilities to Arithmetical Statements, by Haim Gaifman. Shows that you can give up on giving logically equivalent statements equal probabilities without much sacrifice of the elegance of your theory. Also, gives a beautifully written framing of the problem.

manfred's early post, and later sequence. Amazingly readable. The proposal gives up Gaifman's elegance, but actually goes as far as assigning probabilities to mathematical statements and using them, whereas Gaifman never follows through to solve an example afaik. The post or the sequence may be the quickest path to getting your hands dirty and trying this stuff out, though I don't think the proposal will end up being the right answer.

There's some literature on modeling a function as a stochastic process, which gives you probability distributions over its values. The information in these distributions comes from calculations of a few values of the function. One application is in optimizing a difficult-to-evaluate objective function: see Efficient Global Optimization of Expensive Black-Box Functions, by Donald R. Jones, Matthias Schonlau, and William J. Welch. Another is when you're doing simulations that have free parameters, and you want to make sure you try all the relevant combinations of parameter values: see Design and Analysis of Computer Experiments by Jerome Sacks, William J. Welch, Toby J. Mitchell, and Henry P. Wynn.
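The stochastic-process idea above can be made concrete with plain Gaussian-process regression: a few evaluations of an expensive function yield a posterior mean and variance at unevaluated points, which is the machinery Jones et al. build on. This is a generic illustration under standard assumptions (RBF kernel, tiny noise term), not code from the papers:

```python
import math

def _solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(x_train, y_train, x_star, length_scale=1.0, noise=1e-8):
    """Posterior mean and variance at x_star for a GP with an RBF kernel."""
    k = lambda a, b: math.exp(-0.5 * ((a - b) / length_scale) ** 2)
    n = len(x_train)
    K = [[k(x_train[i], x_train[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    k_star = [k(xi, x_star) for xi in x_train]
    alpha = _solve(K, list(y_train))              # K^-1 y
    mean = sum(ks * a for ks, a in zip(k_star, alpha))
    beta = _solve(K, k_star)                      # K^-1 k_star
    var = k(x_star, x_star) - sum(ks * b for ks, b in zip(k_star, beta))
    return mean, var
```

At an already-evaluated point the posterior variance collapses to (almost) zero; far from all data it reverts to the prior variance of 1. Optimizers like the one in Jones et al. use exactly this mean/variance pair to decide where to evaluate the expensive function next.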

Maximize Worst Case Bayes Score, by Coscott, addresses the question: "Given a consistent but incomplete theory, how should one choose a random model of that theory?"

Bayesian Networks for Logical Reasoning by Jon Williamson. Looks interesting, but I can't summarize it because I don't understand it.

And, a big one that I'm still working through: Non-Omniscience, Probabilistic Inference, and Metamathematics, by Paul Christiano. Very thorough, goes all the way from trying to define coherent belief to trying to build usable algorithms for assigning probabilities.

Dealing With Logical Omniscience: Expressiveness and Pragmatics, by Joseph Y. Halpern and Riccardo Pucella.

Reasoning About Rational, But Not Logically Omniscient Agents, by Ho Ngoc Duc. Sorry about the paywall.

And then the references from Christiano's report:

Abram Demski. Logical prior probability. In Joscha Bach, Ben Goertzel, and Matthew Ikle, editors, AGI, volume 7716 of Lecture Notes in Computer Science, pages 50-59. Springer, 2012.

Marcus Hutter, John W. Lloyd, Kee Siong Ng, and William T. B. Uther. Probabilities on sentences in an expressive logic. CoRR, abs/1209.2620, 2012.

Bas R. Steunebrink and Jürgen Schmidhuber. A family of Gödel machine implementations. In Jürgen Schmidhuber, Kristinn R. Thórisson, and Moshe Looks, editors, AGI, volume 6830 of Lecture Notes in Computer Science, pages 275-280. Springer, 2011.

If you have any more links, post them!

Or if you can contribute summaries.

What math is essential to the art of rationality?

14 Capla 15 October 2014 02:44AM

I have started to put together a sort of curriculum for learning the subjects that lend themselves to rationality. It includes things like experimental methodology and cognitive psychology (obviously), along with "support disciplines" like computer science and economics. I think (though maybe I'm wrong) that mathematics is one of the most important things to understand.

Eliezer said in the simple math of everything:

It seems to me that there's a substantial advantage in knowing the drop-dead basic fundamental embarrassingly simple mathematics in as many different subjects as you can manage.  Not, necessarily, the high-falutin' complicated damn math that appears in the latest journal articles.  Not unless you plan to become a professional in the field.  But for people who can read calculus, and sometimes just plain algebra, the drop-dead basic mathematics of a field may not take that long to learn.  And it's likely to change your outlook on life more than the math-free popularizations or the highly technical math.

I want to have access to outlook-changing insights. So, what math do I need to know? What are the generally applicable mathematical principles that are most worth learning? The above quote seems to indicate at least calculus, and everyone is a fan of Bayesian statistics (which I know little about). 
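As a taste of the Bayesian statistics mentioned above, here is the drop-dead simple version: Bayes' rule as "multiply a prior by likelihoods and renormalize", applied to a discrete grid of hypotheses about a coin's bias. The grid, the data, and the function name are made up for illustration:

```python
def bayes_update(prior, likelihoods):
    """One step of Bayes' rule over discrete hypotheses:
    posterior ∝ prior × likelihood, renormalized to sum to 1."""
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Three hypotheses about a coin's probability of landing heads
biases = [0.25, 0.50, 0.75]
prior = [1 / 3, 1 / 3, 1 / 3]

# Observe heads, heads, tails - update once per flip
for outcome in ("H", "H", "T"):
    likelihoods = [b if outcome == "H" else 1 - b for b in biases]
    prior = bayes_update(prior, likelihoods)

print(prior)  # the heads-heavy 0.75 hypothesis is now the most probable
```

After two heads and one tail, the posterior over the biases (0.25, 0.50, 0.75) is exactly (0.15, 0.40, 0.45): the evidence shifts belief towards the heads-heavy coin without eliminating the alternatives.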

Secondarily, what are some of the most important of that "drop-dead basic fundamental embarrassingly simple mathematics" from different fields? What fields are mathematically based, other than physics, evolutionary biology, and economics?

What is the most important math for an educated person to be familiar with?

As someone who took an honors calculus class in high school, liked it, and did alright in the class, but who has probably forgotten most of it by now and needs to relearn it, how should I go about learning that math?

Cryonics in Europe?

14 roland 10 October 2014 02:58PM

What are the best options for cryonics in Europe?

AFAIK the best option is still to use one of the US providers (e.g. Alcor) and arrange for transportation. There is a problem with this though, in that until you arrive in the US your body will be cooled with dry ice, which will cause huge ischemic damage.


  1. How critical is the ischemic damage? If I interpret this comment by Eliezer correctly we shouldn't worry about this damage if we consider future technology.
  2. Is there a way to have adequate cooling here in Europe until you arrive at the US for final storage?

There is also KrioRus, a Russian cryonics company; they seem to offer an option of cryo transportation, but I don't know how trustworthy they are.

Happiness Logging: One Year In

14 jkaufman 09 October 2014 07:24PM

I've been logging my happiness for a year now. [1] My phone notifies me at unpredictable intervals, and I respond with some tags. For example, if it pinged me now, I would enter "6 home bed computer blog". I always have a numeric tag for my current happiness, and then additional tags for where I am, what I'm doing, and who I'm with. So: what's working, what's not?
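For what it's worth, a ping in that format is trivial to process mechanically; a minimal sketch (the exact log format is my guess from the example above):

```python
def parse_ping(line):
    """Split a ping like '6 home bed computer blog' into (score, tags)."""
    parts = line.split()
    return int(parts[0]), set(parts[1:])

score, tags = parse_ping("6 home bed computer blog")
print(score, sorted(tags))  # 6 ['bed', 'blog', 'computer', 'home']
```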

When I first started rating my happiness on a 1-10 scale I didn't feel like I was very good at it. At the time I thought I might get better with practice, but I think I'm actually getting worse at it. Instead of really thinking "how do I feel right now?" it's really hard not to just think "in past situations like this I've put down '6' so I should put down '6' now".

Being honest with myself like this can also make me less happy. Normally if I'm negative about something I try not to dwell on it. I don't think about it, and soon I'm thinking about other things and not so negative. Logging that I'm unhappy makes me own up to being unhappy, which I think doesn't help. Though it's hard to know because any other sort of measurement would seem to have the same problem.

There's also a sampling issue. I don't have my phone ping me during the night, because I don't want it to wake me up. Before having a kid this worked properly: I'd plug in my phone, which turns off pings, promptly fall asleep, wake up in the morning, unplug my phone. Now, though, my sleep is generally interrupted several times a night. Time spent waiting to see if the baby falls back asleep on her own, or soothing her back to sleep if she doesn't, or lying awake at 4am because it's hard to fall back asleep when you've had 7hr and just spent an hour walking around and bouncing the baby; none of these are counted. On the whole, these experiences are much less enjoyable than my average; if the baby started sleeping through the night such that none of these were needed anymore I wouldn't see that as a loss at all. Which means my data is biased upward. I'm curious how happiness sampling studies have handled this; people with insomnia would be in a similar situation.
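To see the size of the effect, here's a toy calculation with entirely made-up numbers:

```python
# Made-up numbers: suppose logged (daytime) hours average 6.0 happiness and
# the unlogged night wakings average 3.0, making up 10% of total waking time.
day_mean, night_mean, night_frac = 6.0, 3.0, 0.10

true_mean = (1 - night_frac) * day_mean + night_frac * night_mean
logged_mean = day_mean  # the pings only ever see the daytime hours
print(round(true_mean, 1), logged_mean)  # 5.7 6.0 -- the log overstates by 0.3
```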

Another sampling issue is that I don't always notice when I get a ping. For the brief period when I was wearing a smartwatch I was consistently noticing all my pings but now I'm back to where I sometimes miss the vibration. I usually fill out these pings retroactively if it's only been a few minutes and I'm confident that I remember how I felt and what I was doing. I haven't been tagging these pings separately, but now that I think of it I'm going to add an "r" tag for retroactive responses.

Responding to pings when other people are around can also be tricky. For a while there were some people who would try and peek and see what I was writing, and I wasn't sure whether I should let them see. I ended up deciding that while having all the data eventually end up public was fine, filling it out in the moment needed to be private so I wouldn't be swayed by wanting to indicate things to the people around me.

The app I'm using isn't perfect, but it's pretty good. Entering new tags is a little annoying, and every time I back up the pings it forgets my past tags. The manual backup step also led to some missing data—all of September 2014 and some of August—because my phone died. This logging data is the only thing on my phone that isn't automatically backed up to the cloud, so when my phone died a few weeks ago I lost the last month of pings. [2] So now there's a gap in the graph.

While I'm not that confident in my numeric reports, I'm much more confident in the other tags that indicate what I'm doing at various times. If I'm on the computer I very reliably tag 'computer', etc. I haven't figured out what to do with this data yet, but it should be interesting for tracking behavior changes over time. One thing I remember doing is switching from wasting time on my computer to wasting it on my phone; let's see what that looked like:
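(The chart itself is on the original post.) The underlying computation is just a per-month tag fraction; a sketch, assuming a simple list-of-pings representation rather than whatever the app actually exports:

```python
from collections import defaultdict

def monthly_tag_fraction(pings, tag):
    """pings: list of (month string, set of tags). Returns the fraction of
    each month's pings that carry the given tag."""
    totals, hits = defaultdict(int), defaultdict(int)
    for month, tags in pings:
        totals[month] += 1
        hits[month] += tag in tags
    return {m: hits[m] / totals[m] for m in totals}

sample = [("2014-02", {"computer"}), ("2014-02", {"phone"}),
          ("2014-03", {"phone"}), ("2014-03", {"phone"})]
print(monthly_tag_fraction(sample, "phone"))  # {'2014-02': 0.5, '2014-03': 1.0}
```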

I don't remember why the big drop in computer use at the end of February 2014 happened. I assumed at first it was having a baby, after which I spent a lot of time reading on my phone while she was curled up on me, but that wasn't until a month later. I think this may have been when I realized that I didn't hate the facebook app on my phone after all? I'm not sure. The second drop in both phone- and computer-based timewasting, the temporary one in July 2014, was my being in England. My phone had internet but my computer usually didn't. And there was generally much more interesting stuff going on around me than my phone.

Overall my experience with logging has made me put less trust in "how happy are you right now" surveys of happiness. Aside from the practical issues like logging unexpected night wake-time, I mostly don't feel like the numbers I'm recording are very meaningful. I would rather spend more time in situations I label higher than lower on average, so there is some signal there, but I don't actually have the introspection to accurately report to myself how I'm feeling.

I also posted this on my blog.

[1] First ping was 2013.10.08 06:31:41, a year ago yesterday.

[2] Well, it was more my fault than that. The phone was partly working and I did a factory reset to see if that would fix it (it didn't) and I forgot to back up pings first.

Decision theories as heuristics

14 owencb 28 September 2014 02:36PM

Main claims:

  1. A lot of discussion of decision theories is really analysing them as decision-making heuristics for boundedly rational agents.
  2. Understanding decision-making heuristics is really useful.
  3. The quality of dialogue would be improved if it was recognised when they were being discussed as heuristics.

Epistemic status: I’ve had a “something smells” reaction to a lot of discussion of decision theory. This is my attempt to crystallise out what I was unhappy with. It seems correct to me at present, but I haven’t spent too much time trying to find problems with it, and it seems quite possible that I’ve missed something important. Also possible is that this just recapitulates material in a post somewhere I’ve not read.

Existing discussion is often about heuristics

Newcomb’s problem traditionally contrasts the decisions made by Causal Decision Theory (CDT) and Evidential Decision Theory (EDT). The story goes that CDT reasons that there is no causal link between a decision made now and the contents of the boxes, and therefore two-boxes. Meanwhile EDT looks at the evidence of past participants and chooses to one-box in order to get a high probability of being rich.

I claim that both of these stories are applications of the rules as simple heuristics to the most salient features of the case. As such they are robust to variation in the fine specification of the case, so we can have a conversation about them. If we want to apply them with more sophistication then the answers do become sensitive to the exact specification of the scenario, and it’s not obvious that either has to give the same answer the simple version produces.

First consider CDT. It has a high belief that there is no causal link between choosing to one- or two- box and Omega’s previous decision. But in practice, how high is this belief? If it doesn’t understand exactly how Omega works, it might reserve some probability to the possibility of a causal link, and this could be enough to tip the decision towards one-boxing.
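The arithmetic behind this is simple: with a $1,000,000 prize against a $1,000 side payment, even a tiny credence in a causal link tips the balance. A sketch (note that CDT's belief about the fixed box contents cancels out of the comparison, so only the credence in the link matters):

```python
# With credence p that your choice causally affects the box contents, the
# causal expected-value gain from one-boxing is p * $1,000,000, while
# two-boxing always adds the sure $1,000.
def cdt_prefers_one_boxing(p, big=1_000_000, small=1_000):
    return p * big > small

print(cdt_prefers_one_boxing(0.0005))  # False: credence too small
print(cdt_prefers_one_boxing(0.002))   # True: anything above 0.1% flips the decision
```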

On the other hand EDT should properly be able to consider many sources of evidence besides the ones about past successes of Omega’s predictions. In particular it could assess all of the evidence that normally leads us to believe that there is no backwards-causation in our universe. According to how strong this evidence is, and how strong the evidence that Omega’s decision really is locked in, it could conceivably two-box.

Note that I’m not asking here for a more careful specification of the set-up. Rather I’m claiming that a more careful specification could matter -- and so to the extent that people are happy to discuss it without providing lots more details they’re discussing the virtues of CDT and EDT as heuristics for decision-making rather than as an ultimate normative matter (even if they’re not thinking of their discussion that way).

Similarly, So8res had a recent post which discussed Newcomblike problems faced by people, and they are very clear examples when the decision theories are viewed as heuristics. If you allow the decision-maker to think carefully through all the unconscious signals sent by her decisions, it's less clear that there's anything Newcomblike.

Understanding decision-making heuristics is valuable

In claiming that a lot of the discussion is about heuristics, I’m not making an attack. We are all boundedly rational agents, and this will very likely be true of any artificial intelligence as well. So our decisions must perforce be made by heuristics. While it can be useful to study what an idealised method would look like (in order to work out how to approximate it), it’s certainly useful to study heuristics and determine what their relative strengths and weaknesses are.

In some cases we have good enough understanding of everything in the scenario that our heuristics can essentially reproduce the idealised method. When the scenario contains other agents which are as complicated as ourselves or more so, it seems like this has to fail.

We should acknowledge when we’re talking about heuristics

By separating discussion of the decision-theories-as-heuristics from decision-theories-as-idealised-decision-processes, we should improve the quality of dialogue in both parts. The discussion of the ideal would be less confused by examples of applications of the heuristics. The discussion of the heuristics could become more relevant by allowing people to talk about features which are only relevant for heuristics.

For example, it is relevant if one decision theory tends to need a more detailed description of the scenario to produce good answers. It’s relevant if one is less computationally tractable. And we can start to formulate and discuss hypotheses such as “CDT is the best decision-procedure when the scenario doesn’t involve other agents, or only other agents so simple that we can model them well. Updateless Decision Theory is the best decision-procedure when the scenario involves other agents too complex to model well”.

In addition, I suspect that it would help to reduce disagreements about the subject. Many disagreements in many domains are caused by people talking past each other. Discussion of heuristics without labelling it as such seems like it could generate lots of misunderstandings.

CEV: coherence versus extrapolation

14 Stuart_Armstrong 22 September 2014 11:24AM

It's just struck me that there might be a tension between the coherence (C) and the extrapolated (E) part of CEV. One reason that CEV might work is that the mindspace of humanity isn't that large - humans are pretty close to each other, in comparison to the space of possible minds. But this is far more true in every day decisions than in large scale ones.

Take a fundamentalist Christian, a total utilitarian, a strong Marxist, an extreme libertarian, and a couple more stereotypes that fit your fancy. What can their ideology tell us about their everyday activities? Well, very little. Those people could be rude, polite, arrogant, compassionate, etc... and their ideology is a very weak indication of that. Different ideologies and moral systems seem to mandate almost identical everyday and personal interactions (this is in itself very interesting, and causes me to see many systems of moralities as formal justifications of what people/society find "moral" anyway).

But now let's move to a more distant - "far" - level. How will these people vote in elections? Will they donate to charity, and if so, which ones? If they were given power (via wealth or position in some political or other organisation), how are they likely to use that power? Now their ideology is much more informative. Though it's not fully determinative, we would start to question the label if their actions at this level seemed out of sync. A Marxist that donated to a Conservative party, for instance, would give us pause, and we'd want to understand the apparent contradiction.

Let's move up yet another level. How would they design or change the universe if they had complete power? What is their ideal plan for the long term? At this level, we're entirely in far mode, and we would expect that their vastly divergent ideologies would be the most informative piece of information about their moral preferences. Details about their character and personalities, which loomed so large at the everyday level, will now be of far lesser relevance. This is because their large scale ideals are not tempered by reality and by human interactions, but exist in a pristine state in their minds, changing little if at all. And in almost every case, the world they imagine as their paradise will be literal hell for the others (and quite possibly for themselves).

To summarise: the human mindspace is much narrower in near mode than in far mode.

And what about CEV? Well, CEV is what we would be "if we knew more, thought faster, were more the people we wished we were, had grown up farther together". The "were more the people we wished we were" is going to be dominated by the highly divergent far mode thinking. The "had grown up farther together" clause attempts to mesh these divergences, but that simply obscures the difficulty involved. The more we extrapolate, the harder coherence becomes.

It strikes me that there is a strong order-of-operations issue here. I'm not a fan of CEV, but it seems it would be much better to construct, first, the coherent volition of humanity, and only then to extrapolate it.

An introduction to Newcomblike problems

14 So8res 20 September 2014 06:40PM

This is crossposted from my new blog, following up on my previous post. It introduces the original "Newcomb's problem" and discusses the motivation behind two-boxing and the reasons why CDT fails. The content is probably review for most LessWrongers; later posts in the sequence may be of more interest.

Last time I introduced causal decision theory (CDT) and showed how it has unsatisfactory behavior on "Newcomblike problems". Today, we'll explore Newcomblike problems in a bit more depth, starting with William Newcomb's original problem.

The Problem

Once upon a time there was a strange alien named Ω who is very very good at predicting humans. There is this one game that Ω likes to play with humans, and Ω has played it thousands of times without ever making a mistake. The game works as follows:

First, Ω observes the human for a while and collects lots of information about the human. Then, Ω makes a decision based on how Ω predicts the human will react in the upcoming game. Finally, Ω presents the human with two boxes.

The first box is blue, transparent, and contains $1000. The second box is red and opaque.

You may take either the red box alone, or both boxes,

Ω informs the human. (These are magical boxes where if you decide to take only the red one then the blue one, and the $1000 within, will disappear.)

If I predicted that you would take only the red box, then I filled it with $1,000,000. Otherwise, I left it empty. I have already made my choice,

Ω concludes, before turning around and walking away.

You may take either only the red box, or both boxes. (If you try something clever, like taking the red box while a friend takes a blue box, then the red box is filled with hornets. Lots and lots of hornets.) What do you do?
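For readers who want the evidential expected values spelled out: writing a for the predictor's observed accuracy, the calculation is a one-liner. A sketch:

```python
def edt_values(accuracy, big=1_000_000, small=1_000):
    """Evidential expected values, given the predictor's track-record accuracy."""
    one_box = accuracy * big                 # paid only when Omega predicted correctly
    two_box = small + (1 - accuracy) * big   # $1000 always; $1M only if Omega erred
    return one_box, two_box

one, two = edt_values(0.99)
print(f"one-box: ${one:,.0f}  two-box: ${two:,.0f}")  # one-boxing wins by a wide margin
```

At 99% accuracy the evidential case for one-boxing is overwhelming; two-boxing only pulls ahead evidentially when the accuracy drops near chance.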

continue reading »

Assessing oneself

13 polymer 26 September 2014 06:03PM

I'm sorry if this is the wrong place for this, but I'm kind of trying to find a turning point in my life.

I've been told repeatedly that I have a talent for math, or science (by qualified people). And I seem to be intelligent enough to understand large parts of math and physics. But I don't know if I'm intelligent enough to make a meaningful contribution to math or physics.

Lately I've been particularly sad, since my scores on the quantitative general GRE and, potentially, the Math subject test aren't "outstanding". They are certainly okay (official 78th percentile and unofficial 68th percentile, respectively). But that is "barely qualified" for a top 50 math program.

Given that I think these scores are likely correlated with my IQ (they seem to roughly predict my GPA so far: 3.5, as a math and physics major), I worry that I'm getting clues that maybe I should "give up".

This would be painful for me to accept if true, I care very deeply about inference and nature. It would be nice if I could have a job in this, but the standard career path seems to be telling me "maybe?"

When do you throw in the towel? How do you measure your own intelligence? I've already "given up" once before and tried programming, but the average actual problem was too easy relative to the intellectual work (memorizing technical fluff). And other engineering disciplines seem similar. Is there a compromise somewhere, or do I just need to grow up?


For what it's worth, the classes I've taken include Real and Complex Analysis, Algebra, Differential geometry, Quantum Mechanics, Mechanics, and others. And most of my GPA is burned by Algebra and 3rd term Quantum specifically. But part of my worry, is that somebody who is going to do well, would never get burned by courses like this. But I'm not really sure. It seems like one should fail sometimes, but rarely standard assessments.


Thank you all for your thoughts, you are a very warm community. I'll give more specific thoughts tomorrow. For what it's worth, I'll be 24 next month.


Double Edit:

Thank you all for your thoughts and suggestions. I think I will tentatively work towards an applied mathematics PhD. It isn't so important that the school you get into is in the top ten, and there will be lots of opportunities to work on a variety of interesting, important problems (throughout my life). Plus, after the PhD, transitioning into industry can be reasonably easy. It seems to make a fair bit of sense given my interests, background, and ability.

[Link] Forty Days

11 GLaDOS 29 September 2014 12:29PM

A post from Gregory Cochran's and Henry Harpending's excellent blog West Hunter.

One of the many interesting aspects of how the US dealt with the AIDS epidemic is what we didn’t do – in particular, quarantine.  Probably you need a decent test before quarantine is practical, but we had ELISA by 1985 and a better Western Blot test by 1987.

There was popular support for a quarantine.

But the public health experts generally opined that such a quarantine would not work.

Of course, they were wrong. Cuba instituted a rigorous quarantine. They mandated antiviral treatment for pregnant women and mandated C-sections for those that were HIV-positive. People positive for any venereal disease were tested for HIV as well. HIV-infected people must provide the names of all sexual partners for the past six months.

Compulsory quarantining was relaxed in 1994, but all those testing positive have to go to a sanatorium for 8 weeks of thorough education on the disease.  People who leave after 8 weeks and engage in unsafe sex undergo permanent quarantine.

Cuba did pretty well:  the per-capita death toll was 35 times lower than in the US.

Cuba had some advantages:  the epidemic hit them at least five years later than it did the US (first observed Cuban case in 1986, first noticed cases in the US in 1981).  That meant they were readier when they encountered the virus.  You’d think that because of the epidemic’s late start in Cuba, there would have been a shorter interval without the effective protease inhibitors (which arrived in 1995 in the US) – but they don’t seem to have arrived in Cuba until 2001, so the interval was about the same.

If we had adopted the same strategy as Cuba, it would not have been as effective, largely because of that time lag.  However, it surely would have prevented at least half of the ~600,000 AIDS deaths in the US.  Probably well over half.

I still see people stating that of course quarantine would not have worked: fairly often from dimwitted people with a Masters in Public Health.

My favorite comment was from a libertarian friend who said that although quarantine  certainly would have worked, better to sacrifice a few hundred thousand than validate the idea that the Feds can sometimes tell you what to do with good effect.

The commenter Ron Pavellas adds:

I was working as the CEO of a large hospital in California during the 1980s (I have MPH as my degree, by the way). I was outraged when the Public Health officials decided to not treat the HI-Virus as an STD for the purposes of case-finding, as is routinely and effectively done with syphilis, gonorrhea, etc. In other words, they decided to NOT perform classic epidemiology, thus sullying the whole field of Public Health. It was not politically correct to potentially ‘out’ individuals engaging in the kind of behavior which spreads the disease. No one has recently been concerned with the potential ‘outing’ of those who contract other STDs, due in large part to the confidential methods used and maintained over many decades. (Remember the Wassermann Test that was required before you got married?) As is pointed out in this article, lives were needlessly lost and untold suffering needlessly ensued.

The Wasserman Test.

Simulation argument meets decision theory

11 pallas 24 September 2014 10:47AM

Person X stands in front of a sophisticated computer playing the decision game Y which allows for the following options: either press the button "sim" or "not sim". If she presses "sim", the computer will simulate X*_1, X*_2, ..., X*_1000, which are a thousand identical copies of X. All of them will face the game Y* which - from the standpoint of each X* - is indistinguishable from Y. But the simulated computers in the games Y* don't run simulations. Additionally, we know that if X presses "sim" she receives a utility of 1, but "not sim" would only lead to 0.9. If X*_i (for i = 1, 2, ..., 1000) presses "sim" she receives 0.2, with "not sim" 0.1. For each agent it is true that she does not gain anything from the utility of another agent, despite the fact that she and the other agents are identical! Since all the agents are identical egoists facing the apparently same situation, all of them will take the same action.

Now the game starts. We face a computer and know all the above. We don't know whether we are X or any of the X*'s, should we now press "sim" or "not sim"?


EDIT: It seems to me that "identical" agents with "independent" utility functions were a clumsy set up for the above question, especially since one can interpret it as a contradiction. Hence, it might be better to switch to identical egoists, where each agent only cares about the money she herself receives (linear monetary value function). If X presses "sim" she will be given $10 (else $9) at the end of the game; each X* who presses "sim" receives $2 (else $1), respectively. Each agent in the game wants to maximize the expected monetary value they themselves will hold in their own hand after the game. So, intrinsically, they don't care how much money the other copies make.
To spice things up: What if the simulation will only happen a year later? Are we then able to "choose" which year it is?
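One way to make the dollar version concrete: under the strong (and contestable) anthropic assumption that you are equally likely to be any agent who actually exists, the expected payouts come out as follows. A sketch:

```python
# Expected dollars, assuming you are equally likely to be any of the
# agents who actually exist given the (common) choice.
n_sims = 1000

# If everyone presses "sim": X and all 1000 copies exist; X gets $10, copies $2.
ev_sim = (10 + n_sims * 2) / (n_sims + 1)

# If everyone presses "not sim": no simulations are run, so you must be X ($9).
ev_not_sim = 9.0

print(round(ev_sim, 3), ev_not_sim)  # 2.008 9.0 -- "not sim" wins under this assumption
```

The interesting part is that the choice itself determines how many agents exist, which is why the anthropic assumption does so much work here.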

Superintelligence 5: Forms of Superintelligence

10 KatjaGrace 14 October 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.

Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 3 (p52-61)

Summary

  1. A speed superintelligence could do what a human does, but faster. This would make the outside world seem very slow to it. It might cope with this partially by being very tiny, or virtual. (p53)
  2. A collective superintelligence is composed of smaller intellects, interacting in some way. It is especially good at tasks that can be broken into parts and completed in parallel. It can be improved by adding more smaller intellects, or by organizing them better. (p54)
  3. A quality superintelligence can carry out intellectual tasks that humans just can't in practice, without necessarily being better or faster at the things humans can do. This can be understood by analogy with the difference between other animals and humans, or the difference between humans with and without certain cognitive capabilities. (p56-7)
  4. These different kinds of superintelligence are especially good at different kinds of tasks. We might say they have different 'direct reach'. Ultimately they could all lead to one another, so can indirectly carry out the same tasks. We might say their 'indirect reach' is the same. (p58-9)
  5. We don't know how smart it is possible for a biological or a synthetic intelligence to be. Nonetheless we can be confident that synthetic entities can be much more intelligent than biological entities:
    1. Digital intelligences would have better hardware: they would be made of components ten million times faster than neurons; the components could communicate about two million times faster than neurons can; they could use many more components while our brains are constrained to our skulls; it looks like better memory should be feasible; and they could be built to be more reliable, long-lasting, flexible, and well suited to their environment.
    2. Digital intelligences would have better software: they could be cheaply and non-destructively 'edited'; they could be duplicated arbitrarily; they could have well aligned goals as a result of this duplication; they could share memories (at least for some forms of AI); and they could have powerful dedicated software (like our vision system) for domains where we have to rely on slow general reasoning.

Notes

  1. This chapter is about different kinds of superintelligent entities that could exist. I like to think about the closely related question, 'what kinds of better can intelligence be?' You can be a better baker if you can bake a cake faster, or bake more cakes, or bake better cakes. Similarly, a system can become more intelligent if it can do the same intelligent things faster, or if it does things that are qualitatively more intelligent. (Collective intelligence seems somewhat different, in that it appears to be a means to be faster or able to do better things, though it may have benefits in dimensions I'm not thinking of.) I think the chapter is getting at different ways intelligence can be better rather than 'forms' in general, which might vary on many other dimensions (e.g. emulation vs AI, goal directed vs. reflexive, nice vs. nasty).
  2. Some of the hardware and software advantages mentioned would be pretty transformative on their own. If you haven't before, consider taking a moment to think about what the world would be like if people could be cheaply and perfectly replicated, with their skills intact. Or if people could live arbitrarily long by replacing worn components. 
  3. The main differences between increasing intelligence of a system via speed and via collectiveness seem to be: (1) the 'collective' route requires that you can break up the task into parallelizable subtasks, (2) it generally has larger costs from communication between those subparts, and (3) it can't produce a single unit as fast as a comparable 'speed-based' system. This suggests that anything a collective intelligence can do, a comparable speed intelligence can do at least as well. One counterexample to this I can think of is that often groups include people with a diversity of knowledge and approaches, and so the group can do a lot more productive thinking than a single person could. It seems wrong to count this as a virtue of collective intelligence in general however, since you could also have a single fast system with varied approaches at different times.
  4. For each task, we can think of curves for how performance increases as we increase intelligence in these different ways. For instance, take the task of finding a fact on the internet quickly. It seems to me that a person who ran at 10x speed would get the figure 10x faster. Ten times as many people working in parallel would do it only a bit faster than one, depending on the variance of their individual performance, and whether they found some clever way to complement each other. It's not obvious how to multiply qualitative intelligence by a particular factor, especially as there are different ways to improve the quality of a system. It also seems non-obvious to me how search speed would scale with a particular measure such as IQ. 
  5. How much more intelligent do human systems get as we add more humans? I can't find much of an answer, but people have investigated the effect of things like team size, city size, and scientific collaboration on various measures of productivity.
  6. The things we might think of as collective intelligences - e.g. companies, governments, academic fields - seem notable to me for being slow-moving, relative to their components. If someone were to steal some chewing gum from Target, Target can respond in the sense that an employee can try to stop them. And this is no slower than an individual human acting to stop their chewing gum from being taken. However it also doesn't involve any extra problem-solving from the organization - to the extent that the organization's intelligence goes into the issue, it has to have already done the thinking ahead of time. Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.
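The parallelism point in item 4 above - that ten parallel searchers are only a bit faster than one - can be illustrated with a toy model where the group finishes as soon as its fastest member does. A sketch with an arbitrary lognormal distribution of individual search times:

```python
import random

random.seed(0)

def mean_finish_time(n_searchers, trials=10_000):
    """Toy model: each searcher's time is an independent lognormal draw,
    and the group is done as soon as its fastest member finishes."""
    return sum(min(random.lognormvariate(0, 0.5) for _ in range(n_searchers))
               for _ in range(trials)) / trials

t1, t10 = mean_finish_time(1), mean_finish_time(10)
print(f"speedup from 10x the searchers: {t1 / t10:.1f}x")
# far short of the 10x speedup a single 10x-speed searcher would get
```

The exact speedup depends on the variance of individual times, but for any reasonable distribution the minimum of ten draws falls well short of a tenfold improvement.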

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Produce improved measures of (substrate-independent) general intelligence. Build on the ideas of Legg, Yudkowsky, Goertzel, Hernandez-Orallo & Dowe, etc. Differentiate intelligence quality from speed.
  2. List some feasible but non-realized cognitive talents for humans, and explore what could be achieved if they were given to some humans.
  3. List and examine some types of problems better solved by a speed superintelligence than by a collective superintelligence, and vice versa. Also, what are the returns on “more brains applied to the problem” (collective intelligence) for various problems? If there were merely a huge number of human-level agents added to the economy, how much would it speed up economic growth, technological progress, or other relevant metrics? If there were a large number of researchers added to the field of AI, how would it change progress?
  4. How does intelligence quality improve performance on economically relevant tasks?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about 'intelligence explosion kinetics', a topic at the center of much contemporary debate over the arrival of machine intelligence. To prepare, read Chapter 4, The kinetics of an intelligence explosion (p62-77). The discussion will go live at 6pm Pacific time next Monday 20 October. Sign up to be notified here.

Bayesian conundrum

10 Jan_Rzymkowski 13 October 2014 12:39AM

For some time I've been pondering a certain scenario, which I'll describe shortly. I hope you can help me find a satisfactory answer, or at the very least be as perplexed by this probabilistic question as I am. Feel free to assign any reasonable a priori probabilities as you like. Here's the problem:

It's a cold, cold winter. The radiators are hardly working, but that's not why you're sitting so anxiously in your chair. The real reason is that tomorrow is your assigned upload (and damn, there's just a one-in-a-million chance you're not gonna get it) and you just can't wait to leave your corporality behind. "Oh, I'm so sick of having a body, especially now. I'm freezing!" you think to yourself. "I wish I were already uploaded and could just pop myself off to a tropical island."

And now it strikes you. It's a weird solution, but it feels so appealing. You make a solemn oath (you'd say there's a one-in-a-million chance you'd break it) that soon after upload you will simulate this exact moment a thousand times simultaneously, and when the clock strikes 11 AM, you're gonna be transposed to a Hawaiian beach, with a fancy drink in your hand.

It's 10:59 on the clock. What's the probability that you'll be in a tropical paradise in one minute?

And to make things more paradoxical: what would that probability be if you hadn't made such an oath just seconds ago?
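One naive way to put numbers on this (a sketch of my own, using a simple self-sampling assumption; the post itself doesn't endorse any particular answer): treat your present moment as a random draw from the original experience plus the simulated copies, and discount by the chances that the upload or the oath falls through.

```python
def p_beach(n_copies, p_upload, p_keep_oath):
    """Probability of finding yourself on the beach at 11 AM, under naive
    self-sampling: the simulations only exist if the upload happens and the
    oath is kept, and you are then equally likely to be the original or any
    one of the n_copies simulated moments."""
    p_sims_exist = p_upload * p_keep_oath
    return p_sims_exist * (n_copies / (n_copies + 1))

# With the post's numbers: a thousand copies, one-in-a-million failure chances.
print(p_beach(1000, 1 - 1e-6, 1 - 1e-6))  # about 0.999
# Without the oath there are no simulations, and the probability is 0:
print(p_beach(1000, 1 - 1e-6, 0.0))       # 0.0
```

The paradox lives in the second question: on this accounting, merely making the oath moves the probability from roughly zero to roughly one, even though nothing about the physical moment changed.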

What's the right way to think about how much to give to charity?

10 irrational 24 September 2014 09:42PM

I'd like to hear from people about a process they use to decide how much to give to charity. Personally, I have very high income, and while we donate significant money in absolute terms, in relative terms the amount is <1% of our post-tax income. It seems to me that it's too little, but I have no moral intuition as to what the right amount is.

I have a good intuition on how to allocate the money, so that's not a problem.

Background: I have a wife and two kids, one with significant health issues (i.e. medical bills - possibly for life), most money we spend goes to private school tuition x 2, the above mentioned medical bills, mortgage, and miscellaneous life expenses. And we max out retirement savings.

If you have some sort of quantitative system where you figure out how much to spend on charity, please share. If you just use vague feelings, and you think there can be no reasonable quantitative system, please tell me that as well.

Update: as suggested in the comments, I'll make it more explicit: please also share how you determine how much to give.

Improving the World

9 Viliam_Bur 10 October 2014 12:24PM

What are we doing to make this world a better (epistemically or instrumentally) place?

Some answers to this question are already written in Bragging Threads and other places, but I think they deserve a special emphasis. I think that many smart people are focused on improving themselves, which is a good thing in the long run, but sometimes the world needs some help right now. (Also, there is the failure mode of learning a lot about something, and then not actually applying that knowledge in real life.) Becoming stronger so you can create more good in the future is about the good you will create in the future; but what good are you creating right now?



Top-level comments are the things you are doing right now (not merely planning to do once) to improve the world... or a part of the world... or your neighborhood... or simply any small part of the world other than only yourself.

Meta debates go under the "META" comment.

Superintelligence Reading Group 3: AI and Uploads

9 KatjaGrace 30 September 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.

Welcome. This week we discuss the third section in the reading guide, AI & Whole Brain Emulation. This is about two possible routes to the development of superintelligence: the route of developing intelligent algorithms by hand, and the route of replicating a human brain in great detail.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Artificial intelligence” and “Whole brain emulation” from Chapter 2 (p22-36)



Summary

  1. Superintelligence is defined as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'.
  2. There are several plausible routes to the arrival of a superintelligence: artificial intelligence, whole brain emulation, biological cognition, brain-computer interfaces, and networks and organizations. 
  3. Multiple possible paths to superintelligence make it more likely that we will get there somehow. 
Artificial intelligence

  1. A human-level artificial intelligence would probably have learning, uncertainty, and concept formation as central features.
  2. Evolution produced human-level intelligence. This means it is possible, but it is unclear how much it says about the effort required.
  3. Humans could perhaps develop human-level artificial intelligence by replicating a similar evolutionary process virtually. A quick calculation suggests this would be too expensive to be feasible for a century, though the process might be made more efficient.
  4. Human-level AI might be developed by copying the human brain to various degrees. If the copying is very close, the resulting agent would be a 'whole brain emulation', which we'll discuss shortly. If the copying is only of a few key insights about brains, the resulting AI might be very unlike humans.
  5. AI might iteratively improve itself from a meagre beginning. We'll examine this idea later. Some definitions for discussing this:
    1. 'Seed AI': a modest AI which can bootstrap into an impressive AI by improving its own architecture.
    2. 'Recursive self-improvement': the envisaged process of AI (perhaps a seed AI) iteratively improving itself.
    3. 'Intelligence explosion': a hypothesized event in which an AI rapidly improves from 'relatively modest' to superhuman level (usually imagined to be as a result of recursive self-improvement).
  6. The possibility of an intelligence explosion suggests we might have modest AI, then suddenly and surprisingly have super-human AI.
  7. An AI mind might generally be very different from a human mind. 

Whole brain emulation

  1. Whole brain emulation (WBE or 'uploading') involves scanning a human brain in a lot of detail, then making a computer model of the relevant structures in the brain.
  2. Three steps are needed for uploading: sufficiently detailed scanning, ability to process the scans into a model of the brain, and enough hardware to run the model. These correspond to three required technologies: scanning, translation (or interpreting images into models), and simulation (or hardware). These technologies appear attainable through incremental progress, by very roughly mid-century.
  3. This process might produce something much like the original person, in terms of mental characteristics. However the copies could also have lower fidelity. For instance, they might be humanlike instead of copies of specific humans, or they may only be humanlike in being able to do some tasks humans do, while being alien in other regards.


Notes

  1. What routes to human-level AI do people think are most likely?
    Bostrom and Müller's survey asked participants to compare various methods for producing synthetic and biologically inspired AI. They asked, “In your opinion, what are the research approaches that might contribute the most to the development of such HLMI?” Selection was from a list; more than one selection was possible. They report that the responses were very similar for the different groups surveyed, except that whole brain emulation got 0% in the TOP100 group (100 most cited authors in AI) but 46% in the AGI group (participants at Artificial General Intelligence conferences). Note that they are only asking about synthetic AI and brain emulations, not the other paths to superintelligence we will discuss next week.
  2. How different might AI minds be?
    Omohundro suggests advanced AIs will tend to have important instrumental goals in common, such as the desire to accumulate resources and the desire to not be killed. 
  3. Anthropic reasoning 
    ‘We must avoid the error of inferring, from the fact that intelligent life evolved on Earth, that the evolutionary processes involved had a reasonably high prior probability of producing intelligence’ (p27) 

    Whether such inferences are valid is a topic of contention. For a book-length overview of the question, see Bostrom’s Anthropic Bias. I’ve written shorter (Ch 2) and even shorter summaries, which links to other relevant material. The Doomsday Argument and Sleeping Beauty Problem are closely related.

  4. More detail on the brain emulation scheme
    Whole Brain Emulation: A Roadmap is an extensive source on this, written in 2008. If that's a bit too much detail, Anders Sandberg (an author of the Roadmap) summarises in an entertaining (and much shorter) talk. More recently, Anders tried to predict when whole brain emulation would be feasible with a statistical model. Randal Koene and Ken Hayworth both recently spoke to Luke Muehlhauser about the Roadmap and what research projects would help with brain emulation now.
  5. Levels of detail
    As you may predict, the feasibility of brain emulation is not universally agreed upon. One contentious point is the degree of detail needed to emulate a human brain. For instance, you might just need the connections between neurons and some basic neuron models, or you might need to model the states of different membranes, or the concentrations of neurotransmitters. The Whole Brain Emulation Roadmap lists some possible levels of detail in figure 2 (the yellow ones were considered most plausible). Physicist Richard Jones argues that simulation of the molecular level would be needed, and that the project is infeasible.

  6. Other problems with whole brain emulation
    Sandberg considers many potential impediments here.

  7. Order matters for brain emulation technologies (scanning, hardware, and modeling)
    Bostrom points out that this order matters for how much warning we receive that brain emulations are about to arrive (p35). Order might also matter a lot to the social implications of brain emulations. Robin Hanson discusses this briefly here and in this talk (starting at 30:50); this paper also discusses the issue.

  8. What would happen after brain emulations were developed?
    We will look more at this in Chapter 11 (weeks 17-19) as well as perhaps earlier, including what a brain emulation society might look like, how brain emulations might lead to superintelligence, and whether any of this is good.

  9. Scanning (p30-36)
    ‘With a scanning tunneling microscope it is possible to ‘see’ individual atoms, which is a far higher resolution than needed...microscopy technology would need not just sufficient resolution but also sufficient throughput.’

    Here are some atoms, neurons, and neuronal activity in a living larval zebrafish, and videos of various neural events.

    Array tomography of mouse somatosensory cortex from Smithlab.

    A molecule made from eight cesium and eight iodine atoms (from here).
  10. Efforts to map connections between neurons
    Here is a 5m video about recent efforts, with many nice pictures. If you enjoy coloring in, you can take part in a gamified project to help map the brain's neural connections! Or you can just look at the pictures they made.

  11. The C. elegans connectome (p34-35)
    As Bostrom mentions, we already know how all of C. elegans neurons are connected. Here's a picture of it (via Sebastian Seung):

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some taken from Luke Muehlhauser's list:

  1. Produce a better - or merely somewhat independent - estimate of how much computing power it would take to rerun evolution artificially. (p25-6)
  2. How powerful is evolution for finding things like human-level intelligence? (You'll probably need a better metric than 'power'). What are its strengths and weaknesses compared to human researchers?
  3. Conduct a more thorough investigation into the approaches to AI that are likely to lead to human-level intelligence, for instance by interviewing AI researchers in more depth about their opinions on the question.
  4. Measure relevant progress in neuroscience, so that trends can be extrapolated to neuroscience-inspired AI. Finding good metrics seems to be hard here.
  5. e.g. How is microscopy progressing? It’s harder to get a relevant measure than you might think, because (as noted p31-33) high enough resolution is already feasible, yet throughput is low and there are other complications. 
  6. Randal Koene suggests a number of technical research projects that would forward whole brain emulation (fifth question).
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about other paths to the development of superintelligence: biological cognition, brain-computer interfaces, and organizations. To prepare, read Biological Cognition and the rest of Chapter 2. The discussion will go live at 6pm Pacific time next Monday 6 October. Sign up to be notified here.

Superintelligence Reading Group 2: Forecasting AI

9 KatjaGrace 23 September 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.

Welcome. This week we discuss the second section in the reading guide, Forecasting AI. This is about predictions of AI, and what we should make of them.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Opinions about the future of machine intelligence, from Chapter 1 (p18-21), and Muehlhauser, When Will AI be Created?


Summary

Opinions about the future of machine intelligence, from Chapter 1 (p18-21)

  1. AI researchers hold a variety of views on when human-level AI will arrive, and what it will be like.
  2. A recent set of surveys of AI researchers produced the following median dates: 
    • for human-level AI with 10% probability: 2022
    • for human-level AI with 50% probability: 2040
    • for human-level AI with 90% probability: 2075
  3. Surveyed AI researchers in aggregate gave 10% probability to 'superintelligence' within two years of human level AI, and 75% to 'superintelligence' within 30 years.
  4. When asked about the long-term impacts of human level AI, surveyed AI researchers gave the responses in the figure below (these are 'renormalized median' responses; 'TOP 100' is one of the surveyed groups, 'Combined' is all of them). 
  5. There are various reasons to expect such opinion polls and public statements to be fairly inaccurate.
  6. Nonetheless, such opinions suggest that the prospect of human-level AI is worthy of attention.
Muehlhauser, When Will AI be Created?

  1. Predicting when human-level AI will arrive is hard.
  2. The estimates of informed people can vary between a small number of decades and a thousand years.
  3. Different time scales have different policy implications.
  4. Several surveys of AI experts exist, but Muehlhauser suspects sampling bias (e.g. optimistic views being sampled more often) makes such surveys of little use.
  5. Predicting human-level AI development is the kind of task that experts are characteristically bad at, according to extensive research on what makes people better at predicting things.
  6. People try to predict human-level AI by extrapolating hardware trends. This probably won't work, as AI requires software as well as hardware, and software appears to be a substantial bottleneck.
  7. We might try to extrapolate software progress, but software often progresses less smoothly, and is also hard to design good metrics for.
  8. A number of plausible events might substantially accelerate or slow progress toward human-level AI, such as an end to Moore's Law, depletion of low-hanging fruit, societal collapse, or a change in incentives for development.
  9. The appropriate response to this situation is uncertainty: you should neither be confident that human-level AI will take less than 30 years, nor that it will take more than a hundred years.
  10. We can still hope to do better: there are known ways to improve predictive accuracy, such as making quantitative predictions, looking for concrete 'signposts', looking at aggregated predictions, and decomposing complex phenomena into simpler ones.
Notes

  1. More (similar) surveys on when human-level AI will be developed
    Bostrom discusses some recent polls in detail, and mentions that others are fairly consistent. Below are the surveys I could find. Several of them give dates when median respondents believe there is a 10%, 50% or 90% chance of AI, which I have recorded as '10% year' etc. If their findings were in another form, they are in the last column. Note that some of these surveys are fairly informal, and many participants are not AI experts; I'd guess this is especially true of the Bainbridge, AI@50 and Klein ones. 'Kruel' is the set of interviews from which Nils Nilsson is quoted on p19. The interviews cover a wider range of topics, and are indexed here.

                                          10% year   50% year   90% year   Other predictions
    Michie 1972 (paper download)              -          -          -      Fairly even spread between 20, 50 and >50 years
    Bainbridge 2005                           -          -          -      Median prediction 2085
    AI@50 poll                                -          -          -      82% predict more than 50 years (>2056) or never
    Baum et al.                             2020       2040       2075
    Klein 2011                                -          -          -      Median 2030-2050
    FHI 2011                                2028       2050       2150
    Kruel 2011- (interviews, summary)       2025       2035       2070
    FHI: AGI 2014                           2022       2040       2065
    FHI: TOP100 2014                        2022       2040       2075
    FHI: EETN 2014                          2020       2050       2093
    FHI: PT-AI 2014                         2023       2048       2080
    Hanson (ongoing)                          -          -          -      Most say we have come 10% or less of the way to human level
  2. Predictions in public statements
    Polls are one source of predictions on AI. Another source is public statements - that is, things people choose to say publicly. MIRI arranged for the collection of these public statements, which you can now download and play with (the original and info about it, my edited version and explanation for changes). The figure below shows the cumulative fraction of public statements claiming that human-level AI will be more likely than not by a particular year - or at least claiming something that can be broadly interpreted as that. It only includes recorded statements made since 2000. There are various warnings and details in interpreting this, but I don't think they make a big difference, so they are probably not worth considering unless you are especially interested. Note that the authors of these statements are a mixture of mostly AI researchers (including disproportionately many working on human-level AI), a few futurists, and a few other people.

    (LH axis = fraction of people predicting human-level AI by that date) 

    Cumulative distribution of predicted date of AI

    As you can see, the median date (when the graph hits the 0.5 mark) for human-level AI here is much like that in the survey data: 2040 or so.

    I would generally expect predictions in public statements to be relatively early, because people just don't tend to bother writing books about how exciting things are not going to happen for a while, unless their prediction is fascinatingly late. I checked this more thoroughly, by comparing the outcomes of surveys to the statements made by people in similar groups to those surveyed (e.g. if the survey was of AI researchers, I looked at statements made by AI researchers). In my (very cursory) assessment (detailed at the end of this page) there is a bit of a difference: predictions from surveys are 0-23 years later than those from public statements.
  3. What kinds of things are people good at predicting?
    Armstrong and Sotala (p11) summarize a few research efforts in recent decades as follows.

    Note that the problem of predicting AI mostly falls on the right. Unfortunately this doesn't tell us anything about how much harder AI timelines are to predict than other things, or the absolute level of predictive accuracy associated with any combination of features. However if you have a rough idea of how well humans predict things, you might correct it downward when predicting how well humans predict future AI development and its social consequences.
  4. Biases
    As well as just being generally inaccurate, predictions of AI are often suspected to be subject to a number of biases. Bostrom claimed earlier that 'twenty years is the sweet spot for prognosticators of radical change' (p4). A related concern is that people always predict revolutionary changes just within their lifetimes (the so-called Maes-Garreau law). Worse problems come from selection effects: the people making all of these predictions are selected for thinking AI is the best thing to spend their lives on, so might be especially optimistic. Further, more exciting claims of impending robot revolution might be published and remembered more often. More bias might come from wishful thinking: having spent a lot of their lives on it, researchers might hope especially hard for it to go well. On the other hand, as Nils Nilsson points out, AI researchers are wary of past predictions and so try hard to retain respectability, for instance by focussing on 'weak AI'. This could systematically push their predictions later.

    We have some evidence about these biases. Armstrong and Sotala (using the MIRI dataset) find people are especially willing to predict AI around 20 years in the future, but couldn't find evidence of the Maes-Garreau law. Another way of looking for the Maes-Garreau law is via correlation between age and predicted time to AI, which is weak (-.017) in the edited MIRI dataset. A general tendency to make predictions based on incentives rather than available information is weakly supported by predictions not changing much over time, which is pretty much what we see in the MIRI dataset. In the figure below, 'early' predictions are made before 2000, and 'late' ones since then.

    Cumulative distribution of predicted Years to AI, in early and late predictions.

    We can learn something about selection effects from AI researchers being especially optimistic about AI from comparing groups who might be more or less selected in this way. For instance, we can compare most AI researchers - who tend to work on narrow intelligent capabilities - and researchers of 'artificial general intelligence' (AGI) who specifically focus on creating human-level agents. The figure below shows this comparison with the edited MIRI dataset, using a rough assessment of who works on AGI vs. other AI and only predictions made from 2000 onward ('late'). Interestingly, the AGI predictions indeed look like the most optimistic half of the AI predictions. 

    Cumulative distribution of predicted date of AI, for AGI and other AI researchers

    We can also compare other groups in the dataset - 'futurists' and other people (according to our own heuristic assessment). While the picture is interesting, note that both of these groups were very small (as you can see by the large jumps in the graph). 

    Cumulative distribution of predicted date of AI, for various groups

    Remember that these differences may not be due to bias, but rather to better understanding. It could well be that AGI research is very promising, and the closer you are to it, the more you realize that. Nonetheless, we can say some things from this data. The total selection bias toward optimism in communities selected for optimism is probably not more than the differences we see here - a few decades in the median - though it could plausibly be that large.

    These have been some rough calculations to get an idea of the extent of a few hypothesized biases. I don't think they are very accurate, but I want to point out that you can actually gather empirical data on these things, and claim that given the current level of research on these questions, you can learn interesting things fairly cheaply, without doing very elaborate or rigorous investigations.
  5. What definition of 'superintelligence' do AI experts expect within two years of human-level AI with probability 10% and within thirty years with probability 75%?
    “Assume for the purpose of this question that such HLMI will at some point exist. How likely do you then think it is that within (2 years / 30 years) thereafter there will be machine intelligence that greatly surpasses the performance of every human in most professions?” See the paper for other details about Bostrom and Müller's surveys (the ones in the book).
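Several of the notes above read medians off cumulative distributions of predicted AI dates. For concreteness, here is a minimal sketch of that machinery (my own code, run on made-up dates rather than the MIRI dataset):

```python
def empirical_cdf(predicted_years):
    """Return (year, cumulative fraction) pairs: the fraction of predictions
    placing human-level AI at or before each year."""
    years = sorted(predicted_years)
    n = len(years)
    return [(year, (i + 1) / n) for i, year in enumerate(years)]

def median_prediction(predicted_years):
    """The year at which the cumulative fraction first reaches 0.5."""
    for year, fraction in empirical_cdf(predicted_years):
        if fraction >= 0.5:
            return year

# Hypothetical predicted dates (illustrative only):
sample = [2025, 2030, 2040, 2045, 2060, 2100]
print(median_prediction(sample))  # 2040
```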

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some taken from Luke Muehlhauser's list:

  1. Instead of asking how long until AI, Robin Hanson's mini-survey asks people how far we have come (in a particular sub-area) in the last 20 years, as a fraction of the remaining distance. Responses to this question are generally fairly low - 5% is common. His respondents also tend to say that progress isn't accelerating especially. These estimates imply that in any given sub-area of AI, human-level ability should be reached in about 200 years, which is strongly at odds with what researchers say in the other surveys. An interesting project would be to expand Robin's survey, and try to understand the discrepancy, and which estimates we should be using. We made a guide to carrying out this project.
  2. There are many possible empirical projects which would better inform estimates of timelines e.g. measuring the landscape and trends of computation (MIRI started this here, and made a project guide), analyzing performance of different versions of software on benchmark problems to find how much hardware and software contributed to progress, developing metrics to meaningfully measure AI progress, investigating the extent of AI inspiration from biology in the past, measuring research inputs over time (e.g. a start), and finding the characteristic patterns of progress in algorithms (my attempts here).
  3. Make a detailed assessment of likely timelines in communication with some informed AI researchers.
  4. Gather and interpret past efforts to predict technology decades ahead of time. Here are a few efforts to judge past technological predictions: Clarke 1969, Wise 1976, Albright 2002, Mullins 2012, Kurzweil on his own predictions, and other people on Kurzweil's predictions.
  5. Above I showed you several rough calculations I did. A rigorous version of any of these would be useful.
  6. Did most early AI scientists really think AI was right around the corner, or was it just a few people? The earliest survey available (Michie 1973) suggests it may have been just a few people. For those that thought AI was right around the corner, how much did they think about the safety and ethical challenges? If they thought and talked about it substantially, why was there so little published on the subject? If they really didn’t think much about it, what does that imply about how seriously AI scientists will treat the safety and ethical challenges of AI in the future? Some relevant sources here.
  7. Conduct a Delphi study of likely AGI impacts. Participants could be AI scientists, researchers who work on high-assurance software systems, and AGI theorists.
  8. Signpost the future. Superintelligence explores many different ways the future might play out with regard to superintelligence, but cannot help being somewhat agnostic about which particular path the future will take. Come up with clear diagnostic signals that policy makers can use to gauge whether things are developing toward or away from one set of scenarios or another. If X does or does not happen by 2030, what does that suggest about the path we’re on? If Y ends up taking value A or B, what does that imply?
  9. Another survey of AI scientists’ estimates on AGI timelines, takeoff speed, and likely social outcomes, with more respondents and a higher response rate than the best current survey, which is probably Müller & Bostrom (2014).
  10. Download the MIRI dataset and see if you can find anything interesting in it.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about two paths to the development of superintelligence: AI coded by humans, and whole brain emulation. To prepare, read Artificial Intelligence and Whole Brain Emulation from Chapter 2. The discussion will go live at 6pm Pacific time next Monday, 29 September. Sign up to be notified here.

What is optimization power, formally?

8 sbenthall 18 October 2014 06:37PM

I'm interested in thinking formally about AI risk. I believe that a proper mathematization of the problem is important to making intellectual progress in that area.

I have been trying to understand the rather critical notion of optimization power. I was hoping that I could find a clear definition in Bostrom's Superintelligence. But having looked in the index at all the references to optimization power that it mentions, as far as I can tell he defines it nowhere. The closest he gets is defining it in terms of rate of change and recalcitrance (pp. 62-77). This definition is empty: it just tautologically defines optimization power in terms of other, equally vague terms.

Looking around, this post by Yudkowsky, "Measuring Optimization Power", doesn't directly formalize optimization power. He does, however, discuss how one would predict or identify whether a system is the result of an optimization process, in a Bayesian way:

The quantity we're measuring tells us how improbable this event is, in the absence of optimization, relative to some prior measure that describes the unoptimized probabilities.  To look at it another way, the quantity is how surprised you would be by the event, conditional on the hypothesis that there were no optimization processes around.  This plugs directly into Bayesian updating: it says that highly optimized events are strong evidence for optimization processes that produce them.

This is not, however, a definition that can be used to help identify the pace of AI development, for example. Rather, it is just an expression of how one would infer anything in a Bayesian way, applied to the vague 'optimization process' phenomenon.
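For concreteness, the quoted measure can at least be operationalized as a Monte Carlo estimate: sample outcomes from the unoptimized (null) process and count how often they do at least as well as the observed outcome. A minimal sketch, in bits; the uniform null and the fitness numbers here are illustrative assumptions, not from Yudkowsky's post:

```python
import math
import random

def optimization_power_bits(observed, null_samples, preference_key=lambda x: x):
    """Yudkowsky-style optimization power, estimated in bits:
    -log2 of the probability that the unoptimized (null) process
    does at least as well as the observed outcome."""
    better_or_equal = sum(
        1 for s in null_samples if preference_key(s) >= preference_key(observed)
    )
    # Add-one smoothing so an outcome never seen in the null gets a finite score.
    p = (better_or_equal + 1) / (len(null_samples) + 1)
    return -math.log2(p)

# Toy example: a hill-climber reaches fitness 9.7 on a task where the
# null (random guessing) yields fitness ~ Uniform(0, 10).
random.seed(0)
null = [random.uniform(0, 10) for _ in range(100_000)]
bits = optimization_power_bits(9.7, null)  # roughly -log2(0.03), about 5 bits
```

An unremarkable outcome scores near zero bits, since the null hits it almost every time; the measure grows as the outcome becomes more improbable without optimization.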

Alex Altair has a promising attempt at formalization here, but it looks inconclusive. He points out the difficulty of identifying optimization power with just the shift in the probability mass of utility according to some utility function. I may be misunderstanding, but my gloss on this is that defining optimization power purely in terms of differences in probability of utility doesn't say anything substantive about how a process exerts power, which is important if it is going to be related to some other concept, like recalcitrance, in a useful way.

Has there been any further progress in this area?

It's notable that this discussion makes zero references to computational complexity, formally or otherwise. That's notable because the informal discussion about 'optimization power' is about speed and capacity to compute--whether it be brains, chips, or whatever. There is a very well-developed formal theory of computational complexity that's at the heart of contemporary statistical learning theory. I would think that the tools for specifying optimization power would be in there somewhere.

Those of you interested in the historical literature on this sort of thing may be interested in the cyberneticists Rosenblueth, Wiener, and Bigelow's 1943 paper "Behavior, Purpose and Teleology", one of the first papers to discuss machine 'purpose', which they associate with optimization, but in the particular sense of a process that is driven by a negative feedback loop as it approaches its goal. That does not exactly square with an 'explosive' teleology. This is one indicator that explosively purposeful machines might be quite rare or bizarre. In general, the 20th-century cybernetics movement had a lot in common with the contemporary AI research community. Which is interesting, because its literature is rarely directly referenced. I wonder why.

3-day Solstice in Leipzig, Germany: small, nice, very low cost, includes accommodation, 19th-21st Dec

8 chaosmage 09 October 2014 04:38PM

Hi everyone,

like last year, we'll have a Secular Solstice in Leipzig, Germany. You're invited - message me if you'd like to attend.

We have space for about 25 people. So this isn't a huge event like you'd have in NYC - but it is special in a different way, because it goes Friday to Sunday and involves lots of things to do. We have a big and very nice apartment in the center of Leipzig where lots of people can sleep, so spreading this over several days is easy, and an obvious way to kick it up a notch from last year's event.

We'll do some of the beautiful ceremonial pieces and songs from Raymond's Hymnal and ride the same general vibe. And on top of that, we'll do freestyle, participatory work in groups where we design ways to celebrate the Solstice, using an Open Space Technology-inspired method. After all, we're only getting things started, and surely there are many kinds of celebration to explore. Let's find some of them, try them out together and, by comparing effects, help optimize Secular Solstices!

We'll cook together and share the cost for ingredients and drinks - apart from that the event is free. Up to 18 guests can sleep right on the premises - half of them on comfortable beds and mattresses, the rest need to bring sleeping bags and camping mats. If you really prefer a single or double room, there are fairly cheap hotels and hostels nearby; message me for assistance if necessary.

The outline

Arrivals are Friday 6pm-7:30. We'll have a welcome round and a few things to get us in the mood, then discuss ideas for Solstice activities to explore together. We'll find the most popular ones and get into groups that design them into something they want to share with everyone. Groups should self-organize fairly fluidly, i.e. you can switch groups, steal ideas from each other etc. and get to know each other in the process. So this will basically be a very social evening of preparation for the next day. Also, cooking.

On Saturday we will meet in the morning to plan the day, spend some time decorating and cooking, shopping for stuff groups have found they need to do their things, and probably rehearsing. Groups who are done preparing their thing will in some cases probably prepare another, because that is just what happens. We'll have time to chat and get to know each other better. The ceremonial part starts at sunset and is expected to take several hours. After that we'll party - some people will probably want to stay up all night and welcome the sunrise just like last year.

On Sunday we'll have less cohesion probably, because of high variance in how much people have slept. Still we should be able to come together for feedback discussion, have a nice closing, clean up a bit, and say farewell. If you need more sleep before you get on the road, you're welcome to have it.

Any questions?

Link: Exotic disasters are serious

8 polymathwannabe 06 October 2014 06:14PM

Petrov Day Reminder

8 Eneasz 26 September 2014 01:57PM

9/26 is Petrov Day. It is the time of year when we celebrate the world not being destroyed. Let your friends and family know.



8 snarles 22 September 2014 06:21PM

As seen in other threads, people disagree on whether CEV exists, and if it does, what it might turn out to be.


It would be nice to try to categorize common speculations about CEV.

1a. CEV doesn't exist, because human preferences are too divergent

1b. CEV doesn't even exist for a single human 

1c. CEV does exist, but it results in a return to the status quo

2a. CEV results in humans living in a physical (not virtual reality) utopia

2b. CEV results in humans returning to a more primitive society free of technology

2c. CEV results in humans living together in a simulation world, where most humans do not have god-like power

(the similarity between 2a, 2b, and 2c is that humans are still living in the same world, similar to traditional utopia scenarios)

3. CEV results in a wish for the annihilation of all life, or maybe the universe

4a. CEV results in all humans granted the right to be the god of their own private simulation universe (once we acquire the resources to do so)

4b. CEV can be implemented for "each salient group of living things in proportion to that group's moral weight"

5. CEV results in all humans agreeing to be wireheaded (trope)

6a. CEV results in all humans agreeing to merge into a single being and discarding many of the core features of humankind which have lost their purpose (trope)

6b. CEV results in humans agreeing to cease their own existence while also creating a superior life form--the outcome is similar to 6a, but the difference is that here, humans do not care about whether they are individually "merged"

7. CEV results in all/some humans willingly forgetting/erasing their history, or being indifferent to preserving history so that it is lost (compatible with all previous tropes)

Obviously there are too many possible ideas (or "tropes") to list, but perhaps we could get a sense of which ones are the most common in the LW community.  I leave it to someone else to create a poll supposing they feel they have a close to complete list, or create similar topics for AI risk, etc.

EDIT: Added more tropes, changed #2 since it was too broad: now #2 refers to CEV worlds where humans live in the "same world"

LessWrong's attitude towards AI research

8 Florian_Dietz 20 September 2014 03:02PM

AI friendliness is an important goal and it would be insanely dangerous to build an AI without researching this issue first. I think this is pretty much the consensus view, and that is perfectly sensible.

However, I believe that we are making the wrong inferences from this.

The straightforward inference is "we should ensure that we completely understand AI friendliness before starting to build an AI". This leads to a strongly negative view of AI researchers and scares them away. But unfortunately reality isn't that simple. The goal isn't "build a friendly AI", but "make sure that whoever builds the first AI makes it friendly".

It seems to me that it is vastly more likely that the first AI will be built by a large company, or as a large government project, than by a group of university researchers, who just don't have the funding for that.

I therefore think that we should try to take a more pragmatic approach. The way to do this would be to focus more on outreach and less on research. It won't do anyone any good if we find the perfect formula for AI friendliness on the same day that someone who has never heard of AI friendliness before finishes his paperclip maximizer.

What is your opinion on this?

Please recommend some audiobooks

7 Delta 10 October 2014 01:34PM

Hi All,

I've got into audiobooks lately and have been enjoying listening to David Fitzgerald's Nailed! and his Heretics Guide to Mormonism, along with Greta Christina's "Why Are You Atheists So Angry?" and Laura Bates's "Everyday Sexism", which were all very good. I was wondering what other illuminating and engaging books might be recommended, ideally ones available as audiobooks on Audible.

I've already read The Selfish Gene, The God Delusion and God Is Not Great in book form as well, so it might be time for something not specifically religion-related, unless it has some interesting new angle.

After Nailed and Everyday Sexism were really illuminating I'm now thinking there must be lots of other must-read books out there and wondered what people here might recommend. Any suggestions would be appreciated.

Thanks for your time.

October 2014 Bragging thread.

7 Joshua_Blaine 07 October 2014 06:20PM

So, to quote myself all those months ago when I first had this idea (I haven't actually posted one of these since the original. funny how that happens):

In an attempt to encourage more people to actually do awesome things (a la instrumental rationality), I am proposing a new monthly thread (can be changed to bi-weekly, should that be demanded). Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.

Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesomest thing they've done all month. Not things you will do. Not things you are working on. Things you have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.

Anything not mentioned in a previous bragging thread is still fair game, even if it didn't actually happen in the last month.

So, what's the coolest thing you've done this month?

A possible tax efficient swap mechanism for charity

7 blogospheroid 05 October 2014 12:21PM

I had an idea a while ago, which sounded simple to me, but searching with certain keywords did not yield appropriate results, so I am presenting it for discussion on LW. Please let me know if something like this already exists. Please also tell me whether I should cross-post it to the Effective Altruism forum, or whether it shares enough users with LW that it need not be repeated.


Two people, A and B, living in different tax jurisdictions I and J respectively, want to contribute to organizations M and N qualifying for tax exemption in the other person's jurisdiction, i.e. M qualifies in J and N qualifies in I. For the purpose of this demo, let's assume they intend to contribute the same amounts.

They "swap" their charities, i.e. A contributes to N and B contributes to M, and produce receipts to that effect from the respective organizations.

This gains them 10% to 20% more money (via tax deductions) compared to contributing directly to their preferred charities, which do not qualify for exemption in their own jurisdictions.
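One way to read that figure: if the swapped gift becomes tax-deductible at the donor's marginal rate, the same out-of-pocket cost funds a larger gross donation. A minimal sketch of the arithmetic (the 15% marginal rate is illustrative, not from the post):

```python
def boosted_donation(net_cost, marginal_tax_rate):
    """Gross donation affordable for a given out-of-pocket cost
    when the gift is tax-deductible at the donor's marginal rate."""
    return net_cost / (1 - marginal_tax_rate)

# With a 15% marginal rate, 100 out of pocket funds about 117.6 of
# charity, versus 100 for a non-deductible gift -- roughly the
# 10-20% boost mentioned above.
gain = boosted_donation(100, 0.15) / 100 - 1
```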

So, the idea is to create a website where people can post such an intent to contribute to cross-national charities, and can reliably present receipts that will be acceptable to all concerned.

The main use I envisage for such swaps would be science supporters in the developing world, wanting to contribute to research happening in the developed world, swapping with EAs wanting to gain a bigger bang for their buck in the developing world. This potentially reduces the need for a lot of charities to seek out tax exemption in multiple jurisdictions.
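The core matching rule above can be sketched in a few lines: pair two intents whenever each donor's preferred charity is exempt in the other's jurisdiction, then have each donor give to the other's charity. A toy sketch of the simplest 1:1, equal-amount case (the field names and greedy pairing are my own assumptions, not a real implementation):

```python
from dataclasses import dataclass

@dataclass
class Intent:
    donor: str
    jurisdiction: str        # where the donor pays tax
    charity: str             # the donor's preferred charity
    charity_exempt_in: str   # jurisdiction where that charity is deductible
    amount: float

def find_swaps(intents):
    """Greedily pair intents so each donor gives to the other's charity,
    which is deductible in their own jurisdiction (1:1 amounts only)."""
    swaps = []
    used = set()
    for i, a in enumerate(intents):
        for j, b in enumerate(intents):
            if j <= i or i in used or j in used:
                continue
            if (a.charity_exempt_in == b.jurisdiction
                    and b.charity_exempt_in == a.jurisdiction
                    and a.amount == b.amount):
                # a funds b's charity and vice versa; both get deductions.
                swaps.append((a.donor, b.charity, b.donor, a.charity))
                used.update({i, j})
    return swaps

# A (taxed in I) prefers M, which is only exempt in J;
# B (taxed in J) prefers N, which is only exempt in I.
intents = [
    Intent("A", "I", "M", "J", 500.0),
    Intent("B", "J", "N", "I", 500.0),
]
swaps = find_swaps(intents)  # A donates to N, B donates to M
```

The non-1:1 variants mentioned below (unequal amounts, multi-way rings, currency mismatches) would replace this greedy pairing with a real clearing mechanism, which is where most of the open questions live.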

Avenues for further research

Question on the basic idea


  • Do both charities have to be acceptable to both donors, or are neutral and maybe even "hostile" swaps possible? How much does that complicate matters?
  • A certain cut of the proceeds seems to be the simplest way for the website to operate, but will it be acceptable to the users?
  • Might this be construed as illegal in certain jurisdictions? After all, to be honest, it is a tax avoidance scheme.


Logistics questions


  • The 1:1:1:1 case for person to jurisdictions to causes to "the time of swap" is the simplest. There are many possible complications which can allow more charity to be funneled, but require the website hosts to be exposed to non-trivial amounts of risk. For example, in one exchange more dollars are offered for charity than euro-equivalents, while in another swap more euro-equivalents are offered than dollars. This can balance, but it is more complicated.
  • Do the accounts need to "balance"? Will a non 1:1 ratio be acceptable for certain supporters of causes?
  • Foreign exchange fluctuations affect amounts of money donated and may cause some unnecessary heartburn in some cases.
  • Times of feeling charitable may vary and may prevent markets from clearing. Christians may feel more charitable near Christmas or Easter, and Muslims during Ramadan.
  • Might this require all charities to get themselves a digital signature? What are the other avenues to getting a reliable receipt from charities?
  • If both payments are routed through the website/entity, then might unnecessary forex changes remove a lot of value? Could crypto-currency style atomic swaps help or would they introduce unnecessary complexity that people would rather not be bothered with.
  • Might it complicate the relationship of donors with charities to the extent that the gain is lost in extra cost to reach out?

If such an institution is not already there, then after legal considerations, I think supporting such a website could be a high value investment for effective altruists as it would lead to a 10% to 20% boost to the charity kitty.

[EDIT : edited a little for clarity and grammar. added one more doubt]


Books on consciousness?

7 mgg 23 September 2014 10:28PM

Does LW have a consensus on which books are worthwhile to read regarding consciousness? I read a small intro (Consciousness: A Very Short Introduction, Susan Blackmore, Oxford University Press), and the summary seems to be "Consciousness is pretty damn weird and no one seems to have much of a handle on it". As a non-technical layman, are there any useful books for me to read on the subject?

(I have started reading Daniel Dennett's Intuition Pumps, and I'm a bit torn. He seems highly respected by good scientists, but I feel that if the book didn't have his name on it, I would be well on my way to dismissing it. Are Dennett's earlier works on consciousness a good read?)

Discussion of "What are your contrarian views?"

7 Metus 20 September 2014 12:09PM

I'd like to use this thread to review the "What are your contrarian views?" thread, since I feel the meta discussion there was drowned out by the intended content. What can be done better with the voting system? Should threads like these be a regular occurrence? What have you specifically learned from that thread? Did you like it at all?


Usual voting rules apply.

[Link] Why Science Is Not Necessarily Self-Correcting

6 ChristianKl 13 October 2014 01:51PM

"Why Science Is Not Necessarily Self-Correcting", by John P. A. Ioannidis:

The ability to self-correct is considered a hallmark of science. However, self-correction does not always happen to scientific evidence by default. The trajectory of scientific credibility can fluctuate over time, both for defined scientific fields and for science at-large. History suggests that major catastrophes in scientific credibility are unfortunately possible and the argument that “it is obvious that progress is made” is weak. Careful evaluation of the current status of credibility of various scientific fields is important in order to understand any credibility deficits and how one could obtain and establish more trustworthy results. Efficient and unbiased replication mechanisms are essential for maintaining high levels of scientific credibility. Depending on the types of results obtained in the discovery and replication phases, there are different paradigms of research: optimal, self-correcting, false nonreplication, and perpetuated fallacy. In the absence of replication efforts, one is left with unconfirmed (genuine) discoveries and unchallenged fallacies. In several fields of investigation, including many areas of psychological science, perpetuated and unchallenged fallacies may comprise the majority of the circulating evidence. I catalogue a number of impediments to self-correction that have been empirically studied in psychological science. Finally, I discuss some proposed solutions to promote sound replication practices enhancing the credibility of scientific results as well as some potential disadvantages of each of them. Any deviation from the principle that seeking the truth has priority over any other goals may be seriously damaging to the self-correcting functions of science.


