Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open thread, Jan. 26 - Feb. 1, 2015

1 Gondolinian 26 January 2015 12:46AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Previous Open Thread

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

What are the resolution limits of medical imaging?

4 oge 25 January 2015 10:57PM

To all my physicists in the house, will it ever be possible for a device to scan the contents of a human head at the molecular level (say, 5 x 5 x 5nm) while the subject is still alive? I don't have a physics background, so if you could also just point me to the materials I need to read to be able to answer the question, that would be wonderful as well.


The background: I want to live to see the far future and so I'm researching the feasibility of alternatives to cryonics that'll let people "back up" themselves at regular intervals rather than at the point of death. If this is even theoretically possible then I can direct my time and donations towards medical imaging researchers. If not then I'll continue to support cryonics and plastination research.


I'm looking forward to your responses!

LINK: Superrationality and DAOs

2 somnicule 24 January 2015 09:47AM

The cryptocurrency ethereum is mentioned here occasionally, and I'm not surprised to see an overlap in interests from that sphere. Vitalik Buterin has recently published a blog post discussing some ideas regarding how smart contracts can be used to enforce superrationality in the real world, and which cases those actually are. 

Weekly LW Meetups

2 FrankAdamek 23 January 2015 07:20PM

This summary was posted to LW Main on January 16th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


New, Brief Popular-Level Introduction to AI Risks and Superintelligence

16 LyleN 23 January 2015 03:43PM

The very popular blog Wait But Why has published the first part of a two-part explanation/summary of AI risks and superintelligence, and it looks like the second part will be focused on Friendly AI. I found it very clear, reasonably thorough and appropriately urgent without signaling paranoia or fringe-ness. It may be a good article to share with interested friends.

Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time"

47 ciphergoth 22 January 2015 08:21PM

Steven Levy: Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?

Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.

"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21

Purchasing research effectively open thread

7 John_Maxwell_IV 21 January 2015 12:24PM

Many of the biggest historical success stories in philanthropy have come in the form of funding for academic research.  This suggests that the topic of how to purchase such research well should be of interest to effective altruists.  Less Wrong survey results indicate that a nontrivial fraction of LW has firsthand experience with the academic research environment.  Inspired by the recent Elon Musk donation announcement, this is a thread for discussion of effectively using money to enable important, useful research.  Feel free to brainstorm your own questions and ideas before reading what's written in the thread.

The Unique Games Conjecture and FAI: A Troubling Obstacle

0 27chaos 20 January 2015 09:46PM

I am not a computer scientist and do not know much about complexity theory. However, it's a field that interests me, so I occasionally browse some articles on the subject. I was brought to https://www.simonsfoundation.org/mathematics-and-physical-science/approximately-hard-the-unique-games-conjecture/ by a link on Scott Aaronson's blog, and read the article to reacquaint myself with the Unique Games Conjecture, which I had partially forgotten about. If you are not familiar with the UGC, that article will explain it to you better than I can.

One phrase in the article stuck out to me: "there is some number of colors k for which it is NP-hard (that is, effectively impossible) to distinguish between networks in which it is possible to satisfy at least 99% of the constraints and networks in which it is possible to satisfy at most 1% of the constraints". I think this sentence is concerning for those interested in the possibility of creating FAI.

It is impossible to perfectly satisfy human values, as matter and energy are limited, and so will be the capabilities of even an enormously powerful AI. Thus, in trying to maximize human happiness, we are dealing with a problem that's essentially isomorphic to the UGC's coloring problem. Additionally, our values themselves are ill-formed. Human values are numerous, ambiguous, even contradictory. Given the complexities of human value systems, I think it's safe to say we're dealing with a particularly nasty variation of the problem, worse than what computer scientists studying it have dealt with.

Not all specific instances of complex optimization problems are subject to the UGC and thus NP-hard, of course, so this does not in itself mean that building an FAI is impossible. Also, even if maximizing human values is NP-hard (or maximizing the probability of maximizing them, and so on up the regress), we can still assess a machine's code and actions heuristically. However, even the best heuristics are limited, as the UGC itself demonstrates. At bottom, all heuristics must rely on inflexible assumptions of some sort.
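To make the coloring problem from the article concrete, here is a minimal sketch (my own illustration, not from the article or the post) of a unique-games instance: each edge of the network carries a permutation of the k colors, and a constraint (u, v, perm) is satisfied when vertex v's color equals the permutation applied to vertex u's color. The conjecture concerns how hard it is to tell instances where almost all constraints can be satisfied from instances where almost none can.

```python
def satisfied_fraction(constraints, coloring):
    """Fraction of unique-games constraints a coloring satisfies.

    Each constraint is a tuple (u, v, perm): two vertex indices and a
    permutation of the k colors given as a list. The constraint is
    satisfied when coloring[v] == perm[coloring[u]].
    """
    ok = sum(1 for u, v, perm in constraints
             if coloring[v] == perm[coloring[u]])
    return ok / len(constraints)


# A fully satisfiable toy instance with k = 3 colors: a 3-cycle whose
# edges all demand "shift the color by one".
shift = [1, 2, 0]  # perm[c] = (c + 1) % 3
constraints = [(0, 1, shift), (1, 2, shift), (2, 0, shift)]

print(satisfied_fraction(constraints, [0, 1, 2]))  # 1.0 -- all satisfied
print(satisfied_fraction(constraints, [0, 0, 0]))  # 0.0 -- none satisfied
```

Verifying a proposed coloring is easy, as above; the UGC's claim is about the hardness of finding (or even approximating) a good one.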


Superintelligence 19: Post-transition formation of a singleton

5 KatjaGrace 20 January 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.

Welcome. This week we discuss the nineteenth section in the reading guide: post-transition formation of a singleton. This corresponds to the last part of Chapter 11.

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Post-transition formation of a singleton?” from Chapter 11


  1. Even if the world remains multipolar through a transition to machine intelligence, a singleton might emerge later, for instance during a transition to a more extreme technology. (p176-7)
  2. If everything is faster after the first transition, a second transition may be more or less likely to produce a singleton. (p177)
  3. Emulations may give rise to 'superorganisms': clans of emulations who care wholly about their group. These would have an advantage because they could avoid agency problems, and make various uses of the ability to delete members. (p178-80) 
  4. Improvements in surveillance resulting from machine intelligence might allow better coordination, however machine intelligence will also make concealment easier, and it is unclear which force will be stronger. (p180-1)
  5. Machine minds may be able to make clearer precommitments than humans, changing the nature of bargaining somewhat. Maybe this would produce a singleton. (p183-4)

Another view

Many of the ideas around superorganisms come from Carl Shulman's paper, Whole Brain Emulation and the Evolution of Superorganisms. Robin Hanson critiques it:

...It seems to me that Shulman actually offers two somewhat different arguments, 1) an abstract argument that future evolution generically leads to superorganisms, because their costs are generally less than their benefits, and 2) a more concrete argument, that emulations in particular have especially low costs and high benefits...

...On the general abstract argument, we see a common pattern in both the evolution of species and human organizations — while winning systems often enforce substantial value sharing and loyalty on small scales, they achieve much less on larger scales. Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world. Value coordination seems hard, especially on larger scales.

This is not especially puzzling theoretically. While there can be huge gains to coordination, especially in war, it is far less obvious just how much one needs value sharing to gain action coordination. There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination. It is also far from obvious that values in generic large minds can easily be separated from other large mind parts. When the parts of large systems evolve independently, to adapt to differing local circumstances, their values may also evolve independently. Detecting and eliminating value divergences might in general be quite expensive.

In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.

On Shulman’s more concrete argument, his suggested single-version approach to em value sharing, wherein a single central em only allows (perhaps vast numbers of) brief copies, can suffer from greatly reduced innovation. When em copies are assigned to and adapt to different tasks, there may be no easy way to merge their minds into a single common mind containing all their adaptations. The single em copy that is best at doing an average of tasks, may be much worse at each task than the best em for that task.

Shulman’s other concrete suggestion for sharing em values is “psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties.” But genetic and cultural evolution has long tried to make human minds fit well within strongly loyal teams, a task to which we seem well adapted. This suggests that moving our minds closer to a “borg” team ideal would cost us somewhere else, such as in our mental agility.

On the concrete coordination gains that Shulman sees from superorganism ems, most of these gains seem cheaply achievable via simple long-standard human coordination mechanisms: property rights, contracts, and trade. Individual farmers have long faced starvation if they could not extract enough food from their property, and farmers were often out-competed by others who used resources more efficiently.

With ems there is the added advantage that em copies can agree to the “terms” of their life deals before they are created. An em would agree that it starts life with certain resources, and that life will end when it can no longer pay to live. Yes there would be some selection for humans and ems who peacefully accept such deals, but probably much less than needed to get loyal devotion to and shared values with a superorganism.

Yes, with high value sharing ems might be less tempted to steal from other copies of themselves to survive. But this hardly implies that such ems no longer need property rights enforced. They’d need property rights to prevent theft by copies of other ems, including being enslaved by them. Once a property rights system exists, the additional cost of applying it within a set of em copies seems small relative to the likely costs of strong value sharing.

Shulman seems to argue both that superorganisms are a natural endpoint of evolution, and that ems are especially supportive of superorganisms. But at most he has shown that ems organizations may be at a somewhat larger scale, not that they would reach civilization-encompassing scales. In general, creatures who share values can indeed coordinate better, but perhaps not by much, and it can be costly to achieve and maintain shared values. I see no coordinate-by-values free lunch...


1. The natural endpoint

Bostrom says that a singleton is a natural conclusion of the long-term trend toward larger scales of political integration (p176). It seems helpful here to be more precise about what we mean by singleton. Something like a world government does seem to be a natural conclusion to long-term trends. However, this seems different from the kind of singleton I took Bostrom to be talking about previously. A world government would by default only make a certain class of decisions, for instance about global-level policies. There has been a long-term trend for the largest political units to become larger; however, there have always been smaller units as well, making different classes of decisions, down to the individual. I'm not sure how to measure the mass of decisions made by different parties, but it seems like individuals may be making more decisions more freely than ever, and the large political units have less ability than they once did to act against the will of the population. So the long-term trend doesn't seem to point to an overpowering ruler of everything.

2. How value-aligned would emulated copies of the same person be?

Bostrom doesn't say exactly how 'emulations that were wholly altruistic toward their copy-siblings' would emerge. It seems to be some combination of natural 'altruism' toward oneself and selection for people who react to copies of themselves with extreme altruism (confirmed by a longer interesting discussion in Shulman's paper). How easily one might select for such people depends on how humans generally react to being copied. In particular, whether they treat a copy like part of themselves, or merely like a very similar acquaintance.

The answer to this doesn't seem obvious. Copies seem likely to agree strongly on questions of global values, such as whether the world should be more capitalistic, or whether it is admirable to work in technology. However I expect many—perhaps most—failures of coordination come from differences in selfish values—e.g. I want me to have money, and you want you to have money. And if you copy a person, it seems fairly likely to me the copies will both still want the money themselves, more or less.

From other examples of similar people—identical twins, family, people and their future selves—it seems people are unusually altruistic to similar people, but still very far from 'wholly altruistic'. Emulation siblings would be much more similar than identical twins, but who knows how far that would move their altruism?

Shulman points out that many people hold views about personal identity that would imply that copies share identity to some extent. The translation between philosophical views and actual motivations is not always complete however.

3. Contemporary family clans

Family-run firms are a place to get some information about the trade-off between reducing agency problems and having access to a wide range of potential employees. From a brief perusal of the internet, it seems ambiguous whether they do better. One could try to separate out the factors that help them do better or worse.

4. How big a problem is disloyalty?

I wondered how big a problem insider disloyalty really was for companies and other organizations. Would it really be worth all this loyalty testing? I can't find much about it quickly, but 59% of respondents to a survey apparently said they had some kind of problems with insiders. The same report suggests that a bunch of costly initiatives such as intensive psychological testing are currently on the table to address the problem. Also apparently it's enough of a problem for someone to be trying to solve it with mind-reading, though that probably doesn't say much.

5. AI already contributing to the surveillance-secrecy arms race

Artificial intelligence will help with surveillance sooner and more broadly than just in observing people's motives, e.g. here and here.

6. SMBC is also pondering these topics this week

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. What are the present and historical barriers to coordination, between people and organizations? How much have these been lowered so far? How much difference has it made to the scale of organizations, and to productivity? How much further should we expect these barriers to be lessened as a result of machine intelligence?
  2. Investigate the implications of machine intelligence for surveillance and secrecy in more depth.
  3. Are multipolar scenarios safer than singleton scenarios? Muehlhauser suggests directions.
  4. Explore ideas for safety in a singleton scenario via temporarily multipolar AI. e.g. uploading FAI researchers (See Salamon & Shulman, “Whole Brain Emulation, as a platform for creating safe AGI.”)
  5. Which kinds of multipolar scenarios would be more likely to resolve into a singleton, and how quickly?
  6. Can we get whole brain emulation without producing neuromorphic AGI slightly earlier or shortly afterward? See section 3.2 of Eckersley & Sandberg (2013).
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the 'value loading problem'. To prepare, read “The value-loading problem” through “Motivational scaffolding” from Chapter 12. The discussion will go live at 6pm Pacific time next Monday 26 January. Sign up to be notified here.

Optimal eating (or rather, a step in the right direction)

4 c_edwards 19 January 2015 01:35AM

Over the past few months I've been working to optimize my life.  In this post I describe my attempt to optimize my day-to-day cooking and eating - my goal with this post is to get input and to offer a potential template for people who aren't happy with their current cooking/eating patterns.  I'm a) still pretty new to LW, and b) not a nutritionist; I am not claiming that this is optimal, only that it is a step in the right direction for me.  I'd love suggestions/advice/feedback.


How do I quantify a successful cooking/eating plan?


"Healthy" is a broad term.  I'm not interested in making food a complicated or stressful component of my life - quite the opposite.  Healthy means that I feel good, and that I'm providing my body with a good mix of building blocks (carbs, proteins, fats) and nutrients.  This means I want most/all meals to include some form of complex carbs, protein, and either fruits or veggies or both.  As I'm currently implementing an exercise plan based on the LW advice for optimal exercising, I'm aiming to get ~120 grams of protein per day (0.64 g/lb bodyweight/day).  There seems to be a general consensus that absorption of nutrients from whole foods is a) higher, and b) less dangerous, so when possible I'm trying to make foods from basic components instead of buying pre-processed stuff.
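The protein target is simple arithmetic; a trivial sketch (the function name is mine) makes the bodyweight conversion explicit:

```python
def daily_protein_grams(bodyweight_lb, grams_per_lb=0.64):
    """Daily protein target in grams, from bodyweight in pounds."""
    return bodyweight_lb * grams_per_lb

# The ~120 g/day figure corresponds to a bodyweight of roughly
# 120 / 0.64 = 187.5 lb.
print(daily_protein_grams(187.5))  # 120.0
```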

I have a health condition called hypoglycemia (low blood sugar) that makes me cranky/shaky/weak/impatient/foolish/tired when I am hungry, and can be triggered by eating simple sugars.  So, for me personally, a healthy diet includes rarely feeling hungry and rarely eating simple sugars (especially on their own - if eaten with other food the effect is much less severe).  This also means trying to focus on forms of fruit and complex carbs that have low glycemic indexes (yams are better than baked potatoes, for example).  I would guess that these attributes would be valuable for anyone, but for me they are a very high priority.

I'm taking some advice from the "Exos" (formerly Core Performance) fitness program, as described in the book Core performance essentials. One of the suggestions from this that I'm trying to use here (aside from the above complex carb+protein+fruit/veg meal structure) is to "eat the rainbow every day" - that is, mix up the fruits and veggies you eat, ideally getting as many colors per day as possible.  I'm also taking advice from the (awesome) LW article on increasing longevity: "eat fish, nuts, eggs, fruit, dark chocolate."

When possible I'm trying to focus on veggies that are particularly nutrient dense - spinach, bok choy, tomatoes, etc.  I am (for now) avoiding a few food products that I have heard (but have not yet confirmed!) are linked to potential health issues: tofu, whey proteins.  Note that I do not trust my information on the potential risks of these foods, but as neither of these are important to my diet anyways, I have put researching them as a low priority compared to everything else I want to learn.

So to recap: don't stress about it, but try to do complex carbs, proteins (120g/day for me), fruits, and veggies in every meal, avoid sugars where possible (although dark chocolate is good).  Fish, nuts and eggs are high priority proteins.


I'm on a fairly limited budget.  This means trying to focus on seasonal fruits and veggies (which are typically cheaper, and as an added bonus are likely healthier than the same fruit/veggie out of season), aiming for less expensive meats, and not trying to eat organically (probably worth a separate discussion of organic vs. not, meat vs. not).  This also means making my own foods when the price benefit is high and the time cost is low.  I often make my own breads, for example (using a breadmaker) - it takes about 10 minutes of my time and directly saves me $3 or more compared to an equivalent-quality loaf (many breads can be made for ~$0.50-$1), plus saves me either the time of shopping multiple times per week for fresh bread or the grossness of eating bread I've frozen to keep it from molding.  Additionally, my budget means that I prefer that my weekly meal plan not depend on eating out or buying pre-made foods.


While I'm on a fairly limited monetary budget, I'm on a very limited time budget.  Cooking can be fun for me, but I prefer that my weekly schedule not REQUIRE much time - I can always replace a quick meal with a longer fun one if I feel like it.

The Plan

My general approach is to split my meals between really quick-and-easy (like chickpeas, canned salmon, and olive oil over prewashed spinach with an apple or two on the side) and batch foods where a somewhat longer time investment is split over many nights (like lentil stew in a crockpot).

To keep myself reasonably full I need about 6-7 meals per day: breakfast, snack, lunch, (optional snack depending on schedule), post-workout snack, dinner, snack.  These don't all need to be large, but I'm unhappy/unproductive without something for each of those meals, so I might as well make it easy to eat them.

In general I've found the following system to fulfill my criteria of success (healthy, cheap, quick), and it's been much less stressful to have a general plan in place - I can more easily figure out my shopping list, and it's not hard to ensure I always have food ready when I need it.


Quick and easy is the key here.  I typically have one of:


  1. Yogurt with sunflower seeds and/or nuts, a handful of rolled oats (yes, uncooked, but add a bit of water at the end to make them tolerable), and sometimes some fruit on top.  Add honey for sweetener as needed (I typically don't, due to hypoglycemia).
  2. Bread (often homemade, but whatever floats your boat) with some peanut butter on top, a banana or other fruit item on the side.
  3. (if I have the time) Scrambled eggs mixed with chopped broccoli or bell peppers, bread, and a piece of fruit.
(also a big glass of water, which everyone seems to think is important, and coffee, although I'm considering transitioning to a different caffeine source)



I have three "batch" meals here (I make enough for 3+ lunches, so I cook lunches ~twice a week):


  1. Salmon mash plus "spinach salad" (spinach with olive oil and either lemon juice or balsamic vinegar), fruit item.  Salmon mash is a mix of cooked rice, canned salmon, black olives (for flavor - not sure that they're useful nutritionally), canned black or garbanzo beans, and pasta sauce.  It sounds disgusting, but I find it pretty decent, and it's very cheap, filling, and well balanced in terms of carbs and proteins.  I do proportions of 1 cup rice, 1 large can salmon, 1-2 cans beans, 1/2 can black olives, and 1/2 can pasta sauce (typically I do a double batch, which lasts me about 4-5 lunches.  Your mileage may vary).
  2. Baked yams and boneless skinless chicken breasts plus spinach salad or other veggies, fruit item
  3. pasta salad: pasta, raw chopped broccoli, tomatoes (grape/cherry tomatoes are easiest), chopped bell peppers, sliced ham, olives (for flavor again - not important nutritionally, I think), and some olive oil (you could use Caesar salad dressing if you like more flavor).  
If I haven't prepped a batch lunch, I just put salmon and beans on top of spinach, add a little olive oil, and throw in a slice of bread and a fruit on the side. Alternately, PBJ plus veggie and fruit.



I aim to make one batch dinner per week and have it last for 4-5 meals, and then have several quick-and-easy dinners to fill the gap (this also makes it easy to accommodate dinners out or food related social gatherings).

Some ideas for Batch Dinners (crock pots are your friends here):


  • Lentil stew, bread, sliced carrots or bell peppers, fruit item (apple, banana, grapefruit, whatever).  That lentil soup recipe is ridiculously cheap, healthy, and quite tasty.
  • The potato-and-cabbage based rumpledethumps recipe (which freezes very well - make a huge batch and throw half of it in the freezer), plus a meat of some sort, a fruit item, and maybe a vegetable side
  • Other crock pot soups: chicken tortilla soup, chili, stew.  Add a veggie on the side, a fruit item, and maybe a slice of bread.
  • Large stirfry (these often take a bit longer than crock pot meals), rice or noodles, fruit on the side.
Note that since I only make one batch dinner per week, those bullets are sufficient to cover a month (and depending on what your tolerance for repetition is, that might be enough for years).

Some ideas for quick-and-easy dinners:
  • Salad made from salad greens, some form of precooked meat (salmon is good), beans, maybe sliced avocado and tomato, maybe sunflower seeds.
  • Rice/pasta; scrambled/cooked eggs or baked chicken; a munching veggie like carrots, raw broccoli, or bell pepper; fruit item.  Note on chicken: while there is a reasonably long elapsed time from start to finish, your involvement doesn't need to take long.  Typically I have a bunch of boneless skinless chicken breasts in the freezer - pull one out, throw it in a ziplock with soy sauce, garlic powder, and ginger (or whatever other marinade you prefer), put the ziplock in a bowl of warm water, and preheat the oven to 370ish.  Once the chicken is thawed, put it in a pan and cook it in the oven.  Ideally do enough rice/pasta and chicken for several nights.



In general my snacks are super simple: just combine some kind of munching veggie (carrots, bell pepper, raw broccoli, snap peas, etc) with hummus, some fruit item, something protein-y (handful of nuts or sunflower seeds, usually) and (optionally) a slice of bread or other carb source.  For whatever snack I have after a workout, I want to make sure there is plenty of protein, so I include either hard boiled eggs, baked chicken, or salmon (on bread).


So over the weekend, when I plan my week and go shopping, I choose the following:


  1. One batch dinner to cook (usually I need to buy the stuff for this)
  2. One type of quick-and-easy dinner to eat for 2-3 nights (often using staples/leftovers I already have)
  3. Two types of batch lunch to make from my list of three.
  4. 2-3 kinds of munching veggies - enough veggies total to include in ~3 meals per day (so like 6ish carrots per day, or 2 bell peppers, etc).  Think carrots, raw broccoli, bell peppers, green beans, sugar snap peas, cherry tomatoes, etc.
  5. 2-3 kinds of fruit items.  Think apples, bananas, grapefruit, grapes, oranges, etc.
  6. Two kinds of protein for post-workout snacks, chosen from: eggs, chicken, salmon
  7. Bread recipes to make 2-3 loaves (which might just be a single recipe repeated)
I also make sure I have enough yogurt and other breakfast supplies to get me through the week.  I drink milk with most of my meals at home, so I check my milk supply as well.
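For the programmatically inclined, the weekend selection above is really just sampling from a few fixed lists. A playful sketch (all option lists and counts are illustrative, drawn from the post's own examples):

```python
import random

# Options drawn from the meal ideas above; counts follow the weekend checklist.
OPTIONS = {
    "batch dinner": ["lentil stew", "rumpledethumps", "crock pot soup", "stirfry"],
    "quick dinner": ["salmon salad", "rice + chicken + munching veggie"],
    "batch lunch": ["salmon mash", "yams + chicken", "pasta salad"],
    "munching veggie": ["carrots", "raw broccoli", "bell peppers", "snap peas"],
    "fruit": ["apples", "bananas", "grapefruit", "oranges"],
    "workout protein": ["eggs", "chicken", "salmon"],
}
COUNTS = {"batch dinner": 1, "quick dinner": 1, "batch lunch": 2,
          "munching veggie": 3, "fruit": 3, "workout protein": 2}

def weekly_plan(options=OPTIONS, counts=COUNTS):
    """Pick this week's items; the shopping list practically writes itself."""
    return {cat: random.sample(opts, counts[cat])
            for cat, opts in options.items()}

for category, picks in weekly_plan().items():
    print(f"{category}: {', '.join(picks)}")
```

Rotating the picks each week also takes care of the "eat the rainbow" variety goal with no extra thought.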

Boom!  Planning done, shopping list practically writes itself!  Once per week I make a small effort on cooking a batch dinner, two or three nights per week I put an extremely minimal effort into quick-and-easy dinners, two evenings per week I make a batch of lunch foods and maybe prep workout protein (hard boil eggs or bake chicken breasts), and otherwise my "cooking" consists of taking things from the fridge and putting them onto a dish (and possibly microwaving).




I'm still tweaking my system, but it has been a marked improvement from the last-minute scrabbling and suboptimal meals that tended to characterize my eating before this.  It's also a big step up in terms of utility from the more elaborate and time-consuming meals I sometimes cooked to compensate for feelings of inadequacy generated by aforementioned scrabbling/suboptimal meals.  I tend to feel fairly energetic and healthy, and it's a huge reassurance to me to know that I always have food planned out and typically it's available to me without needing to do any cooking.  It appears that it's considerably cheaper, too, although there are several confounding factors that would also drive my grocery bills down (transitioning to not-organic foods, trying to hit sales, etc).

Are there things I'm missing?  Suggestions for meals?  (note that I'm a bit wary of meal-replacement shakes) Alternative systems that people have found to hit that sweet spot of healthy, quick, and inexpensive? Is this something that might be useful for you?

EDIT:  Tuna is high in mercury, and shouldn't be eaten in nearly the quantities I had originally planned.  I've replaced canned tuna with canned salmon.

Open thread, Jan. 19 - Jan. 25, 2015

3 Gondolinian 19 January 2015 12:04AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Previous Open Thread

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

[Link] An argument on colds

14 Konkvistador 18 January 2015 07:16PM


It's illegal to work around food when showing symptoms of contagious diseases. Why not the same for everyone else? Each person who gets a cold infects one other person on average. We could probably cut infection rates and the frequency of colds in half if sick people didn't come in to work.

And if we want better biosecurity, why not also require people to be able to reschedule flights if a doctor certifies they have a contagious disease?

Due to the 'externalities', the case seems very compelling.
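The "cut infections in half" claim can be made concrete with a toy branching-process calculation (the reproduction numbers here are illustrative, following the post's "one other person on average" figure):

```python
def expected_chain_size(r, max_generations=50):
    """Expected total infections from one seed case when each case
    infects r others on average (geometric series, subcritical r < 1)."""
    return sum(r ** g for g in range(max_generations))

# At r = 1, each case replaces itself and chains never die out on
# average. If keeping sick people home halves transmission (r = 0.5),
# one seed case yields only about 2 total cases before the chain dies:
print(round(expected_chain_size(0.5), 2))  # 2.0
```

This is of course a crude model; the real effect depends on how much of total transmission actually happens at work.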

Moving my commentary to a separate comment, so as to disambiguate votes on my commentary and the original argument.

LINK: Diseases not sufficiently researched

2 polymathwannabe 17 January 2015 04:03PM

This Chart Shows The Worst Diseases That Don't Get Enough Research Money

We have already covered this topic several times on LW, but what prompted me to link this was this remark:

Of course, where research dollars flow isn't —and shouldn't be— dictated simply in terms of which diseases lay claim to the most years, but also by, perhaps most importantly, where researchers see the most potential for a breakthrough.

[Edit: a former, dumber version of me had asked, "I wonder what criterion the author would prefer," before the correct syntax of the sentence was pointed out to me.]


... And Everyone Loses Their Minds

9 Ritalin 16 January 2015 11:38PM

Chris Nolan's Joker is a very clever guy, almost Monroesque in his ability to identify hypocrisy and inconsistency. One of his most interesting scenes in the film has him point out how people estimate horrible things differently depending on whether they're part of what's "normal", what's "expected", rather than on how inherently horrifying they are, or how many people are involved.

Soon people extrapolated this observation to other such apparent inconsistencies in human judgment, where a behaviour that once was acceptable, with a simple tweak or change in context, becomes the subject of a much more serious reaction.

I think there's rationalist merit in giving these inconsistencies a serious look. I intuit that there's some sort of underlying pattern to them, something that makes psychological sense, in the roundabout way that most irrational things do. I think that much good could come out of figuring out what that root cause is, and how to predict this effect and manage it.

Phenomena that come to mind are, for instance, from an Effective Altruism point of view, the expenses incurred in counter-terrorism (including some wars that were very expensive in treasure and lives) and the number of lives said expenses save, compared with the number of lives that could be saved by spending that same amount on improving road safety, increasing public healthcare spending where it would do the most good, building better lightning rods (in the USA you're four times more likely to be struck by lightning than killed by terrorists), or legalizing drugs.

What do y'all think? Why do people have their priorities all jumbled-up? How can we predict these effects? How can we work around them?

New LW Meetup: Dallas

2 FrankAdamek 16 January 2015 05:11PM

This summary was posted to LW Main on January 9th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

continue reading »

LINK: Guinea worm disease close to eradication

4 polymathwannabe 16 January 2015 04:08PM

The disease, that is, maybe not the worm itself. Anyway, Team Human scores its second point against Team Disease:


Slides online from "The Future of AI: Opportunities and Challenges"

12 ciphergoth 16 January 2015 11:17AM

In the first weekend of this year, the Future of Life institute hosted a landmark conference in Puerto Rico: "The Future of AI: Opportunities and Challenges". The conference was unusual in that it was not made public until it was over, and the discussions were under Chatham House rules. The slides from the conference are now available. The list of attenders includes a great many famous names as well as lots of names familiar to those of us on Less Wrong: Elon Musk, Sam Harris, Margaret Boden, Thomas Dietterich, all three DeepMind founders, and many more.

This is shaping up to be another extraordinary year for AI risk concerns going mainstream!

Learn Three Things Every Day

-6 helltank 16 January 2015 09:36AM

In the Game of Thrones series, there is an ongoing side plot in which a character is trained by a secretive organization to become an assassin. As part of her training, one of the senior assassins demands that she report to him three new things she has learnt every day. From the title of this article, you might infer that I am going to suggest that you do the same. I am, but with a crucial difference.

You see, my standards are higher than the Faceless Men's. Instead of filling up your list of learnt things with marginally useful items like gossip, I am going to take it up a notch and demand that you learn three USEFUL things a day. This is, of course, an entirely self-enforced challenge, and I'll let you decide on the definition of useful. Personally, I use the condition of [>50% probability that X will enrich my life in a significant way], but if you want, you can make up your own criteria for "useful".

This may seem trite or useless, or even obvious (if you're an eager and fast learner, like most LWers). Now stop and think hard. For the entirety of the past 30 days, have you ever had a day or two where you just slacked off and didn't learn much? Maybe it was New Year's Day, or your birthday, and instead of learning you decided to spend the whole day partying. Perhaps it was just a lazy Sunday and you couldn't be bothered to learn something and instead just spent the day playing video games or mountain skiing (although there are useful things to be learnt from those, too) or whatever you like to do in your spare time.

I haven't taken an official survey, but my belief (and do correct me if I am very wrong about this) is that on average there's at least one day in thirty in which you did not learn three new, useful things. I would consider that day as pretty much wasted from a truth-seeker's point of view. You did not move forward in your quest for knowledge, you did not sharpen your rationality skills (and they always need sharpening, no matter how good you are) and you did not become stronger mentally. That's 12 days in a year, which is more than enough for the average LWer to pick up at least one new skill: say, learning about game theory, to pick a random example. In that year, you have had a chance to gain the knowledge of game theory, and you threw it away.

The point of this exercise is not to make you sweat and do a "mental workout" every day. The point is to prevent days that are wasted. There is a nearly infinite amount of knowledge to collect, and we do not have nearly infinite time. Maybe it's just my Asian mentality speaking here, but every second counts and you are in effect racing against time to gain as much knowledge as possible and put it to good use before you die.

When doing this, you are not allowed to merely work on your projects, unless they also teach you something. If you are a non-programmer, and you begin learning Python, that's a new thing. If you're already fluent in Python, and you program in Python, that's not counted. With one exception: if you learn something through programming (maybe you thought up a nifty new way to sanitize user inputs while working on a database) then that counts. If you're a writer, and you write, that doesn't count. Unless, of course, by writing you learn things about worldbuilding, or plot development, or character development, that you didn't know before. Yes, this counts, even though it's not directly rationality-related, because it enriches your life: it helps you achieve your writing goals (that's also a good condition for usefulness, and is a good example of instrumental rationality).

Today, I've learnt about the concept of centered worlds, I have learnt about the policy of indifference in similar worlds and I have learnt the technique of "super-rationality" as a means to predict the behavior of other agents in acausal trade. What have you learnt today?

Do it now. Don't wait, or you will waste this day, which is 86400 countable seconds in which to learn things. In fact, I've given you a head start today, because you can count this article in your list of learnt things.

Good luck to you. Let's learn together.

[This is my first post on LW and I hope that I taught you something interesting and useful. Again, I'm new to posting, so if I violated some unspoken rule of etiquette, or if you think this post is obvious and shitty, feel free to vote me down. But do leave a comment explaining why you did, so I can add it to my list of learnt things.]

An example and discussion of extension neglect

10 emr 16 January 2015 06:10AM

I recently used an automatic tracker to learn how I was spending my time online. I learned that my perceptions were systematically biased: I spend less time than I thought on purely non-productive sites, and far more time on sites that are quasi-productive.

For example, I felt that I was spending too much time reading the news, but I learned that I spend hardly any time doing so. I didn't feel that I was spending much time reading Hacker News, but I was spending a huge amount of time there!

Is this a specific case of a more general error?

A general framing: "Paying too much attention to the grouping whose items have the most extreme quality, when the value of focusing on this grouping is eclipsed by the value of focusing on a larger grouping of less extreme items".

So in this case, once I had formed the desire to be more productive, I overestimated how much potential productive time I could gain by focusing on those sites that I felt were maximally non-productive, and underestimated the potential of focusing on marginally more productive sites.

In pseudo-technical terms: We think about items in groups. But then we think of the total value of a group as being closer to average_value than to average_value * size_of_group.
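The bias in pseudo-technical terms can be shown with a two-line calculation. The numbers below are invented purely to illustrate how the ranking flips when you multiply by set size:

```python
# Illustrative time-sink data: (minutes lost per visit, visits per week).
# Both figures are made-up examples, not measurements.
groups = {
    "pure time-wasters": (30, 2),        # extreme items, small set
    "quasi-productive sites": (10, 20),  # milder items, large set
}

def by_average(group):
    """The biased ranking: judge a group by its per-item extremity."""
    minutes, _count = group
    return minutes

def by_total(group):
    """The correct ranking: average value times size of group."""
    minutes, count = group
    return minutes * count

worst_by_average = max(groups, key=lambda g: by_average(groups[g]))
worst_by_total = max(groups, key=lambda g: by_total(groups[g]))
print(worst_by_average)  # pure time-wasters
print(worst_by_total)    # quasi-productive sites
```

Per item, the time-wasters look worse (30 vs. 10 minutes); in total, the quasi-productive sites cost over three times as much (200 vs. 60 minutes per week).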

This falls under the category of Extension Neglect, which includes errors caused by ignoring the size of a set. Other patterns in this category are:

  • Base rate neglect: Inferring the category of an item as if all categories were the same size.
  • The peak-end rule: Giving the value of the ordered group as a function of max_value and end_value.
  • Not knowing how set size interacts with randomness.

For the error given above, some specific examples might be:

  • Health: Focusing too much on eating dessert at your favorite restaurant; and not enough on eating pizza three times a week.
  • Love: Fights and romantic moments; daily interaction.
  • Stress: Public speaking; commuting
  • Ethics: Improbable dilemmas; reducing suffering (or doing anything externally visible)
  • Crime: Serial killers; domestic violence


Group Rationality Diary, January 16-31

2 therufs 16 January 2015 01:54AM

This is the public group rationality diary for January 16-31.

It's a place to record and chat about it if you have done, or are actively doing, things like: 

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.

Previous diary: January 1-15

Rationality diaries archive

Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial

51 ciphergoth 15 January 2015 04:33PM

We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity. 

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI-researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk's donation aims to support precisely this type of research: "Here are all these leading AI researchers saying that AI safety is important", says Elon Musk. "I agree with them, so I'm today committing $10M to support research aimed at keeping AI beneficial for humanity." 

[...] The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI-researchers Stuart Russell and Francesca Rossi. [...]

The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy  (a detailed list of examples can be found here [PDF]). "Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere", says FLI co-founder Viktoriya Krakovna. 

[...] Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories. 

Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Thursday January 15, 2015

Je suis Charlie

-18 loldrup 15 January 2015 08:27AM

After the terrorist attacks at Charlie Hebdo, conspiracy theories quickly arose about who was behind the attacks.
People who are critical of the West easily swallow such theories, while pro-West people just as easily find them ridiculous.

I guess we can agree that the most rational response would be to enter a state of aporia until sufficient evidence is at hand.

Yet very few people do so. People are guided by their previous understanding of the world when judging new information. It sounds like a fine Bayesian approach for getting through life, but for real scientific knowledge, we can't rely on *prior* reasoning (even though it might involve Bayesian reasoning). Real science works by investigating evidence.

So, how do we characterise the human tendency to jump to conclusions that have simply been supplied by one's sense of normativity? Is there a previously described bias that covers this case?

Selfish preferences and self-modification

4 Manfred 14 January 2015 08:42AM

One question I've had recently is "Are agents acting on selfish preferences doomed to having conflicts with other versions of themselves?" A major motivation of TDT and UDT was the ability to just do the right thing without having to be tied up with precommitments made by your past self - and to trust that your future self would just do the right thing, without you having to tie them up with precommitments. Is this an impossible dream in anthropic problems?


In my recent post, I talked about preferences where "if you are one of two copies and I give the other copy a candy bar, your selfish desires for eating candy are unfulfilled." If you would buy a candy bar for a dollar but not buy your copy a candy bar, this is exactly a case of strategy ranking depending on indexical information.

This dependence on indexical information is inequivalent with UDT, and thus incompatible with peace and harmony.


To be thorough, consider an experiment where I am forked into two copies, A and B. Both have a button in front of them, and 10 candies in their account. If A presses the button, it deducts 1 candy from A. But if B presses the button, it removes 1 candy from B and gives 5 candies to A.

Before the experiment begins, I want my descendants to press the button 10 times (assuming candies come in units such that my utility is linear). In fact, after the copies wake up but before they know which is which, they want to press the button!
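The ex-ante claim can be checked with arithmetic. The payoffs follow the setup above; the symmetric strategy "every copy presses 10 times" is my simplification for the check:

```python
# Payoff rules from the experiment: A pressing costs A 1 candy;
# B pressing costs B 1 candy and gives A 5 candies.

def payoffs(a_presses, b_presses):
    """Return (A's net candies, B's net candies) for given press counts."""
    a = -a_presses + 5 * b_presses
    b = -b_presses
    return a, b

# If both copies follow "press 10 times" (since neither yet knows
# which one it is), each copy assigns probability 0.5 to being A or B:
a_gain, b_gain = payoffs(a_presses=10, b_presses=10)
expected_if_press = 0.5 * a_gain + 0.5 * b_gain  # 0.5*40 + 0.5*(-10) = 15
expected_if_refuse = 0.0
print(expected_if_press > expected_if_refuse)  # True: ex ante, press
```

So before the copies learn their identities, the pressing policy is worth +15 expected candies each, which is why they all endorse it at that stage.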

The model of selfish preferences that is not UDT-compatible looks like this: once A and B know who is who, A wants B to press the button but B doesn't want to do it. And so earlier, I should try and make precommitments to force B to press the button.

But suppose that we simply decided to use a different model. A model of peace and harmony and, like, free love, where I just maximize the average (or total, if we specify an arbitrary zero point) amount of utility that myselves have. And so B just presses the button.

(It's like non-UDT selfish copies can make all Pareto improvements, but not all average improvements)


Is the peace-and-love model still a selfish preference? It sure seems different from the every-copy-for-themself algorithm. But on the other hand, I'm doing it for myself, in a sense.

And at least this way I don't have to waste time with precomittment. In fact, self-modifying to this form of preferences is such an effective action that conflicting preferences are self-destructive. If I have selfish preferences now but I want my copies to cooperate in the future, I'll try to become an agent who values copies of myself - so long as they date from after the time of my self-modification.


If you recall, I made an argument in favor of averaging the utility of future causal descendants when calculating expected utility, based on this being the fixed point of selfish preferences under modification when confronted with Jan's tropical paradise. But if selfish preferences are unstable under self-modification in a more intrinsic way, this rather goes out the window.


Right now I think of selfish values as a somewhat anything-goes space occupied by non-self-modified agents like me and you. But it feels uncertain. On the mutant third hand, what sort of arguments would convince me that the peace-and-love model actually captures my selfish preferences?

Quantum cat-stencil interference projection? What is this?

5 pre 14 January 2015 12:06AM

Sorry I don't hang around here much. I keep meaning to. You're still the ones I come to when I have no clue at all what a quantum-physics article I come across means though.


So. Um. What?

They have some kind of double-slit experiment that gets double-slitted again then passed through a stencil before being recombined and recombined again to give a stencil-shaped interference pattern?

Is that even right?

Can someone many-worlds-interpretation describe that at me, even if it turns out it's just a thought-experiment with a graphics mock-up?

I'm the new moderator

83 NancyLebovitz 13 January 2015 11:21PM

Viliam Bur made the announcement in Main, but not everyone checks main, so I'm repeating it here.

During the following months my time and attention will be heavily occupied by some personal stuff, so I will be unable to function as a LW moderator. The new LW moderator is... NancyLebovitz!

From today, please direct all your complaints and investigation requests to Nancy. Please not everyone during the first week. That can be a bit frightening for a new moderator.

There are a few old requests I haven't completed yet. I will try to close everything during the following days, but if I don't do it till the end of January, then I will forward the unfinished cases to Nancy, too.

Long live the new moderator!

Why you should consider buying Bitcoin right now (Jan 2015) if you have high risk tolerance

2 Ander 13 January 2015 08:02PM

LessWrong is where I learned about Bitcoin, several years ago, and my greatest regret is that I did not investigate it sooner, and that people here did not yell at me louder that it was important and worth a look.  In that spirit, I will do so now.


First of all, several caveats:

* You should not go blindly buying anything that you do not understand.  If you don't know about Bitcoin, you should start by reading about its history, read Satoshi's whitepaper, etc.  I will assume that the rest of the readers who continue reading have a decent idea of what Bitcoin is.

* Under absolutely no circumstances should you invest money into Bitcoin that you cannot afford to lose.  "Risk money" only!  That means that if you were to lose 100% of your money, it would not particularly damage your life.  Do not spend money that you will need within the next several years, or ever.  You might in fact want to mentally write off the entire thing as a 100% loss from the start, if that helps.

* Even more strongly, under absolutely no circumstances whatsoever should you borrow money in order to buy Bitcoins, such as using margin, credit card loans, or your student loan.  This is very much like taking out a loan, going to a casino and betting it all on black on the roulette wheel.  You would either get very lucky or potentially ruin your life.  It's not worth it, this is reality, and there are no laws of the universe preventing you from losing.

* This post is not "investment advice".

* I own Bitcoins, which makes me biased.  You should update to reflect that I am going to present a pro-Bitcoin case.


So why is this potentially a time to buy Bitcoins?  One could think of markets like a pendulum, where price swings from one extreme to another over time, with a very high price corresponding to over-enthusiasm, and a very low price corresponding to despair.  As Warren Buffett said, Mr. Market is like a manic depressive.  One day he walks into your office and is exuberant, and offers to buy your stocks at a high price.  Another day he is depressed and will sell them for a fraction of that. 

The root cause of this phenomenon is confirmation bias.  When things are going well, and the fundamentals of a stock or commodity are strong, the price is driven up, and this results in a positive feedback loop.  Investors receive confirmation of their belief that things are going well from the price increase, confirming their bias.  The process repeats and builds upon itself during a bull market, until it reaches a point of euphoria, in which bad news is completely ignored or disbelieved.

The same process happens in reverse during a price decline, or bear market.  Investors receive the feedback that the price is going down => things are bad, and good news is ignored and disbelieved.  Both of these processes run away for a while until they reach enough of an extreme that the "smart money" (most well informed and intelligent agents in the system) realizes that the process has gone too far and switches sides. 


Bitcoin at this point is certainly somewhere in the despair side of the pendulum.  I don't want to imply in any way that it is not possible for it to go lower.  Picking a bottom is probably the most difficult thing to do in markets, especially before it happens, and everyone who has claimed that Bitcoin was at a bottom for the past year has been repeatedly proven wrong.  (In fact, I feel a tremendous amount of fear in sticking my neck out to create this post, well aware that I could look like a complete idiot weeks or months or years from now and utterly destroy my reputation, yet I will continue anyway).


First of all, let's look at the fundamentals of Bitcoin.  On one hand, things are going well. 


Use of Bitcoin (network effect):

One measurement of Bitcoin's value is the strength of its network effect.  By Metcalfe's law, the value of a network is proportional to the square of the number of nodes in the network. 


Over the long term, Bitcoin's price has generally followed this law (though with wild swings to both the upside and downside as the pendulum swings). 
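Metcalfe's law is easy to state in code. The constant of proportionality and the node counts here are arbitrary placeholders; only the scaling relationship matters:

```python
def metcalfe_value(nodes, k=1.0):
    """Network value proportional to the square of the node count."""
    return k * nodes ** 2

# The useful intuition: doubling the user base quadruples the
# predicted network value, regardless of the constant k.
print(metcalfe_value(2_000_000) / metcalfe_value(1_000_000))  # 4.0
```

This is why network-effect arguments treat user growth as compounding: value grows faster than the user count itself.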

In terms of network effect, Bitcoin is doing well.


Bitcoin transactions are hitting all time highs:  (28 day average of number of transactions).



Number of Bitcoin addresses are hitting all time highs:



Merchant adoption continues to hit new highs:

BitPay/Coinbase continue to report 10% monthly growth in the number of merchants that accept Bitcoin.

Prominent companies that began accepting Bitcoin in the past year include: Dell, Overstock, Paypal, Microsoft, etc.


On the other hand, due to the sustained price decline, many Bitcoin businesses that started up in the past two years with venture capital funding have shut down.  This is more an effect of the price decline than a cause, however.  The past month especially has seen a number of bearish news stories, such as Bitpay laying off employees, exchanges Vault of Satoshi and CEX.io deciding to shut down, and exchange Bitstamp being hacked and shut down for 3 days (though it is back up without losing customer funds).


The cost to mine a Bitcoin is commonly seen as one indicator of price.   Note that the cost to mine a Bitcoin does not directly determine the *value* or usefulness of a Bitcoin.   I do not believe in the labor theory of value: http://en.wikipedia.org/wiki/Labor_theory_of_value

However, there is a stabilizing effect in commodities, in which over time, the price of an item will often converge towards the cost to produce it due to market forces. 


If a Bitcoin is being priced at a value much greater than the cost (in mining equipment and electricity) to create it, people will invest in mining equipment.  This results in increased 'difficulty' of mining and drives down the amount of Bitcoin that you can create with a particular piece of mining equipment.  (The amount of Bitcoins created is a fixed amount per unit of time, and thus the more mining equipment that exists, the less Bitcoin each miner will get).

If Bitcoin is being priced at a value below the cost to create it, people will stop investing in mining equipment.  This may be a signal that the price is getting too low, and could rise.


Historically, the one period of time where Bitcoin was priced significantly below the cost to produce it was in late 2011.  It was noted on LessWrong.  The price has not currently fallen to quite the same extent as it did back then (which may indicate that it has further to fall), however the current price relative to the mining cost indicates we are very much in the bearish side of the pendulum.


It is difficult to calculate an exact cost to mine a Bitcoin, because this depends on the exact hardware used, your cost of electricity, and a prediction of the future difficulty adjustments that will occur.  However, we can make estimates with websites such as http://www.vnbitcoin.org/bitcoincalculator.php

According to this site, no currently available Bitcoin miner will ever return as much money as it cost, factoring in the hardware cost and electricity cost.  Upcoming, more efficient miners which have not yet been released are estimated to pay off in about a year, and only if difficulty grows extremely slowly. 


There are two important breakpoints when discussing Bitcoin mining profitability.  The first is the point at which your return is enough that it pays for both the electricity and the hardware.  The second is the point at which you make more than your electricity costs, but cannot recover the hardware cost.


For example, let's say Alice pays $1000 for Bitcoin mining equipment.  Every day, this mining equipment can return $10 worth of Bitcoin, but it costs $5 of electricity to run.  Her gain for the day is $5, and it would take 200 days at this rate before the mining equipment paid for itself.  Once she has made the decision to purchase the mining equipment, the money spent on the miner is a sunk cost.  The money spent on electricity is not a sunk cost; she continues to have the decision every day of whether or not to run her mining equipment.  The optimal decision is to continue to run the miner as long as it returns more than the electricity cost. 

Over time, the payout she will receive from this hardware will decline, as the difficulty of mining Bitcoin increases.  Eventually, her payout will decline below the electricity cost, and she should shut the miner down.  At this point, if her total gain from running the equipment was higher than the hardware cost, it was a good investment.  If it did not recoup its cost, then it was worse than simply spending the money buying Bitcoin on an exchange in the first place.
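Alice's decision rule can be sketched directly. The numbers come from the example above, but the 1%-per-day revenue decay is an assumption I've added to stand in for rising difficulty:

```python
def simulate_mining(hardware_cost=1000.0, daily_revenue=10.0,
                    daily_electricity=5.0, revenue_decay=0.99):
    """Run the miner until revenue drops below the electricity cost.

    Hardware cost is sunk, so the keep-running test compares only
    revenue to electricity. Returns the net gain after hardware cost.
    """
    total = -hardware_cost
    revenue = daily_revenue
    while revenue > daily_electricity:   # optimal shutdown rule
        total += revenue - daily_electricity
        revenue *= revenue_decay         # difficulty rises over time
    return total

# With no difficulty growth, the rig pays for itself in 200 days:
days_to_payback = 1000.0 / (10.0 - 5.0)
print(days_to_payback)  # 200.0

# With the assumed 1%/day decay, revenue hits the electricity cost in
# about 70 days and the hardware is never recovered (a net loss):
print(simulate_mining() < 0)  # True
```

Under the decay assumption, the simulation reproduces the post's point: a rig can be worth running day-to-day (revenue above electricity) while still being a bad purchase overall.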


This process creates a feedback into the market price of Bitcoins.  Imagine that Bitcoin investors have two choices: either they can buy Bitcoins (the commodity which has already been produced by others), or they can buy miners and produce Bitcoins for themselves.   If the Bitcoin price falls sufficiently that mining equipment will not recover its costs over time, investment money that would have gone into miners instead goes into Bitcoin, helping to support the price.  As you can see from mining cost calculators, we have passed this point already.  (In fact, we passed it months ago.)


The second breakpoint is when the Bitcoin price falls so low that it falls below the electricity cost of running mining equipment.  We have passed this point for many of the less efficient ways to mine.  For example, Cointerra recently shut down its cloud mining pool because it was losing money.  We have not yet passed this point for more recent and efficient miners, but we are getting fairly close to it. Crossing this point has occurred once in Bitcoin's history, in late 2011 when the price bottomed out near $2, before giving birth to the massive bull run of 2012-2013 in which the price rose by a factor of 500.


Market Sentiment: 

I was not active in Bitcoin back in 2011, so I cannot compare the present time to the sentiment at the November 2011 bottom.  However, sentiment currently is the worst that I have seen by a significant margin. Again, this does not mean that things could not get much, much worse before they get better!  After all, sentiment has been growing worse for months now as the price declines, and everyone who predicted that it was as bad as it could get and the price could not possibly go below $X has been wrong.  We are in a feedback loop which is strongly pumping bearishness into all market participants, and that feedback loop can continue and has continued for quite a while.


A look at market indicators tells us that Bitcoin is very, very oversold, almost historically oversold.  Again, this does not mean that it could not get worse before it gets better. 


As I write this, the price of Bitcoin is $230.  For perspective, this is down over 80% from the all-time high of $1163 in November 2013.  It is still higher than the roughly $100 level at which it spent most of mid-2013.

* The average price of a Bitcoin since the last time it moved is $314.


The current price is a multiple of 0.73 of this average.  This is very low historically, but not the lowest it has ever been.  The lowest was about 0.39 in late 2011.


* Short interest (the number of Bitcoins that were borrowed and sold, and must be rebought later) hit all-time highs this week, according to data on the exchange Bitfinex, at more than 25,000 Bitcoins sold short.



* Weekly RSI (relative strength index), an indicator which tells if a stock or commodity is 'overbought' or 'oversold' relative to its history, just hit its lowest value ever.
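For readers unfamiliar with the indicator, here is a minimal sketch of an RSI computation.  This is the simple-average variant; charting packages typically use Wilder's smoothed averages, so exact values will differ:

```python
def rsi(closes, period=14):
    """Relative Strength Index over the last `period` price changes.
    Low values (conventionally below 30) indicate 'oversold', high
    values (above 70) 'overbought', relative to recent history."""
    changes = [b - a for a, b in zip(closes, closes[1:])]
    recent = changes[-period:]
    gains = sum(c for c in recent if c > 0)
    losses = -sum(c for c in recent if c < 0)
    if losses == 0:
        return 100.0  # nothing but gains in the window
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)
```

A long run of weekly declines drives this toward 0, which is the sense in which Bitcoin's weekly RSI hitting its lowest value ever signals a historically oversold market.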


Many indicators are telling us that Bitcoin is at or near historical levels in terms of the depth of this bear market.  In percentage terms, the price decline is surpassed only by the November 2011 low.  In terms of length, the current decline is more than twice as long as the previous longest bear market.


To summarize: At the present time, the market is pricing in a significant probability that Bitcoin is dying.

But there are some indicators (such as # of transactions) which say it is not dying.  Maybe it continues down into oblivion, and the remaining fundamentals which looked bullish turn downwards and never recover.  Remember that this is reality, and anything can happen, and nothing will save you.



Given all of this, we now have a choice.  People have often compared Bitcoin to making a bet in which you have a 50% chance of losing everything, and a 50% chance of making multiples (far more than 2x) of what you started with. 

There are times when the payout on that bet is much lower, when everyone is euphoric and has been convinced by the positive feedback loop that they will win.  And there are times when the payout on that bet is much higher, when everyone else is extremely fearful and is convinced it will not pay off. 


This is a time to be good rationalists, and investigate a possible opportunity, comparing the present situation to historical examples, and making an informed decision.   Either Bitcoin has begun the process of dying, and this decline will continue in stages until it hits zero (or some incredibly low value that is essentially the same for our purposes), or it will live.  Based on the new all time high being hit in number of transactions, and ways to spend Bitcoin, I think there is at least a reasonable chance it will live.  Enough of a chance that it is worth taking some money that you can 100% afford to lose, and making a bet.  A rational gamble that there is a decent probability that it will survive, at a time when a large number of others are betting that it will fail.


And then once you do that, try your hardest to mentally write it off as a complete loss, like you had blown the money on a vacation or a consumer good, and now it is gone, and then wait a long time.



Less exploitable value-updating agent

5 Stuart_Armstrong 13 January 2015 05:19PM

My indifferent value learning agent design is in some ways too good. The agent transfers perfectly from u-maximiser to v-maximiser - but this makes it exploitable, as Benja has pointed out.

For instance, if u values paperclips and v values staples, and everyone knows that the agent will soon transfer from a u-maximiser to a v-maximiser, then an enterprising trader can sell the agent paperclips in exchange for staples, then wait for the utility change, and sell the agent back staples for paperclips, pocketing a profit each time. More prosaically, they could "borrow" £1,000,000 from the agent, promising to pay back £2,000,000 tomorrow if the agent is still a u-maximiser. And the currently u-maximising agent will accept, even though everyone knows it will change to a v-maximiser before tomorrow.

One could argue that exploitability is inevitable, given the change in utility functions. And I haven't yet found any principled way of avoiding exploitability which preserves the indifference. But here is a tantalising quasi-example.

As before, u values paperclips and v values staples. Both are defined in terms of extra paperclips/staples over those existing in the world (and negatively in terms of destruction of existing paperclips/staples), with their zero being at the current situation. Let's put some diminishing returns on both utilities: for each paperclip/staple created/destroyed up to the first five, u/v will gain/lose one utilon. For each subsequent paperclip/staple created/destroyed beyond five, they will gain/lose one half utilon.

We now construct our world and our agent. The world lasts two days, and has a machine that can create or destroy paperclips and staples for the cost of £1 apiece. Assume there is a tiny ε chance that the machine stops working at any given time. This ε will be ignored in all calculations; it's there only to make the agent act sooner rather than later when the choices are equivalent (a discount rate could serve the same purpose).

The agent owns £10 and has utility function u+Xv. The value of X is unknown to the agent: it is either +1 or -1, with 50% probability, and this will be revealed at the end of the first day (you can imagine X is the output of some slow computation, or is written on the underside of a rock that will be lifted).

So what will the agent do? It's easy to see that it can never get more than 10 utilons, as each £1 generates at most 1 utilon (we really need a unit symbol for the utilon!). And it can achieve this: it will spend £5 immediately, creating 5 paperclips, wait until X is revealed, and spend another £5 creating or destroying staples (depending on the value of X).
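The agent's reasoning can be verified with a small brute-force sketch of the toy setup (function names are mine, not from the post):

```python
def utilons(n):
    """Diminishing returns from the post: 1 utilon each for the first
    five items created/destroyed, 0.5 for each thereafter."""
    return min(n, 5) + 0.5 * max(n - 5, 0)

def plan_value(spend_day1):
    """Utilons from spending `spend_day1` pounds on paperclips before X
    is revealed, and the remainder on staples after.  Whether X = +1
    (create staples) or X = -1 (destroy staples), the staple spending
    yields the same utilons in this symmetric case, so no expectation
    over X is needed."""
    return utilons(spend_day1) + utilons(10 - spend_day1)

best = max(range(11), key=plan_value)  # search all whole-pound splits
```

The search confirms the split described above: £5 on paperclips now, £5 held back for staples, for the full 10 utilons; any other split wastes money on the half-utilon margin.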

This looks a lot like a resource-conserving value-learning agent. It doesn't seem to be "exploitable" in the sense Benja demonstrated. It will still accept some odd deals - one extra paperclip on the first day in exchange for all the staples in the world being destroyed, for instance. But it won't give away resources for no advantage. And it's not a perfect value-learning agent. But it still seems to combine interesting features of non-exploitability and value-learning that are worth exploring.

Note that this property does not depend on v being symmetric around staple creation and destruction. Assume v hits diminishing returns after creating 5 staples, but after destroying only 4 of them. Then the agent will have the same behaviour as above (in that specific situation; in general, this will cause a slight change, in that the agent will slightly overvalue having money on the first day compared to the original v), and will expect to get 9.75 utilons (50% chance of 10 for X=+1, 50% chance of 9.5 for X=-1). Other changes to u and v will shift how much money is spent on different days, but the symmetry of v is not what is powering this example.

'Dumb' AI observes and manipulates controllers

33 Stuart_Armstrong 13 January 2015 01:35PM

The argument that AIs provided with a reward channel will observe their controllers and learn to manipulate them is a valid one. Unfortunately, it's often framed in a way that feels counterintuitive or extreme, especially to AI designers. It typically starts with the standard reinforcement learning scenario, then posits that the AI becomes superintelligent and either manipulates the controller with super-social powers, or breaks out and gains control of its reward channel, killing or threatening its controllers.

And that is a fair argument. But conceptually, it leaps from a standard reinforcement learning scenario, to a science-fiction-sounding scenario. It might help to have intermediate scenarios: to show that even lower intelligence AIs might start exhibiting the same sort of behaviour, long before it gets to superintelligence.

So consider the following scenario. Some complex, trainable AI is tasked with writing automated news stories for a student newspaper. It trawls the web and composes its stories, then gets reward and feedback from the editors. Assume there are two editors for this newspaper, and they work on alternate days. The two editors have somewhat different ideas as to what constitutes a good story, so their feedbacks are different. After a while, the AI finds that it gets higher reward by using a certain style on Monday, Wednesday and Friday, and another style on Tuesday and Thursdays - this is a simple consequence of its reward mechanism.

But the rota isn't perfect. Sometimes the Monday editor will edit a story so late on Monday that it's Tuesday, and sometimes the Tuesday editor will be up early and edit a story at the same time. This pattern is more common during the holidays, when the editors go to different time zones, often travel, and become more irregular in their editing.

So if the AI is complex and skilled enough, then, simply through feedback, it will start building up a picture of its editors. It will figure out when they are likely to stick to a schedule, and when they will be more irregular. It will figure out the difference between holidays and non-holidays. Given time, it may be able to track the editors' moods, and it will certainly pick up on any major change in their lives - such as romantic relationships and breakups, which will radically change whether and how it should present stories with a romantic focus.

It will also likely learn the correlation between stories and feedbacks - maybe presenting a story defined roughly as "positive" will increase subsequent reward for the rest of the day, on all stories. Or maybe this will only work on a certain editor, or only early in the term. Or only before lunch.

Thus the simple trainable AI with a particular focus - write automated news stories - will be trained, through feedback, to learn about its editors/controllers, to distinguish them, to get to know them, and, in effect, to manipulate them.
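The dynamic can be illustrated with a toy learner (entirely my own sketch, not drawn from any actual system).  It only tracks average reward per (context, style) pair, yet a working model of the two editors emerges from the reward statistics alone - no one ever tells it that editors exist:

```python
from collections import defaultdict

class StoryStyleLearner:
    """Greedy reward-driven learner: remembers average feedback for each
    (context, style) pair and exploits the best known style.  'Context'
    could be day-of-week, a holiday flag, time of day, and so on."""

    def __init__(self, styles):
        self.styles = styles
        self.totals = defaultdict(float)   # summed reward per (context, style)
        self.counts = defaultdict(int)     # observations per (context, style)

    def choose(self, context):
        def average(style):
            key = (context, style)
            return self.totals[key] / self.counts[key] if self.counts[key] else 0.0
        return max(self.styles, key=average)

    def feedback(self, context, style, reward):
        self.totals[(context, style)] += reward
        self.counts[(context, style)] += 1
```

Enrich the context with finer-grained features (holiday schedules, story sentiment, time since the last "positive" story) and the same trivial mechanism starts to encode exactly the editor-modelling behaviour described above.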

This may be a useful "bridging example" between standard RL agents and the superintelligent machines.

LW-ish meetup in Boulder, CO

5 fowlertm 13 January 2015 05:23AM

This Saturday I'm giving a presentation at the Boulder Future Salon, topic will be non-religious spirituality. The more LWians that can make it the better, because I'm really trying to get some community building done in the Boulder/Denver area. There's an insane amount of potential here.


Superintelligence 18: Life in an algorithmic economy

3 KatjaGrace 13 January 2015 02:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.

Welcome. This week we discuss the eighteenth section in the reading guide: Life in an algorithmic economy. This corresponds to the middle of Chapter 11.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Life in an algorithmic economy” from Chapter 11


  1. In a multipolar scenario, biological humans might lead poor and meager lives. (p166-7)
  2. The AIs might be worthy of moral consideration, and if so their wellbeing might be more important than that of the relatively few humans. (p167)
  3. AI minds might be much like slaves, even if they are not literally slaves. They may be selected for liking this. (p167)
  4. Because brain emulations would be very cheap to copy, it will often be convenient to make a copy and then later turn it off (in a sense killing a person). (p168)
  5. There are various other reasons that very short lives might be optimal for some applications. (p168-9)
  6. It isn't obvious whether brain emulations would be happy working all of the time. Some relevant considerations are current human emotions in general and regarding work, probable selection for pro-work individuals, evolutionary adaptiveness of happiness in the past and future -- e.g. does happiness help you work harder? -- and absence of present sources of unhappiness such as injury. (p169-171)
  7. In the long run, artificial minds may not even be conscious, or have valuable experiences, if these are not the most effective ways for them to earn wages. If such minds replace humans, Earth might have an advanced civilization with nobody there to benefit. (p172-3)
  8. In the long run, artificial minds may outsource many parts of their thinking, thus becoming decreasingly differentiated as individuals. (p172)
  9. Evolution does not imply positive progress. Even those good things that evolved in the past may not withstand evolutionary selection in a new circumstance. (p174-6)

Another view

Robin Hanson on others' hasty distaste for a future of emulations: 

Parents sometimes disown their children, on the grounds that those children have betrayed key parental values. And if parents have the sort of values that kids could deeply betray, then it does make sense for parents to watch out for such betrayal, ready to go to extremes like disowning in response.

But surely parents who feel inclined to disown their kids should be encouraged to study their kids carefully before making such a choice. For example, parents considering whether to disown their child for refusing to fight a war for their nation, or for working for a cigarette manufacturer, should wonder to what extent national patriotism or anti-smoking really are core values, as opposed to being mere revisable opinions they collected at one point in support of other more-core values. Such parents would be wise to study the lives and opinions of their children in some detail before choosing to disown them.

I’d like people to think similarly about my attempts to analyze likely futures. The lives of our descendants in the next great era after this our industry era may be as different from ours as ours are from farmers’, or farmers’ are from foragers’. When they have lived as neighbors, foragers have often strongly criticized farmer culture, as farmers have often strongly criticized industry culture. Surely many have been tempted to disown any descendants who adopted such despised new ways. And while such disowning might hold them true to core values, if asked we would advise them to consider the lives and views of such descendants carefully, in some detail, before choosing to disown.

Similarly, many who live industry era lives and share industry era values, may be disturbed to see forecasts of descendants with life styles that appear to reject many values they hold dear. Such people may be tempted to reject such outcomes, and to fight to prevent them, perhaps preferring a continuation of our industry era to the arrival of such a very different era, even if that era would contain far more creatures who consider their lives worth living, and be far better able to prevent the extinction of Earth civilization. And such people may be correct that such a rejection and battle holds them true to their core values.

But I advise such people to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values. I hope that my future analysis can assist such soul-searching examination. If after studying such detail, you still feel compelled to disown your likely descendants, I cannot confidently say you are wrong. My job, first and foremost, is to help you see them clearly.

More on whose lives are worth living here and here.


1. Robin Hanson is probably the foremost researcher on what the finer details of an economy of emulated human minds would be like. For instance, which company employees would run how fast, how big cities would be, whether people would hang out with their copies. See a TEDx talk, and writings here, here, here and here (some overlap - sorry). He is also writing a book on the subject, which you can read early if you ask him.

2. Bostrom says,

Life for biological humans in a post-transition Malthusian state need not resemble any of the historical states of man... the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings. They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable.... (p166)

It's true this might happen, but it doesn't seem like an especially likely scenario to me. As Bostrom has pointed out in various places earlier, biological humans would do quite well if they have some investments in capital, do not have too much of their property stolen or artfully maneuvered away from them, and do not undergo too much population growth themselves. These risks don't seem so large to me.

3. Paul Christiano has an interesting article on capital accumulation in a world of machine intelligence.

4. In discussing worlds of brain emulations, we often talk about selecting people for having various characteristics - for instance, being extremely productive, hard-working, not minding frequent 'death', being willing to work for free and donate any proceeds to their employer (p167-8). However there are only so many humans to select from, so we can't necessarily select for all the characteristics we might want. Bostrom also talks of using other motivation selection methods, and modifying code, but it is interesting to ask how far you could get using only selection. It is not obvious to what extent one could meaningfully modify brain emulation code initially. 

I'd guess less than one in a thousand people would be willing to donate everything to their employer, given a random employer. This means that to get this characteristic, you would have to lose a factor of 1000 on selecting for other traits. Altogether you have about 33 bits of selection power in the present world (that is, 7 billion is about 2^33; you can divide the world in half about 33 times before you get to a single person). Let's suppose you use 5 bits in getting someone who both doesn't mind their copies dying (I guess 1 bit, or half of people) and who is willing to work an 80h/week (I guess 4 bits, or one in sixteen people). Let's suppose you are using the rest of your selection (28 bits) on intelligence, for the sake of argument. You are getting a person of IQ 186. If instead you use 10 bits (2^10 = ~1000) on getting someone to donate all their money to their employer, you can only use 18 bits on intelligence, getting a person of IQ 167. Would it not often be better to have the worker who is twenty IQ points smarter and pay them above subsistence?

5. A variety of valuable uses for cheap to copy, short-lived brain emulations are discussed in Whole brain emulation and the evolution of superorganisms, LessWrong discussion on the impact of whole brain emulation, and Robin's work cited above.

6. Anders Sandberg writes about moral implications of emulations of animals and humans.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Is the first functional whole brain emulation likely to be (1) an emulation of low-level functionality that doesn’t require much understanding of human cognitive neuroscience at the computational level, as described in Sandberg & Bostrom (2008), or is it more likely to be (2) an emulation that makes heavy use of advanced human cognitive neuroscience, as described by (e.g.) Ken Hayworth, or is it likely to be (3) something else?
  2. Extend and update our understanding of when brain emulations might appear (see Sandberg & Bostrom (2008)).
  3. Investigate the likelihood of a multipolar outcome.
  4. Follow Robin Hanson (see above) in working out the social implications of an emulation scenario.
  5. What kinds of responses to the default low-regulation multipolar outcome outlined in this section are likely to be made? e.g. is any strong regulation likely to emerge that avoids the features detailed in the current section?
  6. What measures are useful for ensuring good multipolar outcomes?
  7. What qualitatively different kinds of multipolar outcomes might we expect? e.g. brain emulation outcomes are one class.
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the possibility of a multipolar outcome turning into a singleton later. To prepare, read “Post-transition formation of a singleton?” from Chapter 11. The discussion will go live at 6pm Pacific time next Monday 19 January. Sign up to be notified here.

Ethical Diets

1 pcm 12 January 2015 11:38PM

[Cross-posted from my blog.]

I've seen some discussion of whether effective altruists have an obligation to be vegan or vegetarian.

The carnivores appear to underestimate the long-term effects of their actions. I see a nontrivial chance that we're headed toward a society in which humans are less powerful than some other group of agents. This could result from slow AGI takeoff producing a heterogeneous society of superhuman agents. Or there could be a long period in which the world is dominated by ems before de novo AGI becomes possible. Establishing ethical (and maybe legal) rules that protect less powerful agents may influence how AGIs treat humans or how high-speed ems treat low-speed ems and biological humans [0]. A one in a billion chance that I can alter this would be worth some of my attention. There are probably other similar ways that an expanding circle of ethical concern can benefit future people.

I see very real costs to adopting an ethical diet, but it seems implausible that EAs are merely choosing alternate ways of being altruistic. How much does it cost MealSquares customers to occasionally bemoan MealSquares' use of products from apparently factory-farmed animals? Instead, it seems like EAs have some tendency to actively raise the status of MealSquares [1].

I don't find it useful to compare a more ethical diet to GiveWell donations for my personal choices, because I expect my costs to be mostly inconveniences, and the marginal value of my time seems small [2], with little fungibility between them.

I'm reluctant to adopt a vegan diet due to the difficulty of evaluating the health effects and due to the difficulty of evaluating whether it would mean fewer animals living lives that they'd prefer to nonexistence.

But there's little dispute that most factory-farmed animals are much less happy than pasture-raised animals. And everything I know about the nutritional differences suggests that avoiding factory-farmed animals improves my health [3].

I plan not to worry about factory-farmed invertebrates for now (shrimp, oysters, insects), partly because some of the harmful factory-farm practices such as confining animals to cages not much bigger than the animals in question aren't likely with animals that small.

So my diet will consist of vegan food plus shellfish, insects, wild-caught fish, pasture-raised birds/mammals (and their eggs/whey/butter). I will assume vertebrate animals are raised in cruel conditions unless they're clearly marked as wild-caught, grass-fed, or pasture-raised [4].

I've made enough changes to my diet for health reasons that this won't require large changes. I already eat at home mostly, and the biggest change to that part of my diet will involve replacing QuestBars with a home-made version using whey protein from grass-fed cows (my experiments so far indicate it's inconvenient and hard to get a decent texture). I also have some uncertainty about pork belly [5] - the pasture-raised version I've tried didn't seem as good, but that might be because I didn't know it needed to be sliced very thin.

My main concern is large social gatherings. It has taken me a good deal of willpower to stick to a healthy diet under those conditions, and I expect it to take more willpower to observe ethical constraints.

A 100% pure diet would be much harder for me to achieve than an almost pure diet, and it takes some time for me to shift my habits. So for this year I plan to estimate how many calories I eat that don't fit this diet, and aim to keep that less than 120 calories per month (about 0.2%) [6]. I'll re-examine the specifics of this plan next Jan 1.

Does anyone know a convenient name for my planned diet?




0. With no one agent able to conquer the world, it's costly for a single agent to repudiate an existing rule. A homogeneous group of superhuman agents might coordinate to overcome this, but with heterogeneous agents the coordination costs may matter.

1. I bought 3 orders of MealSquares, but have stopped buying for now. If they sell a version whose animal products are ethically produced (which I'm guessing would cost $50/order more), I'll resume buying them occasionally.

2. The average financial value of my time is unusually high, but I often have trouble estimating whether spending more time earning money has positive or negative financial results. I expect financial concerns will be more important to many people.

3. With the probable exception of factory-farmed insects, oysters, and maybe other shellfish.

4. In most restaurants, this will limit me to vegan food and shellfish.

5. Pork belly is unsliced bacon without the harm caused by smoking.

6. Yes, I'll have some incentive to fudge those estimates. My experience from tracking food for health reasons suggests possible errors of 25%. That's not too bad compared to other risks such as lack of willpower.

Apptimize -- rationalist startup hiring engineers

64 nancyhua 12 January 2015 08:22PM

Apptimize is a 2-year old startup closely connected with the rationalist community, one of the first founded by CFAR alumni.  We make “lean” possible for mobile apps -- our software lets mobile developers update or A/B test their apps in minutes, without submitting to the App Store. Our customers include big companies such as Nook and Ebay, as well as Top 10 apps such as Flipagram. When companies evaluate our product against competitors, they’ve chosen us every time.

We work incredibly hard, and we’re striving to build the strongest engineering team in the Bay Area. If you’re a good developer, we have a lot to offer.


  • Our team of 14 includes 7 MIT alumni, 3 ex-Googlers, 1 Wharton MBA, 1 CMU CS alum, 1 Stanford alum, 2 MIT Masters, 1 MIT Ph.D. candidate, and 1 “20 Under 20” Thiel Fellow. Our CEO was also just named to the Forbes “30 Under 30”

  • David Salamon, Anna Salamon’s brother, built much of our early product

  • Our CEO is Nancy Hua, while our Android lead is "20 under 20" Thiel Fellow James Koppel. They met after James spoke at the Singularity Summit

  • HP:MoR is required reading for the entire company

  • We evaluate candidates on curiosity even before evaluating them technically

  • Seriously, our team is badass. Just look

Self Improvement

  • You will have huge autonomy and ownership over your part of the product. You can set up new infrastructure and tools, expense business products and services, and even subcontract some of your tasks if you think it's a good idea

  • You will learn to be a more goal-driven agent, and understand the impact of everything you do on the rest of the business

  • Access to our library of over 50 books and audiobooks, and the freedom to purchase more

  • Everyone shares insights they’ve had every week

  • Self-improvement is so important to us that we only hire people committed to it. When we say that it’s a company value, we mean it

The Job

  • Our mobile engineers dive into the dark, undocumented corners of iOS and Android, while our backend crunches data from billions of requests per day

  • Engineers get giant monitors, a top-of-the-line MacBook pro, and we’ll pay for whatever else is needed to get the job done

  • We don’t demand prior experience, but we do demand the fearlessness to jump outside your comfort zone and job description. That said, our website uses AngularJS, jQuery, and nginx, while our backend uses AWS, Java (the good parts), and PostgreSQL

  • We don’t have gratuitous perks, but we have what counts: Free snacks and catered meals, an excellent health and dental plan, and free membership to a gym across the street

  • Seriously, working here is awesome. As one engineer puts it, “we’re like a family bent on taking over the world”

If you’re interested, send some Bayesian evidence that you’re a good match to jobs@apptimize.com

What topics are appropriate for LessWrong?

7 tog 12 January 2015 06:58PM

For example, what would be inappropriately off topic to post to LessWrong discussion about?

I couldn't find an answer in the FAQ. (Perhaps it'd be worth adding one.) The closest I could find was this:

What is Less Wrong?

Less Wrong is an online community for discussion of rationality. Topics of interest include decision theory, philosophy, self-improvement, cognitive science, psychology, artificial intelligence, game theory, metamathematics, logic, evolutionary psychology, economics, and the far future.

However "rationality" can be interpreted broadly enough that rational discussion of anything would count, and my experience reading LW is compatible with this interpretation being applied by posters. Indeed my experience seems to suggest that practically everything is on topic; political discussion of certain sorts is frowned upon, but not due to being off topic. People often post about things far removed from the topics of interest. And some of these topics are very broad: it seems that a lot of material about self-improvement is acceptable, for instance.

Misapplied economics and overwrought estimates

2 erratim 12 January 2015 05:10PM

I believe that a small piece of rationalist community doctrine is incorrect, and I'd like your help correcting it (or me). Arguing the point by intuition has largely failed, so here I make the case by leaning heavily on the authority of conventional economic wisdom.

The question:

How does an industry's total output respond to decreases in a consumer's purchases; does it shrink by a similar amount, a lesser amount, or not at all?

(Short-run) Answers from the rationalist community:

The consensus answer in the few cases I've seen cited in the broader LW community appears to be that production is reduced by an amount that's smaller than the original decrease in consumption.

Animal Charity Evaluators (ACE):

Fewer people in the market for meat leads to a drop in prices, which causes some other people to buy more meat. The drop in prices does also reduce the amount of meat produced and ultimately consumed, but not by as much as was consumed by people who have left the market.

Peter Hurford:

As is commonly known by economists, when you choose to not buy a product, you lower the demand ever so slightly, which lowers the price ever so slightly, which turns out to re-increase the demand ever so slightly. Therefore, forgoing one pound of meat means that less than one pound of meat actually gets prevented from being factory farmed.

Compassion, by the Pound:

The key points to note are that a permanent decision to reduce meat consumption does ultimately reduce the number of animals on the farm and the amount of meat produced, but it has less than a 1-to-1 effect on the amount of meat produced.

These answers are all correct in the short-run (i.e., when the “supply curve” doesn’t have time to shift). If there is less demand for a product, the price will fall, and some other consumers will consume more because of the better deal. One intuitive justification is that when producers don’t have time to fully react to a change in demand, the total amount of production and consumption is somewhat ‘anchored’ to prior expectations of demand, so any change in demand will have less than a 1:1 effect on production.

For example, a chicken producer who begins to have negative profits due to the drop in price isn't going to immediately yank their chickens from the shelves; they will sell what they've already produced, and maybe even finish raising the chickens they've already invested in (if the remaining marginal cost is less than the expected sale price), even if they plan to shut down soon.
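The short-run claim can be made concrete with the standard linear supply-and-demand result: production falls by the fraction e_s / (e_s + |e_d|) of the demand reduction, where e_s and e_d are the supply and demand elasticities. A minimal sketch, with made-up elasticities (none of the pieces quoted above publish these exact numbers):

```python
def short_run_production_change(demand_drop, supply_elasticity, demand_elasticity):
    """Fraction of a consumer's demand reduction that shows up as reduced
    production in the short run, under linear supply and demand.

    With supply elasticity e_s and (absolute) demand elasticity e_d,
    production falls by demand_drop * e_s / (e_s + e_d), which is
    strictly less than demand_drop whenever e_d > 0."""
    e_s, e_d = supply_elasticity, abs(demand_elasticity)
    return demand_drop * e_s / (e_s + e_d)

# Illustrative (invented) elasticities: if e_s = 0.6 and e_d = 0.4,
# forgoing 1 pound of chicken reduces production by only 0.6 pounds.
print(short_run_production_change(1.0, 0.6, 0.4))  # 0.6
```

The closer demand elasticity is to zero (other consumers don't react to the price drop), the closer the effect gets to 1:1 even in the short run.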

(Long-run) Answers from neoclassical economics:

In the long-run, however, the chicken producer has time to shrink or shut down the money-losing operation, which reduces the number of chickens on the market (shifts the "supply curve" to the left). The price rises again and the consumers that were only eating chicken because of the sale prices return to other food sources.

As a couple of online economics resources put it:


The long-run market equilibrium is conformed of successive short-run equilibrium points. The supply curve in the long run will be totally elastic as a result of the flexibility derived from the factors of production and the free entry and exit of firms.



The increase in demand causes the equilibrium price of zucchinis [to] increase... and the equilibrium quantity [to] rise... The higher price and larger quantity is achieved as each existing firm in the industry responds to the demand shock.

However, the higher price leads to above-normal economic profit for existing firms. And with freedom of entry and exit, economic profit attracts kumquat, cucumber, and carrot producers into this zucchini industry. An increase in the number of firms in the zucchini industry then causes the market supply curve to shift. How far this curve shifts and where it intersects the new demand curve... determines if the zucchini market is an increasing-cost, decreasing-cost, [or] constant-cost industry.

Constant-Cost Industry: An industry with a horizontal long-run industry supply curve that results because expansion of the industry causes no change in production cost or resource prices. A constant-cost industry occurs because the entry of new firms, prompted by an increase in demand, does not affect the long-run average cost curve of individual firms, which means the minimum efficient scale of production does not change.

[I left out the similar explanations of the increasing- and decreasing-cost cases from the quote above.]

In other words, while certain market characteristics (increasing-cost industries) would lead us to expect that production will fall by less than consumption in the long-run, it could also fall by an equal amount, or even more.
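The contrast between the two cases can be sketched with a toy linear model (all numbers invented): an upward-sloping short-run supply curve absorbs part of a demand drop through a lower price, while a horizontal long-run supply curve (the constant-cost case) passes the full drop through to production:

```python
# Demand: Q = a - b*P. A consumption boycott shifts a down by `drop`.
# Short-run supply slopes up: Q = c + d*P.
# Long-run supply in a constant-cost industry is flat at price p_lr.

def equilibrium_short_run(a, b, c, d):
    # Solve a - b*P = c + d*P for price, then return the quantity.
    p = (a - c) / (b + d)
    return a - b * p

def equilibrium_long_run(a, b, p_lr):
    # With a horizontal supply curve the price stays at p_lr;
    # quantity is whatever demand supports at that price.
    return a - b * p_lr

a, b, c, d, p_lr, drop = 100, 2, 10, 3, 18, 10

sr_fall = equilibrium_short_run(a, b, c, d) - equilibrium_short_run(a - drop, b, c, d)
lr_fall = equilibrium_long_run(a, b, p_lr) - equilibrium_long_run(a - drop, b, p_lr)

print(sr_fall)  # 6.0  -- short-run production falls by less than the demand drop
print(lr_fall)  # 10.0 -- long-run (constant-cost) production falls 1:1
```

An increasing-cost industry would give a long-run fall between these two numbers; a decreasing-cost industry would fall by more than 10.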

Short-run versus long-run

Economists define the long-run as a scope of time in which producers and consumers have time to react to market dynamics. As such, a change in the market (e.g. a reduction in demand) can have one effect in the short-run (reduced price) and a different effect in the long-run (reduced, constant, or increased price). In the real world, there will be many changes to the market in the short-run before the long-run has a chance to react to any one of them; but we should still expect it to react to the net effect of all of them eventually.

Why do economists even bother measuring short-run dynamics (such as short-run elasticity estimates) on industries if they know that a longer view will render them obsolete? Probably because the demand for such research comes from producers who have to react to the short-run. Producers can't just wait for the long-run to come true; they actively realize it by reacting to short-run changes (otherwise the market would be 'stuck' in the short-run equilibrium).

So if we care about long-run effects, but we don't have any data to tell us whether an industry is increasing-cost, constant-cost, or decreasing-cost, what prior should we use for our estimates? Basic intuition suggests we should assume an industry is constant-cost in the absence of industry-specific evidence. The rationalist-cited pieces I quoted above are welcome to argue that animal industries in particular are increasing-cost, but they haven't done that yet, or even acknowledged that the opposite is also possible.

Are there broader lessons to learn?

Have we really been messing up our cost-effectiveness estimates simply by confusing the short-run and long-run in economics data? If so, why haven't we noticed it before?

I'm not sure. But I wouldn't be surprised if one issue is that, in the process of trying to create precise cost-effectiveness-style estimates, it's tempting to use data simply because it's there.

How can we identify and prevent this bias in other estimates? Perhaps we should treat quantitative estimates as chains that are no stronger than their weakest link. If you're tempted to build a chain with a particularly weak link, consider if there's a way to build a similar chain without it (possibly gaining robustness at the cost of artificial precision or completeness) or whether chain-logic is even appropriate for the purpose.

For example, perhaps it should have raised flags that ACE's estimates for the above effect on broiler chicken production (which they call the "cumulative elasticity factor" or CEF) ranged over more than a factor of 10, adding almost as much uncertainty to the final calculation for broiler chickens as the other 5 factors combined. (To be fair, the CEF estimates for the other animal products were not as lopsided.)
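A quick Monte Carlo sketch (with invented factor ranges, not ACE's actual numbers) shows how a single 10x-wide factor can dominate the spread of a multiplicative estimate:

```python
import random

# Hypothetical multiplicative cost-effectiveness estimate: five narrow
# factors (~1.5x range each) and one CEF-like factor spanning a 10x range.
random.seed(0)

def sample_estimate(include_wide_factor):
    est = 1.0
    for _ in range(5):
        est *= random.uniform(0.8, 1.25)   # narrow factors
    if include_wide_factor:
        est *= random.uniform(0.1, 1.0)    # CEF-like factor, 10x range
    return est

def spread(samples):
    # Ratio of the 90th to the 10th percentile of the samples.
    s = sorted(samples)
    return s[int(0.9 * len(s))] / s[int(0.1 * len(s))]

n = 100_000
with_cef = spread([sample_estimate(True) for _ in range(n)])
without_cef = spread([sample_estimate(False) for _ in range(n)])
print(without_cef, with_cef)  # the wide factor roughly doubles the spread
```

In this toy setup the single wide factor contributes more to the final 90th/10th-percentile spread than the five narrow factors combined, which is the "weakest link" worry in numbers.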

Open thread, Jan. 12 - Jan. 18, 2015

6 Gondolinian 12 January 2015 12:39AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Previous Open Thread

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Research Priorities for Artificial Intelligence: An Open Letter

23 jimrandomh 11 January 2015 07:52PM

The Future of Life Institute has published their document Research priorities for robust and beneficial artificial intelligence and written an open letter for people to sign indicating their support.

Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to research how to maximize these benefits while avoiding potential pitfalls. This document gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.


The guardian article on longevity research [link]

8 ike 11 January 2015 07:02PM

Who are your favorite "hidden rationalists"?

17 aarongertler 11 January 2015 06:26AM

Quick summary: "Hidden rationalists" are what I call authors who espouse rationalist principles, and probably think of themselves as rational people, but don't always write on "traditional" Less Wrong-ish topics and probably haven't heard of Less Wrong.

I've noticed that a lot of my rationalist friends seem to read the same ten blogs, and while it's great to have a core set of favorite authors, it's also nice to stretch out a bit and see how everyday rationalists are doing cool stuff in their own fields of expertise. I've found many people who push my rationalist buttons in fields of interest to me (journalism, fitness, etc.), and I'm sure other LWers have their own people in their own fields.

So I'm setting up this post as a place to link to/summarize the work of your favorite hidden rationalists. Be liberal with your suggestions!

Another way to phrase this: Who are the people/sources who give you the same feelings you get when you read your favorite LW posts, but who many of us probably haven't heard of?


Here's my list, to kick things off:


  • Peter Sandman, professional risk communication consultant. Often writes alongside Jody Lanard. Specialties: Effective communication, dealing with irrational people in a kind and efficient way, carefully weighing risks and benefits. My favorite recent post of his deals with empathy for Ebola victims and is a major, Slate Star Codex-esque tour de force. His "guestbook comments" page is better than his collection of web articles, but both are quite good.
  • Doug McGuff, MD, fitness guru and author of the exercise book with the highest citation-to-page ratio of any I've seen. His big thing is "superslow training", where you perform short and extremely intense workouts (video here). I've been moving in this direction for about 18 months now, and I've been able to cut my workout time approximately in half without losing strength. May not work for everyone, but reminds me of Leverage Research's sleep experiments; if it happens to work for you, you gain a heck of a lot of time. I also love the way he emphasizes the utility of strength training for all ages/genders -- very different from what you'd see on a lot of weightlifting sites.
  • Philosophers' Mail. A website maintained by applied philosophers at the School of Life, which reminds me of a hippy-dippy European version of CFAR (in a good way). Not much science, but a lot of clever musings on the ways that philosophy can help us live, and some excellent summaries of philosophers who are hard to read in the original. (Their piece on Vermeer is a personal favorite, as is this essay on Simon Cowell.) This recently stopped posting new material, but the School of Life now collects similar work through The Book of Life.

Finally, I'll mention something many more people are probably aware of: I Am A, where people with interesting lives and experiences answer questions about those things. Few sites are better for broadening one's horizons; lots of concentrated honesty. Plus, the chance to update on beliefs you didn't even know you had.

Once more: Who are the people/sources who give you the same feeling you get when you read your favorite LW posts, but who many of us probably haven't heard of?


How Islamic terrorists reduced terrorism in the US

13 PhilGoetz 11 January 2015 05:19AM

Yesterday I was using the Global Terrorism Database to check some surprisingly low figures on what percentage of terrorist acts are committed by Muslims. (Short answer: worldwide since 2000, about 80%, rather than the 0.4 - 6% given in various sources.) But I found some odd patterns in the data for the United States. Look at this chart of terrorist acts in the US which meet GTD criteria I-III and are listed as "unambiguous":

There were over 200 bombings in the US in 1970 alone, by all sorts of political groups (the Puerto Rican Liberation Front, the Jewish Defense League, the Weathermen, the Black Panthers, anti-Castro groups, white supremacists, etc., etc.). There was essentially no religious terrorism; that came in the 80s and 90s. But let's zoom in on 1978 onward, after the crazy period we inaccurately call "the sixties". First, a count of Islamic terrorist acts worldwide:

Islamic terrorist acts worldwide
This is incomplete, because the database contains over 400 Islamic terrorist groups, but only let me select 300 groups at a time. (Al Qaeda is one of the groups not included here.) Also, this doesn't list any acts committed without direct supervision from a recognized terrorist group, nor acts whose perpetrators were not identified (about 77% of the database, estimated from a sample of 100, with the vast majority of those unknowns in Muslim countries). But we can see there's an increase after 2000.
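The 77% figure is a point estimate from a sample of 100 incidents; a quick normal-approximation sketch (my own calculation, not from the GTD) shows how much sampling uncertainty that carries:

```python
import math

# 95% confidence interval for a proportion of 77 "unknown perpetrator"
# cases out of a sample of n = 100, using the normal approximation.
p, n = 0.77, 100
se = math.sqrt(p * (1 - p) / n)          # standard error of the proportion
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"{lo:.2f} - {hi:.2f}")            # roughly 0.69 - 0.85
```

So "about 77%" is best read as "very likely somewhere around 70-85%".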

Now let's look at terrorist acts of all kinds in the US:

Terrorist acts in the US, 1970-2013

We see a dramatic drop in terrorist acts in the US after 2000. Sampling them, I found that except for less than a handful of white supremacists, there are only 3 types of terrorists still active in the US: Nutcases, animal liberation activists, and Muslims. If we exclude cases of property damage (which has never terrified me), it's basically just nutcases and Muslims.

Going by body count, it may still be an increase, because even if you exclude 9/11, just a handful of Muslim attacks still accounted for 50% of US fatalities in terrorist attacks from 2000 through 2013. But counting incidents, by 2005 there were about 1/3 as many per year as just before 2000. From 2000 to 2013 there were only 6 violent terrorist attacks in the US by non-Islamic terrorist groups that were not directed solely at property damage, resulting in 2 fatalities over those 14 years. Violent non-Islamic organized terrorism in the US has been effectively eliminated.

Some of this reduction is because we've massively expanded our counter-terrorism agencies. But if that were the explanation, given that homeland security doesn't stop all of the Islamic attacks they're focused on, surely we would see more than 6 attacks by other groups in 14 years.

Much of the reduction might be for non-obvious reasons, like whatever happened around 1980. But I think the most-obvious hypothesis is that Islamic terrorists gave terrorism a bad name. In the sixties, terrorism was almost cool. You could conceivably get laid by blowing up an Army recruiting center. Now, though, there's such a stigma associated with terrorism that even the Ku Klux Klan doesn't want to be associated with it. Islamists made terrorism un-American. In doing so, they reduced the total incidence of terrorism in America. Talk about unintended consequences.

On a completely different note, I couldn't help but notice one other glaring thing in the US data: terrorist acts attributed to "Individual" (a lone terrorist not part of an organization). I checked 200 cases from other countries and did not find one case tagged "Individual". But half of all attributed cases in the US from 2000-2013 are tagged "Individual". The lone gunman thing, where someone flips out and shoots up a Navy base, or bombs a government building because of a conspiracy theory, is distinctively American.

Perhaps Americans really are more enterprising than people of other nations. Perhaps other countries can't do the detective work to attribute acts to individuals. Perhaps their rate of non-lone wolf terrorism is so high that the lone wolf terrorists disappear in the data. Perhaps we're more accepting of "defending our freedom" as an excuse for shooting people. Perhaps psychotic delusions of being oppressed don't thrive well in countries that have plenty of highly-visible oppression. But perhaps Americans really do have a staggeringly-higher rate of mental illness than everyone else in the world. (Yes, suspicious study is suspicious, but... it is possible.)

Productivity poll: how frequently do you think you *should* check email?

3 tog 10 January 2015 04:36PM

How frequently do you think you *should* check email? You can also say how frequently you do in comments.

Personally I'm sold on thinking you should check it around once a day, not necessarily without fail. That increases focus on both email and non-email, and minimises getting sucked into distractions. But some people I know disagree. Some believe in getting notifications whenever a new email comes in.

For anyone who'd like to check email less often and uses GMail, I recommend using http://inboxpause.com/ and this full screen compose link: https://mail.google.com/mail/u/0/?ui=2&view=cm&fs=1&tf=1&shva=1

Edited to add: I'd recommend everyone at least try checking only once a day, at least for a few days, to see if you find it more productive and/or relaxing. That'd be a big enough win to make experimenting worthwhile.

View more: Next