Be comfortable with hypocrisy

31 The_Duck 08 April 2014 10:03AM

Neal Stephenson's The Diamond Age takes place several decades in the future and this conversation is looking back on the present day:

"You know, when I was a young man, hypocrisy was deemed the worst of vices,” Finkle-McGraw said. “It was all because of moral relativism. You see, in that sort of a climate, you are not allowed to criticise others-after all, if there is no absolute right and wrong, then what grounds is there for criticism?" [...]

"Now, this led to a good deal of general frustration, for people are naturally censorious and love nothing better than to criticise others’ shortcomings. And so it was that they seized on hypocrisy and elevated it from a ubiquitous peccadillo into the monarch of all vices. For, you see, even if there is no right and wrong, you can find grounds to criticise another person by contrasting what he has espoused with what he has actually done. In this case, you are not making any judgment whatsoever as to the correctness of his views or the morality of his behaviour-you are merely pointing out that he has said one thing and done another. Virtually all political discourse in the days of my youth was devoted to the ferreting out of hypocrisy." [...]

"We take a somewhat different view of hypocrisy," Finkle-McGraw continued. "In the late-twentieth-century Weltanschauung, a hypocrite was someone who espoused high moral views as part of a planned campaign of deception-he never held these beliefs sincerely and routinely violated them in privacy. Of course, most hypocrites are not like that. Most of the time it's a spirit-is-willing, flesh-is-weak sort of thing."

"That we occasionally violate our own stated moral code," Major Napier said, working it through, "does not imply that we are insincere in espousing that code."

I'm not sure if I agree with this characterization of the current political climate; in any case, that's not the point I'm interested in. I'm also not interested in moral relativism.

But the passage does point out a flaw which I recognize in myself: a preference for consistency over actually doing the right thing. I place a lot of stock--as I think many here do--in self-consistency. After all, clearly any moral code which is inconsistent is wrong. But dismissing a moral code for inconsistency or a person for hypocrisy is lazy. Morality is hard. It's easy to get a warm glow from the nice self-consistency of your own principles and mistake this for actually being right.

Placing too much emphasis on consistency led me to at least one embarrassing failure. I decided that no one who ate meat could be taken seriously when discussing animal rights: killing animals because they taste good seems completely inconsistent with placing any value on their lives. Furthermore, I myself ignored the whole concept of animal rights because I eat meat, reasoning that it would be inconsistent for me to assign animals any rights. Consistency between my moral principles and my actions--not being a hypocrite--was more important to me than actually figuring out what the correct moral principles were.

To generalize: holding high moral ideals is going to produce cognitive dissonance when you are not able to live up to those ideals. It is always tempting--for me at least--to resolve this dissonance by backing down from those high ideals. An alternative we might try is to be more comfortable with hypocrisy. 

 

Related: Self-deception: Hypocrisy or Akrasia?

Business Networking through LessWrong

28 JoshuaFox 02 April 2014 05:39PM

Is anyone interested in contacting other people in the LessWrong community to find a job, employee, business partner, co-founder, adviser, or investor?

Connections like this develop inside ethnic and religious groups, as well as among university alums or members of a fraternity. I think that LessWrong can provide the same value.

For example, LessWrong must have plenty of skilled software developers in dull jobs, who would love to work with smart, agenty rationalists. Likewise, there must be some company founders or managers who are having a very hard time finding good software developers. 

A shared commitment to instrumental and epistemic rationality should be a good starting point, not to mention a shared memeplex to help break the ice. (Paperclips! MoR!)

Besides being fun, working together with other rationalists could be a good business move.

As a side-benefit, it also has good potential to raise the sanity waterline and help further develop new rationality skills, both personally and as a community.

Naturally, such a connection is not guaranteed to produce results. But finding the right people to work with is hard, so why not try this route? And although you can cold-contact someone you've seen online, you don't know who's interested in what you have to offer, so I think more effort is needed to bootstrap such networking.

I'd like to gauge interest. (Alexandros has volunteered to help.) If you might be interested in this sort of networking, please fill out this short Google Form [Edit: Survey closed as of April 15]. I'll post an update about what sort of response we get.

Privacy: Although the main purpose of this form is to gauge interest, and other details may be needed to form good connections, the info might be enough to get some contacts going. So, we might use this information to personally connect people. We won't share the info or build any online group with it. If we get a lot of interest we may later create some sort of online mechanism, but we’ll be sure to get your permission before adding you.

---------

Edit April 6: We're still seeing that people are filling out the form, so we'll wait a week or two, and report on results.

---------

Edit April 15: See some comments on the results at this comment, below.

 

Two arguments for not thinking about ethics (too much)

28 Kaj_Sotala 27 March 2014 02:15PM

I used to spend a lot of time thinking about formal ethics, trying to figure out whether I was leaning more towards positive or negative utilitarianism, about the best courses of action in light of the ethical theories that I currently considered the most correct, and so on. From the discussions that I've seen on this site, I expect that a lot of others have been doing the same, or at least something similar.

I now think that doing this has been more harmful than it has been useful, for two reasons: there's no strong evidence to assume that this will give us very good insight into our preferred ethical theories, and more importantly, because thinking in those terms will easily lead to akrasia.

1: Little expected insight

This seems like a relatively straightforward inference from all the discussion we've had about complexity of value and the limits of introspection, so I'll be brief. I think that attempting to come up with a verbal formalization of our underlying logic and then doing what that formalization dictates is akin to "playing baseball with verbal probabilities". Any introspective access we have into our minds is very limited, and at best, we can achieve an accurate characterization of the ethics endorsed by the most verbal/linguistic parts of our minds. (At least at the moment; future progress in moral psychology or neuroscience may eventually change this.) Because our morals are also derived from parts of our brains to which we don't have such access, our theories will unavoidably be incomplete. We are also prone to excessive rationalization when it comes to thinking about morality: see Joshua Greene and others for evidence suggesting that much of our verbal reasoning is actually just post-hoc rationalization for underlying moral intuitions.

One could try to make the argument from Dutch Books and consistency, and argue that if we don't explicitly formulate our ethics and work out possible contradictions, we may end up doing things that work at cross-purposes. E.g. maybe my morality says that X is good, but I don't realize this and therefore end up doing things that go against X. This is probably true to some extent, but I think that evaluating the effectiveness of various instrumental approaches (e.g. the kind of work that GiveWell is doing) is much more valuable for people who have at least a rough idea of what they want, and that the kinds of details that formal ethics focuses on (including many of the discussions on this site, such as this post of mine) are akin to trying to calculate something to the 6th digit of precision when our instruments only measure things at 3 digits of precision.

To summarize this point, I've increasingly come to think that living one's life according to the judgments of any formal ethical system gets it backwards - any such system is just a crude attempt at formalizing our various intuitions and desires, and they're mostly useless in determining what we should actually do. To the extent that the things that I do resemble the recommendations of utilitarianism (say), it's because my natural desires happen to align with utilitarianism's recommended courses of action, and if I say that I lean towards utilitarianism, it just means that utilitarianism produces the fewest recommendations that would conflict with what I would want to do anyway.

2: Leads to akrasia

Trying to follow the formal theories can be actively harmful towards pretty much any of the goals we have, because the theories and formalizations that the verbal parts of our minds find intellectually compelling are different from the ones that actually motivate us to action.

For example, Carl Shulman comments on why one shouldn't try to follow utilitarianism to the letter:

As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.

Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.

Even if one avoided that particular failure mode, there remains the more general problem that very few people find it easy to be generally motivated by things like "what does this abstract ethical theory say I should do next". Rather, they are motivated by e.g. a sense of empathy and a desire to prevent others from suffering. But if we focus too much on constructing elaborate ethical theories, it becomes much too easy to start thinking excessively in terms of "what would this theory say I should do" and forget entirely about the original motivation that led us to formulate that theory. Then, because an abstract theory isn't intrinsically compelling in the same way that an empathic concern over suffering is, we end up with a feeling of obligation that we should do something (e.g. some concrete action that would reduce the suffering of others), but not an actual intrinsic desire to really do it. Which leads to the kinds of action that optimize towards the goal of no longer feeling that obligation, rather than towards the actual goal. This can manifest itself via things such as excessive procrastination. (See also this discussion of how "have-to" goals require willpower to accomplish, whereas "want-to" goals are done effortlessly.)

The following is an excerpt from Trying Not To Try by Edward Slingerland that makes the same point, discussing the example of an ancient king who thought himself selfish because he didn't care about his subjects, but who did care about his family, and who did spare the life of an ox when he couldn't bear to see its distress as it was about to be slaughtered:

Mencius also suggests trying to expand the circle of concern by beginning with familial feelings. Focus on the respect you have for the elders in your family, he tells the king, and the desire you have to protect and care for your children. Strengthen these feelings by both reflecting on them and putting them into practice. Compassion starts at home. Then, once you’re good at this, try expanding this feeling to the old and young people in other families. We have to imagine the king is meant to start with the families of his closest peers, who are presumably easier to empathize with, and then work his way out to more and more distant people, until he finally finds himself able to respect and care for the commoners. “One who is able to extend his kindness in this way will be able to care for everyone in the world,” Mencius concludes, “while one who cannot will find himself unable to care for even his own wife and children. That in which the ancients greatly surpassed others was none other than this: they were good at extending their behavior, that is all.”

Mencian wu-wei cultivation is about feeling and imagination, not abstract reason or rational arguments, and he gets a lot of support on this from contemporary science. The fact that imaginative extension is more effective than abstract reasoning when it comes to changing people’s behavior is a direct consequence of the action-based nature of our embodied mind. There is a growing consensus, for instance, that human thought is grounded in, and structured by, our sensorimotor experience of the world. In other words, we think in images. This is not to say that we necessarily think in pictures. An “image” in this sense could be the feeling of what it’s like to lift a heavy object or to slog in a pair of boots through some thick mud. [...]

Here again, Mencius seems prescient. The Mohists, like their modern utilitarian cousins, think that good behavior is the result of digital thinking. Your disembodied mind reduces the goods in the world to numerical values, does the math, and then imposes the results onto the body, which itself contributes nothing to the process. Mencius, on the contrary, is arguing that changing your behavior is an analog process: education needs to be holistic, drawing upon your embodied experience, your emotions and perceptions, and employing imagistic reflection and extension as its main tools. Simply telling King Xuan of Qi that he ought to feel compassion for the common people doesn’t get you very far. It would be similarly ineffective to ask him to reason abstractly about the illogical nature of caring for an ox while neglecting real live humans who are suffering as a result of his misrule. The only way to change his behavior—to nudge his wu-wei tendencies in the right direction—is to lead him through some guided exercises. We are analog beings living in an analog world. We think in images, which means that both learning and teaching depend fundamentally on the power of our imagination.

In his popular work on cultivating happiness, Jonathan Haidt draws on the metaphor of a rider (the conscious mind) trying to work together with and tame an elephant (the embodied unconscious). The problem with purely rational models of moral education, he notes, is that they try to “take the rider off the elephant and train him to solve problems on his own,” through classroom instruction and abstract principles. They take the digital route, and the results are predictable: “The class ends, the rider gets back on the elephant, and nothing changes at recess.” True moral education needs to be analog. Haidt brings this point home by noting that, as a philosophy major in college, he was rationally convinced by Peter Singer’s arguments for the moral superiority of vegetarianism. This cold conviction, however, had no impact on his actual behavior. What convinced Haidt to become a vegetarian (at least temporarily) was seeing a video of a slaughterhouse in action—his wu-wei tendencies could be shifted only by a powerful image, not by an irrefutable argument.

My personal experience of late has also been that thinking in terms of "what does utilitarianism dictate I should do" produces recommendations that feel like external obligations, "shoulds" that are unlikely to get done; whereas thinking about e.g. the feelings of empathy that motivated me to become utilitarian in the first place produces motivations that feel like internal "wants". I was very close to (yet another) burnout and serious depression some weeks back: a large part of what allowed me to avoid it was that I stopped entirely asking the question of what I should do, and began to focus entirely on what I want to do, including the question of which of my currently existing wants are ones that I'd wish to cultivate further. (Of course there are some things like doing my tax returns that I do have to do despite not wanting to, but that's a question of necessity, not ethics.) It's far too early to say whether this actually leads to increased productivity in the long term, but it feels great for my mental health, at least for the time being.

How long will Alcor be around?

27 Froolow 17 April 2014 03:28PM

The Drake equation for cryonics is pretty simple: work out all the things that need to happen for cryonics to succeed one day, estimate the probability of each thing occurring independently, then multiply all those numbers together. Here’s one example of the breakdown from Robin Hanson. According to the 2013 LW survey, the average LW estimate of the probability that cryonics will be successful for someone frozen today is 22.8%, assuming no major global catastrophe. That seems startlingly high to me – I put the probability at at least two orders of magnitude lower. I decided to unpick some of the assumptions behind that estimate, particularly focussing on assumptions which I could model.
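To make the arithmetic concrete, here is a minimal sketch of the multiplication involved; the component names and probabilities below are purely illustrative placeholders, not Hanson's breakdown or the survey's numbers:

```python
# Hypothetical Cryo-Drake calculation: multiply independent component
# probabilities to get an overall chance of revival. Every number below
# is a made-up placeholder, for illustration only.
p_components = {
    "preservation is good enough":     0.5,
    "revival tech is ever developed":  0.5,
    "company survives until revival":  0.5,
    "no major global catastrophe":     1.0,  # excluded by assumption, as in the survey question
}

p_success = 1.0
for name, p in p_components.items():
    p_success *= p

print(f"P(revival) = {p_success:.3f}")  # 0.125 with these placeholders
```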

Every breakdown includes a component for ‘the probability that the company you freeze with goes bankrupt’ for obvious reasons. In fact, the probability of bankruptcy (and global catastrophe) are particularly interesting terms because they are the only terms which are ‘time dependent’ in the usual Drake equation. What I mean by this is that if you know your body will be frozen intact forever, then it doesn’t matter to you when effective unfreezing technology is developed (except to the extent you might have a preference to live in a particular time period). By contrast, if you know safe unfreezing techniques will definitely be developed one day, it matters very much to you that this occurs sooner rather than later, because if you unfreeze before the development of these techniques then they are totally wasted on you.

The probability of bankruptcy is also very interesting because – I naively assumed last week – we must have excellent historical data on the probability of bankruptcy given the size, age and market penetration of a given company. From this – I foolishly reasoned – we must be able to calculate the actual probability of the ‘bankruptcy’ component in the Cryo-Drake equation and slightly update our beliefs.

I began by searching for the expected lifespan of an average company and got two estimates which I thought would be a useful upper- and lower-bound. Startup companies have an average lifespan of four years. S&P 500 companies have an average lifespan of fifteen years. My logic here was that startups must be the most volatile kind of company, S&P 500 must be the least volatile and cryonics firms must be somewhere in the middle. Since the two sources only report the average lifespan, I modelled the average as a half-life. The results really surprised me; take a look at the following graph:

(http://imgur.com/CPoBN9u.jpg)

Even assuming cryonics firms are as well managed as S&P 500 companies, a 22.8% chance of success depends on every single other factor in the Drake equation being absolutely certain AND unfreezing technology being developed within 37 years.
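Here is a minimal sketch of the half-life model behind that graph, treating the reported average lifespans directly as half-lives as described above; the exact parameters behind the original graph may differ slightly, so the outputs are indicative only:

```python
import math

def survival_probability(years, half_life):
    """Chance a company still exists after `years`, treating the reported
    average lifespan as a half-life: S(t) = 0.5 ** (t / half_life)."""
    return 0.5 ** (years / half_life)

def years_until(target_probability, half_life):
    """Invert the model: years until survival probability falls to the target."""
    return half_life * math.log2(1 / target_probability)

STARTUP_HALF_LIFE = 4.0  # years: average startup lifespan, read as a half-life
SP500_HALF_LIFE = 15.0   # years: average S&P 500 lifespan, read as a half-life

for t in (10, 20, 40, 80):
    print(f"{t:3d} years: startup {survival_probability(t, STARTUP_HALF_LIFE):.4f}, "
          f"S&P 500 {survival_probability(t, SP500_HALF_LIFE):.4f}")

# Years until an S&P-500-grade firm's survival probability falls to 22.8%.
# This reading gives roughly 32 years; the post's graph reads off ~37, so
# the author's exact parameters evidently differ a little.
print(years_until(0.228, SP500_HALF_LIFE))
```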

But I noticed I was confused; Alcor has been around forty-ish years. Assuming it started life as a small company, the chance of that happening was one in ten thousand. That both Alcor AND The Cryonics Institute have been successfully freezing people for forty years seems literally beyond belief. I formed some possible hypotheses to explain this:

  1. Many cryo firms have been set up, and I only know about the successes (a kind of anthropic argument)
  2. Cryonics firms are unusually well-managed
  3. The data from one or both of my sources was wrong
  4. Modelling an average life expectancy as a half-life was wrong
  5. Some extremely unlikely event that is still more likely than the one in a billion chance my model predicts – for example the BBC article is an April Fool’s joke that I don’t understand.

I’m pretty sure I can rule out 1; if many cryo firms were set up I’d expect to see four lasting twenty years and eight lasting ten years, but in fact we see one lasting about five years and two lasting indefinitely. We can also probably rule out 2; if cryo firms were demonstrably better managed than S&P 500 companies, the CEO of Alcor could go and run Microsoft and use the pay differential to support cryo research (if he was feeling altruistic). Since I can’t do anything about 5, I decided to focus my analysis on 3 and 4. In fact, I think 3 and 4 are both correct explanations; my source for the S&P 500 companies counted dropping out of the S&P 500 as a company ‘death’, when in fact you might drop out because you got taken over, because your industry became less important (but kept existing) or because other companies overtook you – your company can’t do anything about Facebook or Apple displacing it from the S&P 500, but Facebook and Apple don’t make it any more likely to fail. Additionally, modelling as a half-life must have been flawed; a company that has survived one hundred years and a company that has survived one year are not equally likely to collapse!

Consequently I searched Google Scholar for a proper academic source. I found one, but I should introduce the following caveats:

  1. It is UK data, so may not be comparable to the US (my understanding is that the US is a lot more forgiving of a business going bankrupt, so the UK businesses may liquidate slightly less frequently).
  2. It uses data from 1980. As well as being old data, there are specific reasons to believe that this time period overestimates the true survival of companies. For example, the mid-1980s was an economic boom in the UK and 1980-1985 misses both major UK financial crashes of modern times (Black Wednesday and the Sub-Prime Crash). If the BBC is to be believed, the trend has been for companies to go bankrupt more and more frequently since the 1920s.

I found it really shocking that this question was not better studied. Anyway, the key table that informed my model was this one, which unfortunately seems to break the website when I try to embed it. The source is Dunne, Paul, and Alan Hughes. "Age, size, growth and survival: UK companies in the 1980s." The Journal of Industrial Economics (1994): 115-140.

You see on the left the size of the company in 1980 (£1 in 1980 is worth about £2.5 now). On the top is the size of the company in 1985, with additional columns for ‘taken over’, ‘bankrupt’ or ‘other’. Even though a takeover might signal the end of a particular product line within a company, I have only counted bankruptcies as representing a threat to a frozen body; it is unlikely Alcor will be bought out by anyone unless they have an interest in cryonics.

The model is a Discrete Time Markov Chain analysis in five-year increments. What this means is that I start my hypothetical cryonics company at <£1m and then allow it to either grow or go bankrupt at the rate indicated in the article. After the first period I look at the new size of the company and allow it to grow, shrink or go bankrupt in accordance with the new probabilities. The only slightly confusing decision was what to do with takeovers. In the end I decided to ignore takeovers completely, and redistribute the probability mass they represented to all other survival scenarios.
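Here is a sketch of that kind of computation, assuming a simplified three-bucket size classification; the transition probabilities below are invented placeholders rather than the actual Dunne and Hughes figures, with the takeover probability mass already folded back into the surviving states:

```python
import numpy as np

# States: three company-size buckets plus an absorbing "bankrupt" state.
# The per-five-year transition probabilities are PLACEHOLDERS, not the
# real Dunne & Hughes (1994) figures; takeover probability is assumed to
# have been redistributed across the surviving states already.
states = ["<£1m", "£1m-£10m", ">£10m", "bankrupt"]

P = np.array([
    #  <£1m  £1m-£10m  >£10m  bankrupt
    [0.60,  0.20,     0.02,  0.18],  # from <£1m
    [0.10,  0.65,     0.15,  0.10],  # from £1m-£10m
    [0.02,  0.15,     0.78,  0.05],  # from >£10m
    [0.00,  0.00,     0.00,  1.00],  # bankruptcy is absorbing
])

# Start as a small (<£1m) company and step forward in five-year increments.
dist = np.array([1.0, 0.0, 0.0, 0.0])
for step in range(1, 21):  # 20 steps of 5 years = 100 years; extend as needed
    dist = dist @ P
    print(f"{step * 5:3d} years: P(not bankrupt) = {1 - dist[-1]:.3f}")
```

The qualitative effect is that surviving companies drift into larger size buckets with lower bankruptcy rates, so the aggregate survival curve flattens out rather than decaying at a constant rate, which is exactly what distinguishes this model from the naive half-life one.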

The results are astonishingly different:

(http://imgur.com/CkQirYD.jpg)

Now your body can remain frozen for 415 years and still have a 22.8% chance of revival (assuming all other probabilities are certain). Perhaps more usefully, if you estimate the year you expect revival to occur you can read across the x axis to find the probability that your cryo company will still exist by then. For example in the OvercomingBias link above, Hanson estimates that this will occur in 2090, meaning he should probably assign something like a 0.65 chance to the probability that his cryo company is still around.

Remember you don’t actually need to estimate the year YOUR revival will occur, but only the year the first successful revival proves that cryogenically frozen bodies are ‘alive’ in a meaningful sense and therefore receive protection under the law in case your company goes bankrupt. In fact, you could instead estimate the year Congress passes a ‘right to not-death’ law which would protect your body in the event of a bankruptcy even before routine unfreezing, or the year when brain-state scanning becomes advanced enough that it doesn’t matter what happens to your meatspace body because a copy of your brain exists on the internet.

My conclusion is that the survival of your cryonics firm is a lot more likely than the average person in the street thinks, but probably a lot less likely than you think if you are strongly into cryonics. This is probably not news to you; most of you will be aware of over-optimism bias, and have tried to correct for it. Hopefully these concrete numbers will be useful next time you consider the Cryo-Drake equation and the net present value of investing in cryonics.

The Problem of "Win-More"

26 katydee 26 March 2014 06:32PM

In Magic: the Gathering and other popular card games, advanced players have developed the notion of a "win-more" card. A "win-more" card is one that works very well, but only if you're already winning. In other words, it never helps turn a loss into a win, but it is very good at turning a win into a blowout. This type of card seems strong at first, but since these games usually do not use margin of victory scoring in tournaments, they end up being a trap-- instead of using cards that convert wins into blowouts, you want to use cards that convert losses into wins.

This concept is useful and important and you should never tell a new player about it, because it tends to make them worse at the game. Without a more experienced player's understanding of core concepts, it's easy to make mistakes and label cards that are actually good as being win-more.

This is an especially dangerous mistake to make because it's relatively uncommon for an outright bad card to seem like a win-more card; win-more cards are almost always cards that look really good at first. That means that if you're too wary of win-more cards, you'll end up misclassifying good cards as bad. Misclassifying bad cards as good is relatively easy to deal with, because you'll use them and see that they aren't good; misclassifying good cards as bad is much more dangerous, because you won't play them and therefore won't get the evidence you need to update your position.

I call this the "win-more problem." Concepts that suffer from the win-more problem are those that-- while certainly useful to an advanced user-- are misleading or net harmful to a less skillful person. Further, they are wrong or harmful in ways that are difficult to detect, because they screen off feedback loops that would otherwise allow someone to realize the mistake.

Gunshot victims to be suspended between life and death [link]

24 Dr_Manhattan 27 March 2014 04:33PM

http://www.newscientist.com/article/mg22129623.000-gunshot-victims-to-be-suspended-between-life-and-death.html?full=true

- First "official" program to practice suspended animation

- The article naturally goes on to ask whether longer SA (months, years) is possible 

- Amazing quote: "Every day at work I declare people dead. They have no signs of life, no heartbeat, no brain activity. I sign a piece of paper knowing in my heart that they are not actually dead. I could, right then and there, suspend them. But I have to put them in a body bag. It's frustrating to know there's a solution."

- IMO this, if successful (I hope!), will go a long way toward bridging the emotional gap for cryonics

Items to Have In Case of Emergency...Maybe.

22 daenerys 03 April 2014 12:23AM

This post is inspired by a recent comment thread on my Facebook. I asked people to respond with whether or not they kept fire/lock boxes in their homes for their important documents (mainly to prove to a friend that this is a Thing People Do). It was pretty evenly divided, with slightly more people having them than not. The interesting pattern I noticed was that almost ALL of my non-rationality community friends DID have them, and almost NONE of my rationality community friends did, and some hadn't even considered it.

This could be because getting a lock box is not an optimal use of time or money, OR it could be because rationalists often overlook the mundane household-y things more than the average person. I'm actually not certain which it is, so I'm writing this post presenting the case for keeping certain emergency items, in the hope that either I'll get some interesting points I haven't thought of yet for why you shouldn't prep, OR I'll get even better ideas in the comments.

General Case

Many LWers are concerned about x-risks that have a small chance of causing massive damage. We may or may not see this occur in our lifetime. However, there are small problems that occur every 2-3 years or so (extended blackout, being snowed in, etc), and there are mid-sized catastrophes that you might see a couple times in your life (blizzards, hurricanes, etc). It is likely that at least once in your life you will be snowed in your house and the pipes will burst or freeze (or whatever the local equivalent is, if you live in a warmer climate). Having the basic preparations ready for these occurrences is low cost (many minor emergencies require a similar set of preparations), and high payoff. 

Medicine and Hospitality

This category is so minor, you probably don't consider it to be an "emergency", but it's still A Thing To Prepare For. It really sucks having to go to the store when you're sick because you don't already have the medicine you need at hand. It's better to keep the basics always available, just in case. You, or a guest, are likely to be grateful that you have these on hand. Even if you personally never get sick, I consider a well-stocked medicine cabinet to be a point of hospitality. If you have people over to your place with any frequency, it is nice to have:

Medicine

  • Pain Reliever (ibuprofen, NSAID)
  • Zyrtec (Especially if you have cats. Guests might be allergic!)
  • Antacids, Chewable Pepto, Gas-X (Especially if you have people over for food)
  • Multipurpose contact solution (getting something in your contact without any solution nearby is both rare and awful)
  • Neosporin/bandaids (esp. if your cats scratch :P)

Toiletries

  • Spare toothbrush (esp. if you might have a multi-day guest)
  • Single use disposable toothbrushes (such as Wisp). These are also good to carry with you in your backpack or purse.
  • Pads/tampons (Yes, even if you're a guy. They should be somewhere obvious such as under the sink, so that your guest doesn't need to ask)

Of course, you can also go all out with your First Aid kit, and also include less common items like epi pens, bandages, etc.

Vehicle Kits

The stuff you keep at home isn't going to be very helpful if you have a minor emergency while travelling. Some things that are useful to keep in your car include:

  • Blanket
  • Water
  • Protein/ granola bar
  • Jumper Cables
  • Spare Tire and jack
  • If you get frequent headaches or the like, you might also want to keep your preferred pain reliever or whatnot in the car

Minor Catastrophe Preparation

These are somewhat geography dependent. Adjust for whatever catastrophes are common in your area. There are places where if you don't have 4 wheel drive, you're just not going to be able to leave your house during a snowstorm. There are places where tornadoes or earthquakes are common. There are places where a bad hurricane rolls through every couple years. If you're new to an area, make sure you know what the local "regular" emergency is.

Some of these are a bit of a harder sell, I think. 

  • Flashlights (that you can find in the dark)
  • Spare batteries
  • Candles/Lighter
  • Water (ready.gov says one gallon per person per day, and have enough for 3 days)
  • Non perishable food (ideally that doesn't need to be cooked, e.g. canned goods)
  • Manual can opener
  • Fire Extinguisher
  • Action: check out ready.gov for the emergencies that are most common for your area, and read their recommendations

Bigger Preparations

This list goes a bit beyond the basics:
  • A "Go Bag" (something pre-packed that you can grab and go)
  • A fire-safe lock box (not only does this protect your documents, but it also helps with organization: there is an obvious place where these important documents go, not just "somewhere in that file drawer...or somewhere else")
  • Back up your data in the cloud
  • Moar water, moar food

 

Recommendations for donating to an anti-death cause

20 fowlertm 09 April 2014 02:56AM

I've recently had the bad luck of having numerous people close to me die. Though I've wanted to contribute to anti-aging and anti-death research for a while, I'm only now in the position of being stable and materially well-off enough to throw around semi-serious cash.

Who should I donate to? I don't want to do anything with cryonics yet; I haven't given cryonics enough thought to be convinced it'd be worth the money. But I was considering the Methuselah foundation.

Suggestions?

Explanations for Less Wrong articles that you didn't understand

18 Kaj_Sotala 31 March 2014 11:19AM

ErinFlight said:

I'm struggling to understand anything technical on this website. I've enjoyed reading the sequences, and they have given me a lot to think about. Still, I've read the introduction to Bayes theorem multiple times, and I simply can't grasp it. Even starting at the very beginning of the sequences I quickly get lost because there are references to programming and cognitive science which I simply do not understand.

Thinking about it, I realized that this might be a common concern. There are probably plenty of people who've looked at various more-or-less technical or jargony Less Wrong posts, tried understanding them, and then given up (without posting a comment explaining their confusion).

So I figured that it might be good to have a thread where you can ask for explanations for any Less Wrong post that you didn't understand and would like to, but don't want to directly comment on for any reason (e.g. because you're feeling embarrassed, because the post is too old to attract much traffic, etc.). In the spirit of various Stupid Questions threads, you're explicitly encouraged to ask even for the kinds of explanations that you feel you "should" be able to get yourself, or where you feel like you could get it if you just put in the effort (but then never did).

You can ask to have some specific confusing term or analogy explained, or to get the main content of a post briefly summarized in plain English and without jargon, or anything else. (Of course, there are some posts that simply cannot be explained in non-technical terms, such as the ones in the Quantum Mechanics sequence.) And of course, you're encouraged to provide explanations to others!

Introducing Skillshare.im

17 peter_hurford 28 March 2014 09:27PM

by Patrick Brinich-Langlois and Ozzie Gooen

-

Communities once kept our ancestors from being torn apart by mountain lions and tyrannosaurus rexes. Dinosaur violence has declined greatly since the Cretaceous, but the world has become more complex and interconnected. Communities remain essential.

Effective altruists have a lot to offer one another. But we're geographically dispersed, so it's hard to know whom to ask for help. Skillshare.im is built to fix this.

Skillshare.im is a place for effective altruists to share their skills, items, and couches with one another.

-

Offer skills or things that you're willing to share. Request items that other people have offered. Here are a few things people have offered on the site:

  • access to academic papers
  • advice on fundraising, careers, nutrition, productivity, startups, investments, etc.
  • French translation (two people!)
  • math tutoring
  • lodging in Switzerland, the Bay Area, London, Melbourne, and Oxford

-

As of this writing, we already have 59 offers from 55 people. With your help, we can make it 60 offers from 56 people!

Why use Skillshare.im, instead of getting the things you need the normal way? Certain things, like career advice or study buddies, can be hard to get. Even if you can find someone who has what you're looking for, you might enjoy the opportunity to build relationships with other altruists. Plus, by participating in Skillshare.im, you show that the community of do-gooders is welcoming and supportive, qualities that may draw in new people.

You can be notified of new offers and requests by Twitter or RSS. As with all .impact software, the source code is available on GitHub. We use a publicly accessible Trello board to track bugs and features.

-

We'd love to hear what you think about the site. Is it awesome, or a horrifically inefficient use of our resources? What could be improved? Send us an e-mail or leave a comment.

Human capital or signaling? No, it's about doing the Right Thing and acquiring karma

16 VipulNaik 20 April 2014 09:04PM

There's a huge debate among economists of education on whether the positive relationship between educational attainment and income is due to human capital, signaling, or ability bias. But what do the students themselves believe? Bryan Caplan has argued that students' actions (for instance, their not sitting in on classes for free and their rejoicing at class cancellation) suggest a belief in the signaling model of education. At the same time, he notes that students may not fully believe the signaling model, and that shifting in the direction of that belief might improve individual educational attainment.

Still, something seems wrong about the view that most people believe in the signaling model of education. While their actions are consistent with that view, I don't think they frame it quite that way. I don't think they usually think of it as "education is useless, but I'll go through it anyway because that allows me to signal to potential employers that I have the necessary intelligence and personality traits to succeed on the job." Instead, I believe that people's model of school education is linked to the idea of karma: they do what the System wants them to do, because that's their duty and the Right Thing to do. Many of them also expect that if they do the Right Thing, and fulfill their duties well, then the System shall reward them with financial security and a rewarding life. Others may take a more fatalistic stance, saying that it's not up to them to judge what the System has in store for them, but they still need to do the Right Thing.

The case of the devout Christian

Consider a reasonably devout Christian who goes to church regularly. For such a person, going to church, and living a life in accordance with (his understanding of) Christian ethics is part of what he's supposed to do. God will take care of him as long as he does his job well. In the long run, God will reward good behavior and doing the Right Thing, but it's not for him to question God's actions.

Such a person might look bemused if you asked him, "Are you a practicing Christian because you believe in the prudential value of Christian teachings (the "human capital" theory) or because you want to give God the impression that you are worthy of being rewarded (the "signaling" theory)?" Why? Partly, because the person attributes omniscience, omnipotence, and omnibenevolence to God, so that the very idea of having a conceptual distinction between what's right and how to impress God seems wrong. Yes, he does expect that God will take care of him and reward him for his goodness (the "signaling" theory). Yes, he also believes that the Christian teachings are prudent (the "human capital" theory). But to him, these are not separate theories but just parts of the general belief in doing right and letting God take care of the rest.

Surely not all Christians are like this. Some might be extreme signalers: they may be deliberately trying to optimize for (what they believe to be) God's favor and maximizing the probability of making the cut to Heaven. Others might believe truly in the prudence of God's teachings and think that any rewards that flow are because the advice makes sense at the worldly level (in terms of the non-divine consequences of actions) rather than because God is impressed by the signals they're sending him through those actions. There are also a number of devout Christians I personally know who, regardless of their views on the matter, would be happy to entertain, examine, and discuss such hypotheses without feeling bemused. Still, I suspect the majority of Christians don't separate the issue, and many might even be offended at second-guessing God.

Note: I selected Christianity and a male subject just for ease of description; similar ideas apply to other religions and to women. Also note that in theory, some religious sects emphasize free will and others emphasize determinism more, but it's not clear to me how much effect this has on people's mental models on the ground.

The schoolhouse as church: why human capital and signaling sound ridiculous

Just as many people believe in following God's path and letting Him take care of the rewards, many people believe that by doing the Right Thing educationally (being a Good Student and jumping through the appropriate hoops through correctly applied sincere effort) they're doing their bit for the System. These people might be bemused at the cynicism involved in separating out "human capital" and "signaling" theories of education.

Again, not everybody is like this. Some people are extreme signalers: they openly claim that school builds no useful skills, but grades are necessary to impress future employers, mates, and society at large. Some are human capital extremists: they openly claim that the main purpose is to acquire a strong foundation of knowledge, and they continue to do so even when the incentive from the perspective of grades is low. Some are consumption extremists: they believe in learning because it's fun and intellectually stimulating. And some strategically combine these approaches. Yet, none of these categories describe most people.

I've had students who worked considerably harder on courses than the bare minimum effort needed to get an A. This is despite the fact that they aren't deeply interested in the subject, don't believe it will be useful in later life, and aren't likely to remember it for too long anyway. I think that the karma explanation fits best: people develop an image of themselves as Good Students who do their duty and fulfill their role in the system. They strive hard to fulfill that image, often going somewhat overboard beyond the bare minimum needed for signaling purposes, while still not trying to learn in ways that optimize for human capital acquisition. There are of course many other people who claim to aspire to the label of Good Student because it's the Right Thing, and consider it a failing of virtue that they don't currently qualify as Good Students. Of course, that's what they say, and social desirability bias might play a role in individuals' statements, but the very fact that people consider such views socially desirable indicates the strong societal belief in being a Good Student and doing one's academic duty.

If you presented the signaling hypothesis to self-identified Good Students they'd probably be insulted. It's like telling a devout Christian that he's in it only to curry favor with God. At the same time, the human capital hypothesis might also seem ridiculous to them in light of their actual actions and experiences: they know they don't remember or understand the material too well. Thinking of it as doing their bit for the System because it's the Right Thing to do seems both noble and realistic.

The impressive success of this approach

At the individual level, this works! Regardless of the relative roles of human capital, signaling, and ability bias, people who go through higher levels of education and get better grades tend to earn better and get more high-status jobs than others. People who transform themselves from being bad students to good students often see rewards both academically and in later life in the form of better jobs. This could again be human capital, signaling, or ability bias. The ability bias explanation is plausible because it requires a lot of ability to turn from a bad student into a good student, about the same as it does to be a good student from the get-go or perhaps even more because transforming oneself is a difficult task.

Can one do better?

Doing what the System commands can be reasonably satisfying, and even rewarding. But for many people, and particularly for the people who do the most impressive things, it's not necessarily the optimal path. This is because the System isn't designed to maximize every individual's success or life satisfaction, or even to optimize things for society as a whole. It's based on a series of adjustments driven by squabbling between competing interests. It could be a lot worse, but a motivated person could do better.

Also note that being a Good Student is fundamentally different from being a Good Worker. A worker, whether directly serving customers or reporting to a boss, is producing stuff that other people value. So, at least in principle, being a better worker translates to more gains for the customers. This means that a Good Worker is contributing to the System in a literal sense, and by doing a better job, directly adds more value. But this sort of reasoning doesn't apply to Good Students, because the actions of students qua students aren't producing direct value. Their value is largely their consumption value to the students themselves and their instrumental value to the students' current and later life choices.

Many of the qualities that define a Good Student are qualities that are desirable in other contexts as well. In particular, good study habits are valuable not just in school but in any form of research that relies on intellectual comprehension and synthesis (this may be an example of the human capital gains from education, except that I don't think most students acquire good study habits). So, one thing to learn from the Good Student model is good study habits. General traits of conscientiousness, hard work, and willingness to work beyond the bare minimum needed for signaling purposes are also valuable to learn and practice.

But the Good Student model breaks down when it comes to acquiring perspective about how to prioritize between different subjects, and how to actually learn and do things of direct value. A common example is perfectionism. The Good Student may spend hours practicing calculus to get a perfect score in the test, far beyond what's necessary to get an A in the class or an AP BC 5, and yet not acquire a conceptual understanding of calculus or learn calculus in a way that would stick. Such a student has acquired a lot of karma, but has failed from both the human capital perspective (in not acquiring durable human capital) and the signaling perspective (in spending more effort than is needed for the signal). In an ideal world, material would be taught in a way that one can score highly on tests if and only if it serves useful human capital or signaling functions, but this is often not the case.

Thus, I believe it makes sense to critically examine the activities one is pursuing as a student, and ask: "does this serve a useful purpose for me?" The purpose could be human capital, signaling, pure consumption, or something else (such as networking). Consider the following four extreme answers a student may give to why a particular high school or college course matters:

  • Pure signaling: A follow-up might be: "how much effort would I need to put in to get a good return on investment as far as the signaling benefits go?" And then one has to stop at that level, rather than overshoot or undershoot.
  • Pure human capital: A follow-up might be: "how do I learn to maximize the long-term human capital acquired and retained?" In this world, test performance matters only as feedback rather than as the ultimate goal of one's actions. Rather than trying to practice for hours on end to get a perfect score on a test, more effort will go into learning in ways that increase the probability of long-term retention in ways that are likely to prove useful later on. (As mentioned above, in an ideal world, these goals would converge).
  • Pure consumption: A follow-up might be: "how much effort should I put in in order to get the maximum enjoyment and stimulation (or other forms of consumptive experience), without feeling stressed or burdened by the material?"
  • Pure networking: A follow-up might be: "how do I optimize my course experience to maximize the extent to which I'm able to network with fellow students and instructors?"

One might also believe that some combination of these explanations applies. For instance, a mixed human capital-cum-signaling explanation might recommend that one study all topics well enough to get an A, and then concentrate on acquiring a durable understanding of the few subtopics that one believes are needed for long-term knowledge and skills. For instance, a mastery of fractions matters a lot more than a mastery of quadratic equations, so a student preparing for a middle school or high school algebra course might choose to learn both at a basic level but get a really deep understanding of fractions. Similarly, in calculus, having a clear idea of what a function and derivative means matters a lot more than knowing how to differentiate trigonometric functions, so a student may superficially understand all aspects (to get the signaling benefits of a good grade) but dig deep into the concept of functions and the conceptual definition of derivatives (to acquire useful human capital). By thinking clearly about this, one may realize that perfecting one's ability to differentiate complicated trigonometric function expressions or integrate complicated rational functions may not be valuable from either a human capital perspective or a signaling perspective.

Ultimately, the changes wrought by consciously thinking about these issues are not too dramatic. Even though the System is suboptimal, it's locally optimal in small ways and one is constrained in one's actions in any case. But the changes can nevertheless add up to lead one to be more strategic and less stressed, do better on all fronts (human capital, signaling, and consumption), and discover opportunities one might otherwise have missed.

Community overview and resources for modern Less Wrong meetup organisers

16 BraydenM 04 April 2014 08:53PM

I've been travelling around the US for the past month since arriving from Australia, and have had the chance to see how a number of different Less Wrong communities operate. As a departing organiser for the Melbourne Less Wrong community, it has been interesting to make comparisons between the different Less Wrong groups all over the US, and I suspect sharing the lessons learned by different communities will benefit the global movement.

For aspiring organisers, or leaders looking at making further improvements to their community, there already exists an excellent meetup organisers handbook, list of meetups, and NYC case study. I'd also recommend one super useful ability: rapid experimentation. This is a relatively low cost way to find out exactly what formats of events attract the most people and are the most beneficial. Once you know how to win, spam it! This ability is sometimes even better than just asking people what they want out of the community, but you should probably do both.

I'll summarise a few types of meetup that I have seen here. Please feel free to help out by adding descriptions of other types of events you have seen, or variations on the ones already posted if you think there is something other communities could learn. 

Public Practical Rationality Meetups (Melbourne)

Held monthly on a Friday in Matthew Fallshaw's offices at TrikeApps. Advertised on Facebook, LessWrong, and the Melbourne LW Mailing List. About 25-40 attendees. Until January, these were also advertised publicly on meetup.com, but since then the format has changed significantly. The audience was 50% Less Wrongers and 50% newcomers, so this served as our outreach event.

6:30pm-7:30pm: Doors open; usually most people arrive around 7:15pm

7:30pm sharp-9:00pm: Content introduced. Usually around 3 topics have been prepared by 3 separate Less Wrongers, for discussion in groups of about 10 people each. After 30 minutes the groups rotate, so the presenters present the same thing multiple times. Topics have included: effective communication, giving and receiving feedback, sequence summaries, cryonics, habit formation, etc.

9:00pm - Late: Unstructured socialising, with occasional 'rationality therapy' where a few friends get together to think about a particular issue in someone's life in detail. Midnight souvlaki runs are a tradition.

Monthly Social Games Meetup (Melbourne)

Held in a private residence on a Friday, close to central city public transport. Advertised on Facebook, LessWrong, and the Melbourne LW Mailing List. About 15-25 attendees. Snacks provided by the host.

6:30pm - Late: People show up whenever and there are lots of great conversations. Mafia, (science themed) Zendo, and a variety of board games are popular, but the majority of the night is usually spent talking about what people have learned or read recently. There are enough discussions happening that it is usually easy to find an interesting group to join. Delivery dinner is often ordered, and many people stay quite late.

Large public salons (from Rafael Cosman, Stanford University)

Held on campus in a venue provided by the university. Advertised on a custom mailing list, and presumably Facebook/word of mouth. Audience is mostly unfamiliar with Less Wrong material, and this event has not yet officially become associated with Less Wrong, but Rafael is in the process of getting a spin-off LW specific meetup happening.

7pm-7:30pm: Guests trickle in. Light background music helps inform the first arrivals that they are indeed at the right place.

7:30pm-7:45pm: Introductions, covering 1. Who you are 2. One thing that people should talk to you about (e.g. "You should talk to me about Conway's Game of Life") 3. One thing that people could come and do with you sometime (e.g. "Come and join me for yoga on Sunday mornings")

7:45pm-9:30pm: Short talks on a variety of topics. At the end of a presentation, instead of tossing it open for questions, everyone comes up to give the speaker a high-five, and then the group immediately enters unstructured discussion for 5-10 minutes. This allows people with pressing questions to go up and ask the speaker, but also allows everyone else to break out to mingle rather than being passive.

Still to come: New York, Austin, and the SF East and South Bay meetup formats.

How do you approach the problem of social discovery?

15 InquilineKea 21 April 2014 09:05PM

As in, how do you find and meet the right people to talk to? Presumably, they would have personality fit with you, and be high on both intelligence and openness. Furthermore, they would be at the point in their lives where they are willing to spend time with you (although sometimes you can learn a lot from people simply by friending them on Facebook and observing their feeds from time to time).

Historically, I've made myself extremely stalkable on the Internet. In retrospect, I believe that this "decision" is one of the very best decisions I've ever made in my life, and it has made me better at social discovery than most people I know, despite my social anxiety and Asperger's. In fact, if a more extroverted non-Aspie did the same thing, I think they could do WONDERS with developing an online profile.

I've also come to realize that social discovery is often more rewarding when done with teenagers. You can do so much to impact teenagers, and they often tend to be a lot more open to your ideas/musings (as long as you're responsible).

But I've wondered - how else have you done it? Especially in real life? What are some other questions you ask with respect to social discovery? I tend to avoid real life for social discovery simply because it's extremely hit-and-miss, but I've discovered (from Richard Florida's books) that the Internet often strengthens real-life interaction because it makes it so much easier to discover other people in real life (and then it's in real life when you can really get to know people).

The Cold War divided Science

15 Douglas_Knight 05 April 2014 11:10PM

What can we learn about science from the divide during the Cold War?

I have one example in mind: America held that coal and oil were fossil fuels, the stored energy of the sun, while the Soviets held that they were the result of geologic forces applied to primordial methane.

At least one side is thoroughly wrong. This isn't a politically charged topic like sociology, or even biology, but a physical science where people are supposed to agree on the answers. This isn't a matter of research priorities, where one side doesn't care enough to figure things out, but a topic that both sides saw to be of great importance, and where they both claimed to apply their theories. On the other hand, Lysenkoism seems to have resulted from the practical importance of crop breeding.

First of all, this example supports the claim that there really was a divide, that science was disconnected into two poorly communicating camps. It suggests that when the two sides reached the same results on other topics, they did so independently. Even if we cannot learn from this example, it suggests that we may be able to learn from other consequences of dividing the scientific community.

My understanding is that although some Russian language research papers were available in America, they were completely ignored and the scientists failed to even acknowledge that there was a community with divergent opinions. I don't know about the other direction.

Some questions:

  • Are there other topics, ideally in the physical sciences, on which such a substantial disagreement persisted for decades (not necessarily between these two parties)?
  • Did the Soviet scientists know that their American counterparts disagreed?
  • Did Warsaw Pact (e.g., Polish) scientists generally agree with the Soviets about the origin of coal and oil? Were they aware of the American position? Did other Western countries agree with America? How about other countries, such as China and Japan?
  • What are the current Russian beliefs about coal and oil? I tried running Russian Wikipedia through Google Translate and it seemed to support the biogenic theory. (Right?) Has there been a reversal among Russian scientists? When? Or does Wikipedia represent foreign opinion? If a divide remains, does it follow the Iron Curtain, or some new line?
  • Have I missed some detail that would make me not classify this as an honest disagreement between two scientific establishments?
  • Finally, the original question: what can we learn about the institution of science?

The effect of effectiveness information on charitable giving

14 Unnamed 15 April 2014 04:43PM

A new working paper by economists Dean Karlan and Daniel Wood, The Effect of Effectiveness: Donor Response to Aid Effectiveness in a Direct Mail Fundraising Experiment.

The Abstract:

We test how donors respond to new information about a charity’s effectiveness. Freedom from Hunger implemented a test of its direct marketing solicitations, varying letters by whether they include a discussion of their program’s impact as measured by scientific research. The base script, used for both treatment and control, included a standard qualitative story about an individual beneficiary. Adding scientific impact information has no effect on whether someone donates, or how much, in the full sample. However, we find that amongst recent prior donors (those we posit more likely to open the mail and thus notice the treatment), large prior donors increase the likelihood of giving in response to information on aid effectiveness, whereas small prior donors decrease their giving. We motivate the analysis and experiment with a theoretical model that highlights two predictions. First, larger gift amounts, holding education and income constant, is a proxy for altruism giving (as it is associated with giving more to fewer charities) versus warm glow giving (giving less to more charities). Second, those motivated by altruism will respond positively to appeals based on evidence, whereas those motivated by warm glow may respond negatively to appeals based on evidence as it turns off the emotional trigger for giving, or highlights uncertainty in aid effectiveness.

In the experimental condition (for one of the two waves of mailings), the donors received a mailing with this information about the charity's effectiveness:

In order to know that our programs work for people like Rita, we look for more than anecdotal evidence. That is why we have coordinated with independent researchers [at Yale University] to conduct scientifically rigorous impact studies of our programs. In Peru they found that women who were offered our Credit with Education program had 16% higher profits in their businesses than those who were not, and they increased profits in bad months by 27%! This is particularly important because it means our program helped women generate more stable incomes throughout the year.

These independent researchers used a randomized evaluation, the methodology routinely used in medicine, to measure the impact of our programs on things like business growth, children's health, investment in education, and women's empowerment.

In the control condition, the mailing instead included this paragraph:

Many people would have met Rita and decided she was too poor to repay a loan. Five hungry children and a small plot of mango trees don’t count as collateral. But Freedom from Hunger knows that women like Rita are ready to end hunger in their own families and in their communities.

My Heartbleed learning experience and an alternative to poor-quality Heartbleed instructions

14 aisarka 15 April 2014 08:15AM

Due to the difficulty of finding high-quality Heartbleed instructions, perfectly good, intelligent rationalists have either failed to do all that was needed and ended up with a false sense of security, or done things that increased their risk without realizing it and then needed to take additional steps.  Part of the problem is that organizations who write for end users do not specialize in computer security and vice versa, so many of the Heartbleed instructions for end users had issues, ranging from conflicting and confusing information to outright ridiculous hype.  As an IT person and a rationalist, I knew better than to jump to the proposing-solutions phase before researching [1].  Recognizing the need for well-thought-out Heartbleed instructions, I spent 10-15 hours sorting through the chaos to create a more comprehensive set.  I'm not a security expert, but as an IT person who has read about computer security out of curiosity and a desire for professional improvement, and who is familiar with various research issues, cognitive biases, logical fallacies, etc., I am not clueless either.

This is a major event: some sources are calling it one of the worst security problems ever to happen on the Internet [2]; it has been proven to be more than a theoretical risk (four people hacked the keys to the castle out of Cloudflare's challenge in just one day) [3]; it has been badly exploited (900 Canadian social insurance numbers were leaked today) [4]; and there is some evidence that it may have been used for spying for a long time (the EFF found evidence of someone spying on IRC conversations) [5].  In light of all this, I think it's important to share my compilation of Heartbleed instructions, simply so that a better list is out there.  More importantly, this disaster is a very rare rationality learning opportunity: reflecting on our behavior, and comparing it with what we realize we should have done after becoming more informed, may help us see patches of irrationality that could harm us during future disasters.  For that reason, I did some rationality checks on my own behavior by asking myself a set of questions, which I have of course included below.

 

Heartbleed Research Challenges this Post Addresses:

  - There are apparent contradictions between sources about which sites were affected by Heartbleed, which sites have updated for Heartbleed, which sites need a password reset, and whether to change your passwords now or wait until the company has updated for Heartbleed.  For instance, Yahoo said Facebook was not vulnerable. [6] LastPass said Facebook was confirmed vulnerable and recommended a password update. [7]

  - Companies are putting out a lot of "fluffspeek"*, which makes it difficult to figure out which of your accounts have been affected, and which companies have updated their software.

  - Most sources *either* specialize in writing for end-users *or* are credible sources on computer security, not both.

  - Different articles have different sets of Heartbleed instructions.  None of the articles I saw contained every instruction.

  - A lot of what's out there is just ridiculous hype. [8]

 

Disclaimer

I am not a security specialist, nor am I certified in any security-related area.  I am an IT person who has randomly read a bunch of security literature over the last 15 years, but there *is* a definite quality difference between an IT person who has read security literature and a professional who is dedicated to security.  I can't give you any guarantees (though I'm not sure it's wise to accept guarantees from the specialists either).  Another problem here is time.  I wanted to act ASAP.  With hackers on the loose, I do not think it wise to invest the time it would take me to create a Gwern-style masterpiece.  This isn't exactly slapped together, but I am working within time constraints, so it's not perfect.  If you have something important to protect, or have the money to spend, consult a security specialist.

 

Compilation of Heartbleed Instructions


  Beware fraudulent password reset emails and shiny Heartbleed fixes.

  With all the real password reset emails going around, there are a lot of scam artists out there hoping to sneak in some dupes.  A lot of people get confused.  It doesn't mean you're stupid.  If you clicked a nasty link, or even if you're not sure, call the company's fraud department immediately.  That's why they're there. [9]  Always be careful about anything that seems too good to be true, as the scam artists have also begun to advertise Heartbleed "fixes" as bait.


  If the site hasn't done an update, it's risky to change your password.

  Why: This may increase your risk.  If Heartbleed isn't fixed, any new password you type in could be stolen, and a lot of criminals are probably doing whatever they can to exploit Heartbleed right now since they just found out about it.  "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]


  If you use digital password storing, consider whether it is secure.

  Some digital password storing software is way better than others.  I can't recommend one, but be careful which one you choose.  Also, check them for Heartbleed.


  If you already changed your password, and then a site updates or says "change your password", do it again.

  Why change it twice?: If you changed it before the update, you were sending that new password over a connection with a nasty security flaw.  Consider that password "potentially stolen" and make a new one.  "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]


  If a company says "no need to change your password", do you really want to believe them?

  There's a perverse incentive for companies to tell you "everything is fine" when in fact it is not fine, because nobody wants to be seen as having bad security on their website.  Also, if someone did steal your password through this bug, the theft isn't traceable to the bug.  Companies could conceivably claim "things are fine" without much accountability.  "Exploitation of this bug leaves no traces of anything abnormal happening to the logs." [11] I do not know whether, in practice, companies respond to similar perverse incentives, or if some unknown thing keeps them in check, but I have observed plenty of companies taking advantage of other perverse incentives.  Health care rescission, for instance, affected much more important things than data.


  When a site has done a Heartbleed update, *then* change your password.

  That's the time to do it. "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]


  Security Questions

  Nothing protected your mother's maiden name or the street you grew up on from Heartbleed any more than your passwords or other data.  A stolen security question can be a much bigger risk than a stolen password, especially if you used the same one on multiple accounts.  When you change your password, also consider whether you should change your security questions.  Think about changing them to something hard to guess and unique to that account, and remember that you don't have to fill out your security questions with accurate information.  If you filled the questions out in the last two years, there's a risk that they were stolen, too.


  How do I know if a site updated?

 

  Method One:

    Qualys SSL Labs, an information security provider, has created a free SSL Server Test.  Just plug in the domain name and Qualys will generate a report.  Yes, it checks the certificate, too.  (Very important.)

    Qualys Server Test
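
    If you'd rather script this check than use the web form, a sketch like the one below may help.  It assumes the public JSON endpoint (https://api.ssllabs.com/api/v2/analyze) that Qualys exposes for the same test; treat the URL, parameters, and field names as assumptions and verify them against Qualys's current API documentation before relying on this.

        # Sketch: poll the Qualys SSL Labs assessment API for a host's report.
        # Assumed endpoint and field names ("status", "endpoints", "grade");
        # check Qualys's API docs before relying on this.
        import time
        import requests

        API = "https://api.ssllabs.com/api/v2/analyze"

        def ssl_report(host):
            params = {"host": host, "startNew": "on"}  # first call starts a fresh scan
            while True:
                data = requests.get(API, params=params).json()
                if data.get("status") in ("READY", "ERROR"):
                    return data
                params = {"host": host}  # later calls just poll for the result
                time.sleep(15)

        report = ssl_report("example.com")
        for endpoint in report.get("endpoints", []):
            print(endpoint.get("ipAddress"), endpoint.get("grade"))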

 

  Method Two:

    CERT, a major security flaw advisory publisher, listed some (not all!) of the sites that have updated.  If you want a list, you should use CERT's list, not other lists. 

    CERT's List

    Why CERT's list?  Hearing "not vulnerable" on some news website's list does not mean that any independent organization verified that the site was fine, nor that an independent organization even has the ability to verify that the site has been safe for the entire last two years.  If anyone can do that job, it would be CERT, though I am not aware of any tests of their abilities in that regard.  Also, there is no fluffspeek*.


  Method Three:

    Search the site itself for the word "Heartbleed" and read the articles that come up.  If the site had to do a Heartbleed update, change your password.  Here's the quick way to search a whole site in Google (do not add "www"):

    site:websitename.com Heartbleed


  If an important site hasn't updated yet:

  If you have sensitive data stored there, don't log into that site until it's fixed.  If you want to protect it, call them up and try to change your password by phone or lock the account down.  "Stick to reputable websites and services, as those sites are most likely to have addressed the vulnerability right away." [10]


  Check your routers, mobile phones, and other devices.

  Yes, really. [13] [14]


  If you have even the tiniest website:

  Don't think "There's nothing to steal on my website".  Spammers always want to get into your website.  Hackers make software that exploits bugs and can share or sell that software.  If a hacker shares a tool that exploits Heartbleed and your site is vulnerable, spammers will get the tool and could make a huge mess out of everything.  That can get you blacklisted and disrupt email, it can get you removed from Google search engine results, it can disrupt your online advertising ... it can be a mess.

  Get a security expert involved to look for all the places where Heartbleed may have caused a security risk on your site, preferably one who knows about all the different services that your website might be using.  "Services" meaning things like a vendor that you pay so your website can send bulk text messages for two-factor authentication, or a free service that lets users do "social sign-on" to log into your site with an external service like Yahoo.  The possibilities for Heartbleed to cause problems on your website, through these kinds of services, are really pretty enormous.  Both paid services and free services could be affected.

  A sysadmin needs to check the server your site is on to figure out if it's got the Heartbleed bug and update it.

  Remember to check your various web providers like domain name registration services, web hosting company, etc.
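
  For the sysadmin check, the key fact is that Heartbleed affects OpenSSL releases 1.0.1 through 1.0.1f (and the 1.0.2 betas), with 1.0.1g being the fix; the older 0.9.8 and 1.0.0 branches were never vulnerable.  As a rough first pass, the sketch below reports which OpenSSL build your system's Python is linked against.  This is only a hint: your web server may link a different OpenSSL build, so it is no substitute for checking the server software itself.

      # Rough hint only: reports the OpenSSL version *Python* was built
      # against, which may differ from what your web server links.
      import re
      import ssl

      print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 1.0.1f 6 Jan 2014"

      # Heartbleed affects the 1.0.1 branch up to and including 1.0.1f.
      m = re.search(r"1\.0\.1([a-z])?", ssl.OPENSSL_VERSION)
      if m and (m.group(1) is None or m.group(1) <= "f"):
          print("This OpenSSL build is in the vulnerable range (1.0.1-1.0.1f).")
      else:
          print("Not in the known-vulnerable range; still verify your server.")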


Rationality Learning Opportunity (The Questions)

We won't get many opportunities to think about how we react in a disaster.  For obvious ethical reasons, we can't exactly create disasters in order to test ourselves.  I am taking the opportunity to reflect on my reactions and am sharing my method for doing this.  Here are some questions I asked myself, designed to encourage reflection.  I admit to making two mistakes at first: failing to apply rigorous skepticism to each news source right from the very first article I read, and underestimating the full extent of what it would take to address the issue.  What saved me was noticing my confusion.

  When you first heard about Heartbleed, did you fail to react?  (Normalcy bias)

  When you first learned about the risk, what probability did you assign to being affected by it?  What probability do you assign now?  (Optimism bias)

  Were you surprised to find out that someone in your life did not know about Heartbleed, and regret not telling them when it had occurred to you to tell them?  (Bystander effect)

  What did you think it was going to take to address Heartbleed?  Did you underestimate what it would take to address it competently?  (Dunning-Kruger effect)

  After reading news sources on Heartbleed instructions, were you surprised later that some of them were wrong?

  How much time did you think it would take to address the issue?  Did it take longer?  (Planning fallacy)

  Did you ignore Heartbleed?  (Ostrich effect)


*Fluffspeek:

Companies, of course, want to present a respectable face to customers, so most of them are not just coming out and saying "We were affected by Heartbleed.  We have updated.  It's time to change your password now."  Instead, some have been writing fluff like:

  "We see no evidence that data was stolen."

  According to the company that found this bug, Heartbleed doesn't leave a trail in the logs. [15] If someone did steal your password, would there be evidence anyway?  Maybe some really were able to rule that out somehow.  Positivity bias, a type of confirmation bias, is an important possibility here.  Maybe, like many humans, these companies simply failed to "Look into the dark" [16] and think of alternate explanations for the evidence they're seeing (or not seeing, which can sometimes be evidence [17], but not useful evidence in this case).

  "We didn't bother to tell you whether we updated for Heartbleed, but it's always a good idea to change your password however often."

  Unless you know each website has updated for Heartbleed, there's a chance that you're going to send your new passwords right through a bunch of websites' Heartbleed security holes as you're changing them.  Now that Heartbleed is big news, every hacker and script kiddie on planet earth probably knows about it, which means there are probably way more people trying to steal passwords through Heartbleed than before.  Which is the greater risk?  Entering a new password while the site is leaking passwords in a potentially hacker-infested environment, or leaving your potentially stolen password there until the site has updated?  Worse, if people *did not* change their password after the update because they already changed it *before* the update, they've got a false sense of security about the probability that their password was stolen.  Maybe some of these companies updated for Heartbleed before saying that.  Maybe the bug was completely non-applicable for them.  Regardless, I think end users deserve to know that updating their password before the Heartbleed update carries a risk.  Users need to be told whether an update has been applied.  As James Lynn wrote for Forbes, "Forcing customers to guess or test themselves is just negligent." [8]

"Fluffspeek" is a play on "leetspeek", a term used to describe bits of text full of numbers and symbols that is attributed to silly "hackers".  Some PR fluff may be a deliberate attempt to exploit others, similar in some ways to the manipulation techniques popular among black hat hackers, called social engineering.  Even when it's not deliberate, this kind of garbage is probably about as ugly to most people with half a brain as "I AM AN 31337 HACKER!!!1", so is still fitting.

 

References:

 1. http://lesswrong.com/lw/ka/hold_off_on_proposing_solutions/

 2. http://money.cnn.com/2014/04/09/technology/security/Heartbleed-bug/

 3. http://blog.cloudflare.com/the-results-of-the-cloudflare-challenge

 4. http://www.cra-arc.gc.ca/gncy/sttmnt2-eng.html

 5. https://www.eff.org/deeplinks/2014/04/wild-heart-were-intelligence-agencies-using-Heartbleed-november-2013

 6. http://finance.yahoo.com/blogs/breakout/Heartbleed-security-flaw--how-to-protect-yourself-172552932.html

 7. https://lastpass.com/Heartbleed/?h=facebook.com

 8. Forbes.com "Avoiding Heartbleed Hype, What To Do To Stay Safe" (I can't link to this for some reason but you can do a search.)

 9. http://www.net-security.org/secworld.php?id=16671

 10. http://www.cnbc.com/id/101569136

 11. http://Heartbleed.com/

 12. https://community.norton.com/t5/Norton-Protection-Blog/Heartbleed-Bug-What-You-Need-to-Know-and-Security-Tips/ba-p/1120128

 13. http://online.wsj.com/news/articles/SB10001424052702303873604579493963847851346

 14. Forbes.com "A Billion Smartphone Users May Be Affected by the Heartbleed Security Flaw" (I can't link to this for some reason but you can do a search.)

 15. http://Heartbleed.com/

 16. http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/

 17. http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/

Southern California FAI Workshop

13 Coscott 20 April 2014 08:55AM

This Saturday, April 26th, we will be holding a one day FAI workshop in southern California, modeled after MIRI's FAI workshops. We are a group of individuals who, aside from attending some past MIRI workshops, are in no way affiliated with the MIRI organization. More specifically, we are a subset of the existing Los Angeles Less Wrong meetup group that has decided to start working on FAI research together. 

The event will start at 10:00 AM, and the location will be:

USC Institute for Creative Technologies
12015 Waterfront Drive
Playa Vista, CA 90094-2536.

This first workshop will be open to anyone who would like to join us. If you are interested, please let us know in the comments or by private message. We plan to have more of these in the future, so if you are interested but unable to make this event, please also let us know. You are welcome to decide to join at the last minute. If you do, still comment here, so we can give you the necessary phone numbers.

Our hope is to produce results that will be helpful for MIRI, and so we are starting off by going through the MIRI workshop publications. If you will be joining us, it would be nice if you read the papers linked to here, here, here, here, and here before Saturday. Reading all of these papers is not necessary, but it would be nice if you take a look at one or two of them to get an idea of what we will be doing.

Experience in artificial intelligence will not be at all necessary, but experience in mathematics probably is. If you can follow the MIRI publications, you should be fine. Even if you are under-qualified, there is very little risk of holding anyone back or otherwise having a negative impact on the workshop. If you think you would enjoy the experience, go ahead and join us.

This event will be in the spirit of collaboration with MIRI, and will attempt to respect their guidelines on doing research that will decrease, rather than increase, existential risk. As such, practical implementation questions related to making an approximate Bayesian reasoner fast enough to operate in the real world will not be on-topic. Rather, the focus will be on the abstract mathematical design of a system capable of having reflexively consistent goals, performing naturalistic induction, et cetera.

Food and refreshments will be provided for this event, courtesy of MIRI.

Evaluating GiveWell as a startup idea based on Paul Graham's philosophy

13 VipulNaik 12 April 2014 02:04PM

Effective altruism is a growing movement, and a number of organizations (mostly foundations and nonprofits) have been started in the domain. One of the very first of these organizations, and arguably the most successful and influential, has been charity evaluator GiveWell. In this blog post, I examine the early history of GiveWell and see what factors in this early history helped foster its success.

My main information source is GiveWell's original business plan (PDF, 86 pages). I'll simply refer to this as the "GiveWell business plan" later in the post and will not link to the source each time. If you're interested in what the GiveWell website looked like at the time, you can browse the website as of early May 2007 here.

To provide more context to GiveWell's business plan, I will look at it in light of Paul Graham's pathbreaking article How to Get Startup Ideas. The advice here is targeted at early stage startups. GiveWell doesn't quite fit the "for-profit startup" mold, but GiveWell in its early stages was a nonprofit startup of sorts. Thus, it would be illustrative to see just how closely GiveWell's choices were in line with Paul Graham's advice.

There's one obvious way that this analysis is flawed and inconclusive: I do not systematically compare GiveWell with other organizations. There is no "control group" and no possibility of isolating individual aspects that predicted success. I intend to write additional posts later on the origins of other effective altruist organizations, after which a more fruitful comparison can be attempted. I think it's still useful to start with one organization and understand it thoroughly. But keep this limitation in mind before drawing any firm conclusions, or believing that I have drawn firm conclusions.

The idea: working on a real problem that one faces at a personal level, is acutely familiar with, is of deep interest to a (small) set of people right now, and could eventually be of interest to many people

Graham writes (emphasis mine):

The very best startup ideas tend to have three things in common: they're something the founders themselves want, that they themselves can build, and that few others realize are worth doing. Microsoft, Apple, Yahoo, Google, and Facebook all began this way.

Why is it so important to work on a problem you have? Among other things, it ensures the problem really exists. It sounds obvious to say you should only work on problems that exist. And yet by far the most common mistake startups make is to solve problems no one has.

[...]

When a startup launches, there have to be at least some users who really need what they're making—not just people who could see themselves using it one day, but who want it urgently. Usually this initial group of users is small, for the simple reason that if there were something that large numbers of people urgently needed and that could be built with the amount of effort a startup usually puts into a version one, it would probably already exist. Which means you have to compromise on one dimension: you can either build something a large number of people want a small amount, or something a small number of people want a large amount. Choose the latter. Not all ideas of that type are good startup ideas, but nearly all good startup ideas are of that type.

Imagine a graph whose x axis represents all the people who might want what you're making and whose y axis represents how much they want it. If you invert the scale on the y axis, you can envision companies as holes. Google is an immense crater: hundreds of millions of people use it, and they need it a lot. A startup just starting out can't expect to excavate that much volume. So you have two choices about the shape of hole you start with. You can either dig a hole that's broad but shallow, or one that's narrow and deep, like a well.

Made-up startup ideas are usually of the first type. Lots of people are mildly interested in a social network for pet owners.

Nearly all good startup ideas are of the second type. Microsoft was a well when they made Altair Basic. There were only a couple thousand Altair owners, but without this software they were programming in machine language. Thirty years later Facebook had the same shape. Their first site was exclusively for Harvard students, of which there are only a few thousand, but those few thousand users wanted it a lot.

When you have an idea for a startup, ask yourself: who wants this right now? Who wants this so much that they'll use it even when it's a crappy version one made by a two-person startup they've never heard of? If you can't answer that, the idea is probably bad. [3]

You don't need the narrowness of the well per se. It's depth you need; you get narrowness as a byproduct of optimizing for depth (and speed). But you almost always do get it. In practice the link between depth and narrowness is so strong that it's a good sign when you know that an idea will appeal strongly to a specific group or type of user.

But while demand shaped like a well is almost a necessary condition for a good startup idea, it's not a sufficient one. If Mark Zuckerberg had built something that could only ever have appealed to Harvard students, it would not have been a good startup idea. Facebook was a good idea because it started with a small market there was a fast path out of. Colleges are similar enough that if you build a facebook that works at Harvard, it will work at any college. So you spread rapidly through all the colleges. Once you have all the college students, you get everyone else simply by letting them in.

GiveWell in its early history seems like a perfect example of this:

  • Real problem experienced personally: The problem of figuring out how and where to donate money was a personal problem that the founders experienced firsthand as customers, so they knew there was a demand for something like GiveWell.
  • Of deep interest to some people: The people who started GiveWell had a few friends who were in a similar situation: they wanted to know where best to donate money, but did not have enough resources to do a full-fledged investigation. The number of such people may have been small, but since these people were intending to donate money in the thousands of dollars, there were enough of them who had deep interest in GiveWell's offerings.
  • Could eventually be of interest to many people: Norms around evidence and effectiveness could change gradually as more people started identifying as effective altruists. So, there was a plausible story for how GiveWell might eventually influence a large number of donors across the range from small donors to billionaires.

Quoting from the GiveWell business plan (pp. 3-7, footnotes removed; bold face in original):

GiveWell started with a simple question: where should I donate?

We wanted to give. We could afford to give. And we had no prior commitments to any particular charity; we were just looking for the channel through which our donations could help people (reduce suffering; increase opportunity) as much as possible.

The first step was to survey our options. We found that we had more than we could reasonably explore comprehensively. There are 2,625 public charities in the U.S. with annual budgets over $100 million, 88,812 with annual budgets over $1 million. Restricting ourselves to the areas of health, education (excluding universities), and human services, there are 480 with annual budgets over $100 million, 50,505 with annual budgets over $1 million.

We couldn’t explore them all, but we wanted to find as many as possible that fit our broad goal of helping people, and ask two simple questions: what they do with donors’ money, and what evidence exists that their activities help people?

Existing online donor resources, such as Charity Navigator, give only basic financial data and short, broad mission statements (provided by the charities and unedited). To the extent they provide metrics, they are generally based on extremely simplified, problematic assumptions, most notably the assumption that the less a charity spends on administrative expenses, the better. These resources could not begin to help us with our questions, and they weren’t even very useful in narrowing the field (for example, even if we assumed Charity Navigator’s metrics to be viable, there are 1,277 total charities with the highest possible rating, 562 in the areas of health, education and human services).

We scoured the Internet, but couldn’t find the answers to our questions either through charities’ own websites or through the foundations that fund them. It became clear to us that answering these questions was going to be a lot of work. We formed GiveWell as a formal commitment to doing this work, and to putting everything we found on a public website so other donors wouldn’t have to repeat what we did. Each of the eight of us chose a problem of interest (malaria, microfinance, diarrheal disease, etc.) – this was necessary in order to narrow our scope – and started to evaluate charities that addressed the problem.

[...]

We immediately found that there are enormous opportunities to help people, but no consensus whatsoever on how to do it best. [...]

Realizing that we were trying to make complex decisions, we called charities and questioned them thoroughly. We wanted to see what our money was literally being spent on, and for charities with multiple programs and regions of focus we wanted to know how much of their budget was devoted to each. We wanted to see statistics – or failing that, stories – about people who’d benefited from these programs, so we could begin to figure out what charities were pursuing the best strategies. But when we pushed for these things, charities could not provide them.

They responded with surprise (telling us they rarely get questions as detailed as ours, even from multi-million dollar donors) and even suspicion (one executive from a large organization accused Holden of running a scam, though he wouldn’t explain what sort of scam can be run using information about a charity’s budget and activities). See Appendix A for details of these exchanges. What we saw led us to conclude that charities were neither accustomed to nor capable of answering our basic questions: what do you do, and what is the evidence that it works?

This is why we are starting the Clear Fund, the world’s first completely transparent charitable grantmaker. It’s not because we were looking for a venture to start; everyone involved with this project likes his/her current job. Rather, the Clear Fund comes simply from a need for a resource that doesn’t exist: an information source to help donors direct their money to where it will accomplish the most good.

We feel that the questions necessary to decide between charities aren’t being answered or, largely, asked. Foundations often focus on new projects and innovations, as opposed to scaling up proven ways of helping people; and even when they do evaluate the latter, they do not make what they find available to foster dialogue or help other donors (see Appendix D for more on this). Meanwhile, charities compete for individual contributions in many ways, from marketing campaigns to personal connections, but not through comparison of their answers to our two basic questions. Public scrutiny, transparency, and competition of charities’ actual abilities to improve the world is thus practically nonexistent. That makes us worry about the quality of their operations – as we would for any set of businesses that doesn’t compete on quality – and without good operations, a charity is just throwing money at a problem.

[...]

With money and persistence, we believe we can get the answers to our questions – or at least establish the extent to which different charities are capable of answering them. If we succeed, the tremendous amount of money available for solving the world’s problems will become better spent, and the world will reap enormous benefits. We believe our project will accomplish the following:
1. Help individual donors find the best charities to give to. [...]

2. Foster competition to find the best ways of improving the world. [...]

3. Foster global dialogue between everyone interested – both amateur and professional – in the best tactics for improving the world.
[...]

4. Increase engagement and participation in charitable causes. [...]

All of the benefits above fall under the same general principle. The Clear Fund will put a new focus on the strategies – as opposed to the funds – being used to attack the world’s problems.

How do you know if the idea is scalable? You just gotta be the right person

We already quoted above GiveWell's reasons for believing that their idea could eventually influence a large volume of donations. But how could we know at the time whether their beliefs were reasonable? Graham writes (emphasis mine):

How do you tell whether there's a path out of an idea? How do you tell whether something is the germ of a giant company, or just a niche product? Often you can't. The founders of Airbnb didn't realize at first how big a market they were tapping. Initially they had a much narrower idea. They were going to let hosts rent out space on their floors during conventions. They didn't foresee the expansion of this idea; it forced itself upon them gradually. All they knew at first is that they were onto something. That's probably as much as Bill Gates or Mark Zuckerberg knew at first.

Occasionally it's obvious from the beginning when there's a path out of the initial niche. And sometimes I can see a path that's not immediately obvious; that's one of our specialties at YC. But there are limits to how well this can be done, no matter how much experience you have. The most important thing to understand about paths out of the initial idea is the meta-fact that these are hard to see.

So if you can't predict whether there's a path out of an idea, how do you choose between ideas? The truth is disappointing but interesting: if you're the right sort of person, you have the right sort of hunches. If you're at the leading edge of a field that's changing fast, when you have a hunch that something is worth doing, you're more likely to be right.

How well does GiveWell fare in terms of the potential of the people involved? Were the people who founded GiveWell (specifically Holden Karnofsky and Elie Hassenfeld) the "right sort of person" to found GiveWell? It's hard to give an honest answer that's not clouded by information available in hindsight. But let's try. On the one hand, neither of the co-founders had direct experience working with nonprofits. However, they had both worked in finance and the analytical skills they employed in the financial industry may have been helpful when they switched to analyzing evidence and organizations in the nonprofit sector (see the "Our qualifications" section of the GiveWell business plan). Arguably, this was more relevant to what they wanted to do with GiveWell than direct experience with the nonprofit world. Overall, it's hard to say (without the benefits of hindsight or inside information about the founders) that the founders were uniquely positioned, but the outside view indicators seem generally favorable.

Post facto, there seems to be some evidence that GiveWell's founders exhibited good aesthetic discernment. But this is based on GiveWell's success, so invoking that as a reason is a circular argument.

Schlep blindness?

In a different essay titled Schlep Blindness, Graham writes:

There are great startup ideas lying around unexploited right under our noses. One reason we don't see them is a phenomenon I call schlep blindness. Schlep was originally a Yiddish word but has passed into general use in the US. It means a tedious, unpleasant task.

[...]

One of the many things we do at Y Combinator is teach hackers about the inevitability of schleps. No, you can't start a startup by just writing code. I remember going through this realization myself. There was a point in 1995 when I was still trying to convince myself I could start a company by just writing code. But I soon learned from experience that schleps are not merely inevitable, but pretty much what business consists of. A company is defined by the schleps it will undertake. And schleps should be dealt with the same way you'd deal with a cold swimming pool: just jump in. Which is not to say you should seek out unpleasant work per se, but that you should never shrink from it if it's on the path to something great.

[...]

How do you overcome schlep blindness? Frankly, the most valuable antidote to schlep blindness is probably ignorance. Most successful founders would probably say that if they'd known when they were starting their company about the obstacles they'd have to overcome, they might never have started it. Maybe that's one reason the most successful startups of all so often have young founders.

In practice the founders grow with the problems. But no one seems able to foresee that, not even older, more experienced founders. So the reason younger founders have an advantage is that they make two mistakes that cancel each other out. They don't know how much they can grow, but they also don't know how much they'll need to. Older founders only make the first mistake.

It could be argued that schlep blindness was the reason nobody else had started GiveWell before GiveWell. Most people weren't even thinking of doing something like this, because the idea seemed like so much work that nobody went near it. Why then did GiveWell's founders select the idea? There's no evidence to suggest that Graham's "ignorance" remedy was the reason. Rather, the GiveWell business plan explicitly embraces complexity; in fact, one of its early section titles is Big Problems with Complex Solutions. It seems the GiveWell founders found the challenge more exciting than deterring. Lack of intimate knowledge of the nonprofit sector might have been a factor, but it probably wasn't a driving one.

Competition

Graham writes:

Because a good idea should seem obvious, when you have one you'll tend to feel that you're late. Don't let that deter you. Worrying that you're late is one of the signs of a good idea. Ten minutes of searching the web will usually settle the question. Even if you find someone else working on the same thing, you're probably not too late. It's exceptionally rare for startups to be killed by competitors—so rare that you can almost discount the possibility. So unless you discover a competitor with the sort of lock-in that would prevent users from choosing you, don't discard the idea.

If you're uncertain, ask users. The question of whether you're too late is subsumed by the question of whether anyone urgently needs what you plan to make. If you have something that no competitor does and that some subset of users urgently need, you have a beachhead.

[...]

You don't need to worry about entering a "crowded market" so long as you have a thesis about what everyone else in it is overlooking. In fact that's a very promising starting point. Google was that type of idea. Your thesis has to be more precise than "we're going to make an x that doesn't suck" though. You have to be able to phrase it in terms of something the incumbents are overlooking. Best of all is when you can say that they didn't have the courage of their convictions, and that your plan is what they'd have done if they'd followed through on their own insights. Google was that type of idea too. The search engines that preceded them shied away from the most radical implications of what they were doing—particularly that the better a job they did, the faster users would leave.

A crowded market is actually a good sign, because it means both that there's demand and that none of the existing solutions are good enough. A startup can't hope to enter a market that's obviously big and yet in which they have no competitors. So any startup that succeeds is either going to be entering a market with existing competitors, but armed with some secret weapon that will get them all the users (like Google), or entering a market that looks small but which will turn out to be big (like Microsoft).

Did GiveWell enter a crowded market? As Graham suggests above, it depends heavily on how you define the market. Charity Navigator existed at the time, and GiveWell and Charity Navigator compete to serve certain donor needs. But they are also sufficiently different. Here's what GiveWell said about Charity Navigator in the GiveWell business plan:

Existing online donor resources, such as Charity Navigator, give only basic financial data and short, broad mission statements (provided by the charities and unedited). To the extent they provide metrics, they are generally based on extremely simplified, problematic assumptions, most notably the assumption that the less a charity spends on administrative expenses, the better. These resources could not begin to help us with our questions, and they weren’t even very useful in narrowing the field (for example, even if we assumed Charity Navigator’s metrics to be viable, there are 1,277 total charities with the highest possible rating, 562 in the areas of health, education and human services)

In other words, GiveWell did enter a market with existing players, indicating that there was a need for things in the broad domain that GiveWell was offering. At the same time, what GiveWell offered was sufficiently different that it was not bogged down by the competition.

Incidentally, in recent times, people from Charity Navigator have been critical of GiveWell and other "effective altruism" proponents. Their critique has itself come in for some criticism, and some people have argued that it may be a response to GiveWell's growth leading to it moving the same order of magnitude of money as Charity Navigator (see the discussion here for more). Indeed, in 2013, GiveWell surpassed Charity Navigator in money moved through the website, though we don't have clear evidence of whether GiveWell is cutting into Charity Navigator's growth.

Other precursors (of sorts) to GiveWell, mentioned by William MacAskill in a Facebook comment, are the Poverty Action Lab and the Copenhagen Consensus.

How prescient was GiveWell?

With the benefit of hindsight, how impressive do we find GiveWell's early plans in predicting its later trajectory? Note that prescience in predicting the later trajectory could also be interpreted as rigidity of plan and unwillingness to change. But since GiveWell appears to have been quite a success, there is a prior in favor of prescience being good (what I mean is that if GiveWell had failed, the fact that they predicted all the things they'd do would be the opposite of impressive, but given their success, the fact that they predicted things in advance also indicates that they chose good strategy from the outset).

Note that I'm certainly not claiming that a startup's failure to predict the future should be a big strike against it. As long as the organization can adapt to and learn from new information, it's fine. But of course, getting more things right from the start is better to the extent it's feasible.

By and large, both the vision and the specific goals outlined in the plan were quite prescient. I noted the following differences between the plan then and the reality as it transpired:

  • In the plan, GiveWell said it would try to identify top charities in a few select areas (they listed seven areas) and refrain from comparing very different domains. Over the years, they have moved more in the direction of directly comparing different domains and offering a few top charities culled across all domains. Even though they seem to have been off in their plan, they were directionally correct compared to what existed. They were already consolidating different causes within the same broad category. For instance, they write (GiveWell business plan, p. 21):

     

    A charity that focuses on fighting malaria and a charity that focuses on fighting tuberculosis are largely aiming for the same end goal – preventing death – and if one were clearly better at preventing death than the other, it would be reasonable to declare it a better use of funds. By contrast, a charity that focuses on creating economic opportunity has a fundamentally different end goal. It may be theoretically possible to put jobs created and lives saved in the same terms (and there have been some attempts to create metrics that do so), but ultimately different donors are going to have very different perspectives on whether it’s more worthwhile to create a certain number of jobs or prevent a certain number of deaths.

  • GiveWell doesn't predict clearly enough that it will evolve into a more "foundation"-like entity. Note that at the time of the business plan, they were envisioning themselves as deriving their negotiating power with nonprofits through their role as grantmakers. They then transformed into deriving their power largely from their role as recommenders of top charities. Then, around 2012, following the collaboration with Good Ventures, they switched back to grantmaker mode, but in a far grander way than they'd originally envisaged.
  • At the time of the GiveWell business plan, they saw their main source of money moved as being small donors. In recent years, as they moved to more "foundation"-like behavior, they seem to have started shifting attention to influencing the giving decisions of larger donors. This might be purely due to the unpredictable fact that they joined hands with the Good Ventures foundation, rather than due to any systemic or predictable reasons. It remains to be seen whether they influence more donations by very large donors in the future. Another aspect of this is that GiveWell's original business plan was more ambitious about influencing the large number of small donors out there than (I think) GiveWell is now.
  • GiveWell seems to have moved away from a focus on examining individual charities to understanding the landscape sufficiently well to directly identify the best opportunities, and then to comparing broad causes. The GiveWell business plan, on the other hand, repeatedly talked about "pitting charities against each other" (p. 11) as their main focal activity. In recent years, however, GiveWell has started stepping back and concentrating more on using their big picture understanding of the realm to more efficiently identify the very best opportunities rather than evaluating all relevant charities and causes. This is reflected in their conversation notes as well as the GiveWell Labs initiative. After creating GiveWell Labs, they have shifted more in the direction of thinking at the level of causes rather than individual interventions.

The role of other factors in GiveWell's success

Was GiveWell destined to succeed, or did it get lucky? I believe a mix of both: GiveWell was bound to succeed in some measure, but a number of chance factors played a role in its achieving success at its current level. A recent blog post by GiveWell titled Our work on outreach contains some relevant evidence. The single person who may have been key to GiveWell's success is the ethicist and philosopher Peter Singer. Singer is a passionate advocate of the idea that people are morally obligated to donate money to help the world's poorest people. Singer played a major role in GiveWell's success in the following ways:

  • Singer both encouraged people to give and directed people interested in giving to GiveWell's website when they asked him where they should give.
  • Singer was an inspiration for many effective giving organizations. He is credited as an inspiration by Oxford ethicist Toby Ord and his wife, physician Bernadette Young, who together started Giving What We Can, a society promoting effective giving. Giving What We Can used GiveWell's research for its own recommendations and pointed people to the website. In addition, Singer's book The Life You Can Save inspired the creation of the eponymous organization. Giving What We Can was a starting point for related organizations in the nascent effective altruism movement, including 80000 Hours, the umbrella group The Centre for Effective Altruism, and many other resources.
  • Cari Tuna and her husband (and Facebook co-founder) Dustin Moskovitz read about GiveWell in The Life You Can Save around the same time they met Holden through a mutual friend. Good Ventures, the foundation set up by Tuna and Moskovitz, has donated several million dollars to GiveWell's recommended charities (over 9 million USD in 2013), and the two organizations have collaborated somewhat. More in this blog post by Cari Tuna.

The connection of GiveWell to the LessWrong community might also have been important, though less so than Peter Singer. This may have been due to the efforts of a few people interested in GiveWell who discussed it on LessWrong. Jonah Sinick's LessWrong posts about GiveWell (mentioned in GiveWell's post about their work on outreach) are an example (full disclosure: Jonah Sinick is collaborating with me on Cognito Mentoring). Note that although only about 3% of donations made through GiveWell are explicitly attributable to LessWrong, GiveWell has received a lot of intellectual engagement from the LessWrong community and from other organizations and individuals connected with it.

How should the above considerations modify our view of GiveWell's success? I think the key thing GiveWell did correctly was to become a canonical go-to reference for directing donors to good giving decisions. By staking out that space early on, they were able to capitalize on Peter Singer's advocacy. Also, it's not just GiveWell that benefited from Peter Singer — we can also argue that Singer's arguments were made more effective by the existence of GiveWell. The first line of counterargument to Singer's claim is that most charities aren't cost-effective. Singer's being able to point to a resource that helps identify good charities makes people take his argument more seriously.

I think that GiveWell's success at making itself the canonical source was more important than the specifics of their research. But the specifics may have been important in convincing a sufficiently large critical mass of influential people to recommend GiveWell as a canonical source, so the factors are hard to disentangle.

Would something like GiveWell have existed if GiveWell hadn't existed? How would the effective altruism movement be different?

These questions are difficult to explore, and discussing them would take us too far afield. This post on the Effective Altruists Facebook thread offers an interesting discussion. The upshot is that, although Giving What We Can was started two years after GiveWell, people involved with its early history say that the core ideas of looking at cost-effectiveness and recommending the very best places to donate money were mooted before its formal inception, some time around 2006 (before GiveWell had been formally created). At the time, the people involved were unaware of GiveWell. William MacAskill says that GWWC might have done more work on the cost-effectiveness side if GiveWell hadn't already been doing it.

I ran this post by Jonah Sinick and also emailed a draft to the GiveWell staff. I implemented some of their suggestions, and am grateful to them for taking the time to comment on my draft. Any responsibility for errors, omissions, and misrepresentations is solely mine.

Supply, demand, and technological progress: how might the future unfold? Should we believe in runaway exponential growth?

13 VipulNaik 11 April 2014 07:07PM

Warning: This is a somewhat long-winded post with a number of loosely related thoughts and no single, cogent thesis. I have included a TL;DR after the introduction, listing the main points. All corrections and suggestions are greatly appreciated.

It's commonly known, particularly to LessWrong readers, that in the world of computer-related technology, key metrics have been doubling fairly quickly, with doubling times ranging from 1 to 3 years for most metrics. The most famous example is Moore's law, which predicts that the number of transistors on integrated circuits doubles approximately every two years. The law itself stood up quite well until about 2005, but broke down after that (see here for a detailed overview of the breakdown by Sebastian Nickel). Another similar proposed law is Kryder's law, which looks at the doubling of hard disk storage capacity. Chapters 2 and 3 of Ray Kurzweil's book The Singularity is Near go into detail regarding the technological acceleration (for an assessment of Kurzweil's prediction track record, see here).
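
As a concrete illustration (a sketch of my own, not taken from any of the sources above), here is what a fixed doubling time implies when compounded; the transistor count of Intel's 4004 (1971) is a standard reference point.

```python
# A minimal sketch (my own, not from the sources above) of growth under a
# fixed doubling time, as Moore's law posits for transistor counts.
def projected_value(initial, years, doubling_time=2.0):
    """Value after `years` of uninterrupted doubling every `doubling_time` years."""
    return initial * 2 ** (years / doubling_time)

# Intel's 4004 (1971) had roughly 2,300 transistors. Uninterrupted two-year
# doubling predicts about 2,300 * 2**(34/2) ~ 300 million transistors by 2005,
# in the right ballpark for processors of that era.
print(f"{projected_value(2300, 34):,.0f}")
```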

One of the key questions facing futurists, including those who want to investigate the Singularity, is the question of whether such exponential-ish growth will continue for long enough for the Singularity to be achieved. Some other reasonable possibilities:

  • Growth will continue for a fairly long time, but slow down to a linear pace and therefore we don't have to worry about the Singularity for a very long time.
  • Growth will continue but converge to an asymptotic value (well below the singularity threshold) beyond which improvements aren't possible. Therefore, growth will progressively slow down but still continue as we come closer and closer to the asymptotic value.
  • Growth will come to a halt, because there is insufficient demand at the margin for improvement in the technology.

Ray Kurzweil strongly adheres to the exponential-ish growth model, at least for the duration necessary to reach computers that are thousands of times as powerful as humanity (that's what he calls the Singularity). He argues that although individual paradigms (such as Moore's law) eventually run out of steam, new paradigms tend to replace them. In the context of computational speed, efficiency, and compactness, he mentions nanotechnology, 3D computing, DNA computing, quantum computing, and a few other possibilities as candidates for what might take over once Moore's law is exhausted for good.

Intuitively, I've found the assumption of continued exponential growth implausible. I hasten to add that I'm mathematically literate, so it's certainly not the case that I fail to appreciate the nature of exponential growth — in fact, I believe my skepticism is rooted in the fact that I do understand exponential growth. I do think the issue is worth investigating, both from the angle of whether the continued improvements are technologically feasible, and from the angle of whether there will be sufficient incentives for people to invest in achieving the breakthroughs. In this post, I'll go over the economics side of it, though I'll include some technology-side considerations to provide context.

TL;DR

I'll make the following general points:

  1. The industries that rely on knowledge goods tend to have long-run downward-sloping supply curves.
  2. Industries based on knowledge goods exhibit experience curve effects: what matters is cumulative demand rather than demand in a given time interval. The irreversibility of creating knowledge goods creates a dynamic different from that in other industries.
  3. What matters for technological progress is what people investing in research think future demand will be like. Bubbles might actually be beneficial if they help lay the groundwork of investment that is helpful for many years to come, even though the investment wasn't rational for individual investors.
  4. Each stage of investment requires a large enough number of people with just the right level of willingness to pay (see the PS for more). A diverse market, with people at various intermediate stages of willingness to pay, is crucial for supporting a technology through its stages of progress.
  5. The technological challenges involved in improving price-performance tradeoffs may differ for the high, low, and middle parts of the market for a given product. The more similar these challenges are, the faster progress is likely to be (because the same research helps all the market segments together).
  6. The demand-side story most consistent with exponential technological progress is one where people's desire for improvement in the technologies they are using is proportional to the current level of those technologies. But this story seems inconsistent with the facts: people's appetite for improvement probably declines once technologies get good enough. This creates problems for the economic incentive side of the exponential growth story.
  7. Some exponential growth stories require a number of technologies to progress in tandem. Progress in one technology helps facilitate demand for another complementary technology in this story. Such progress scenarios are highly conjunctive, and it is likely that actual progress will fall far short of projected exponential growth.

#1: Short versus long run for supply and demand

In the short run, supply curves are upward-sloping and demand curves are downward-sloping. In particular, this means that when the demand curve expands (more people wanting to buy the item at the same price), that causes an increase in price and an increase in quantity traded (rising demand creates shortages at the current price, motivating suppliers to increase supplies and also to charge more money, given the competition between buyers). Similarly, if the supply curve expands (more of the stuff getting produced at the same price), that causes a decrease in price and an increase in quantity traded. These are robust empirical observations that form the bread and butter of microeconomics, and they're likely true in most industries.

In the long run, however, things become different because people can reallocate their fixed costs. The more important the allocation of fixed costs is in determining the short-run supply curve, the greater the difference between short-run supply curves based on different choices of fixed cost allocation. In particular, if there are increasing returns to scale on fixed costs (for instance, a factory that produces a million widgets costs less than 1000 times as much as a factory that produces a thousand widgets) and fixed costs contribute a large fraction of production costs, then the long-run supply curve might end up being downward-sloping. An industry where the long-run supply curve is downward-sloping is called a decreasing cost industry (see here and here for more). (My original version of this para was incorrect; see CoItInn's comment and my response below it for more.)
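
To make the downward slope concrete, here is a toy numeric sketch (my own assumptions, purely illustrative): if fixed costs grow with, say, the square root of planned capacity, long-run average cost per unit falls as planned output rises.

```python
# A toy sketch of a decreasing cost industry (my own assumptions): fixed
# costs that scale sublinearly with planned capacity pull long-run average
# cost per unit down as output grows.
def average_cost(units, fixed_cost_base=1_000_000, variable_cost=2.0):
    # Assumption: fixed costs grow with the square root of capacity,
    # relative to a 1,000-unit baseline plant.
    fixed = fixed_cost_base * (units / 1_000) ** 0.5
    return (fixed + variable_cost * units) / units

for q in [1_000, 100_000, 1_000_000]:
    print(f"{q:>9,} units: average cost ${average_cost(q):,.2f}")
# Average cost falls from ~$1,002 to ~$102 to ~$33.62 as scale increases.
```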

#2: Introducing technology, the arrow of time, and experience curves

The typical explanation for why some industries are decreasing cost industries is the fixed costs of investment in infrastructure that scale sublinearly with the amount produced. For instance, running ten flights from New York to Chicago costs less than ten times as much as running one flight. This could be because the ten flights can share some common resources such as airport facilities or even airplanes, and also because they can offer backups for one another in case of flight cancellations and overbooking. The fixed costs of setting up a factory that can produce a million hard drives a year are less than 1,000 times the fixed costs of setting up a factory that can produce a thousand hard drives a year. A mass transit system for a city of a million people costs less than 100 times as much as a mass transit system for a city of the same area with 10,000 people. These explanations for decreasing cost have only a moderate level of time-directionality. When I talk of time-directionality, I am thinking of questions like: "What happens if demand is high in one year, and then falls? Will prices go back up?" It is true that some forms of investment in infrastructure are durable, and therefore, once the infrastructure has already been built in anticipation of high demand, costs will continue to stay low even if demand falls back. However, much of the long-term infrastructure can be repurposed, causing prices to go back up. If demand for New York-Chicago flights reverts to low levels, the planes can be diverted to other routes. If demand for hard drives falls, the factory producing them can (at some refurbishing cost) produce flash memory or chips or something totally different. As for intra-city mass transit systems, some are easier to repurpose than others: buses can be sold, and physical train cars can be sold, but the rail lines are harder to repurpose. In all cases, there is some time-directionality, but not a lot.

Technology, particularly the knowledge component thereof, is probably an exception of sorts. Knowledge, once created, is very cheap to store and very hard to destroy. Consider a decreasing cost industry where a large part of the efficiency of scale arises because larger demand volumes justify bigger investments in research and development that lower production costs permanently (regardless of actual future demand volumes). Once the "genie is out of the bottle" with respect to the new technologies, the lower costs will remain — even in the face of flagging demand. However, flagging demand might stall further technological progress.

This sort of time-directionality is closely related to (though not the same as) the idea of experience curve effects: instead of looking at the quantity demanded or supplied in a given time interval, it's more important to consider the cumulative quantity produced and sold, and the economies of scale arise with respect to this cumulative quantity. Thus, people who have been in the business for ten years enjoy a better price-performance tradeoff than people who have been in the business for only three years, even if they've been producing the same amount per year.
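
One standard way to formalize this (my own addition; the post itself doesn't commit to a functional form) is a Wright's-law-style learning curve, where unit cost falls by a fixed fraction with each doubling of cumulative output:

```python
import math

# A Wright's-law-style experience curve (my own formalization; the post
# doesn't commit to this functional form): unit cost falls by a fixed
# fraction with each doubling of *cumulative* output.
def unit_cost(first_unit_cost, cumulative_units, learning_rate=0.8):
    """Cost of the next unit after `cumulative_units` have been produced.

    learning_rate=0.8 means each doubling of cumulative output leaves
    costs at 80% of their previous level (a common textbook figure).
    """
    b = -math.log2(learning_rate)  # progress exponent
    return first_unit_cost * cumulative_units ** (-b)

# Two producers making 1,000 units/year: the one with ten years of cumulative
# output enjoys lower unit costs than the one with three, per the paragraph above.
print(round(unit_cost(100, 10_000), 2))  # ~5.16
print(round(unit_cost(100, 3_000), 2))   # ~7.60
```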

The concept of price skimming is also potentially relevant.

#3: The genie out of the bottle, and gaining from bubbles

The "genie out of the bottle" character of technological progress leads to some interesting possibilities. If suppliers think that future demand will be high, then they'll invest in research and development that lowers the long-run cost of production, and those lower costs will stick permanently, even if future demand turns out to be not too high. This depends on the technology not getting lost if the suppliers go out of business — but the technology surviving is probable, given that suppliers are unlikely to want to destroy cost-lowering technologies. Even if they go out of business, they'll probably sell the technology to somebody who is still in business (after all, selling their technology for a profit might be their main way of recouping some of the costs of their investment). Assuming you like the resulting price reductions, this could be interpreted as an argument in favor of bubbles, at least if you ignore the long-term damage that these might impose on people's confidence to invest. In particular, the tech bubble of 1998-2001 spurred significant investments in Internet infrastructure (based on false premises) as well as in the semiconductor industry, permanently lowering prices in these areas and facilitating the next generation of technological development. However, the argument also ignores the fact that the resources spent on the technological development could instead have gone to other, even more valuable technological developments. That's a big omission, and it probably destroys the case entirely, except for rare situations where some technologies have huge long-term spillovers despite insufficient short-term demand for a rational for-profit investor to justify investment in the technology.

#4: The importance of market diversity and the importance of intermediate milestones being valuable

The crucial ingredient needed for technological progress is that demand from a segment with just the right level of purchasing power be sufficiently high. A small population that's willing to pay exorbitant amounts won't spur investments in cost-cutting: for instance, if production costs are $10 per piece and 30 people are willing to pay $100 per piece, then pushing production costs down from $10 to $5 per piece yields a net gain of only $150 — a pittance compared to the existing profit of $2700. On the other hand, if there are 300 people willing to pay $10 per piece, existing profit is zero, whereas the profit arising from reducing the cost to $5 per piece is $1500. On the third hand, people willing to pay only $1 per piece are useless in terms of spurring investment to reduce the price to $5, since they won't buy at that price anyway.
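
The arithmetic in this example can be checked directly (a sketch of my own, using the hypothetical prices and populations from the paragraph above):

```python
# Checking the incentive arithmetic above (hypothetical numbers from the
# paragraph): the gain from a cost cut scales with the number of buyers
# served, not with how far their willingness to pay exceeds the price.
def profit(buyers, price, unit_cost):
    return buyers * (price - unit_cost)

# 30 buyers willing to pay $100:
print(profit(30, 100, 10))                       # 2700 existing profit
print(profit(30, 100, 5) - profit(30, 100, 10))  # 150 gain from the cost cut
# 300 buyers willing to pay $10:
print(profit(300, 10, 10))                       # 0 existing profit
print(profit(300, 10, 5) - profit(300, 10, 10))  # 1500 gain from the cost cut
```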

Building on the preceding point, the market segment that plays the most critical role in pushing the frontier of technology can change as the technology improves. Initially, when prices are too high, the segment that pushes the technology further is the small high-paying elite (the early adopters). As prices fall, the market segment that plays the most critical role becomes less elite and less willing to pay. In a sense, the market segments willing to pay more are "freeriding" off the others — they don't care enough to strike a tough bargain, but they benefit from the lower prices resulting from the others who do. Market segments for whom the technology is still too expensive also benefit in terms of future expectations. Poor people who couldn't afford mobile phones in 1994 benefited from the rich people who generated demand for the phones in 1994, and from the middle-income people who generated demand for the phones in 2004, so that now, in 2014, the phones are cost-effective for many of the poor people.

It becomes clear from the above that the continued operation of technological progress depends on the continued expansion of the market into segments that are progressively larger and willing to pay less. Note that the new populations don't have to be different from the old ones — it could happen that the earlier population has a sea change in expectations and demands more from the same suppliers. But it seems like the effect would be greater if the population size expanded and the willingness to pay declined in a genuine sense (see the PS). Note, however, that if the willingness to pay for the new population was dramatically lower than that for the earlier one, there would be too large a gap to bridge (as in the example above, going from customers willing to pay $100 to customers willing to pay $1 would require too much investment in research and development and may not be supported by the market). You need people at each intermediate stage to spur successive stages of investment.

A closely related point is that even though improving a technology by a huge factor (such as 1000X) could yield huge gains that would, on paper, justify the cost of investment, the costs in question may be too large and the uncertainty too high to justify the investment. What would make it worthwhile is if intermediate milestones were profitable. This is related to the point about the gradual expansion of the market from a small number of buyers with high willingness to pay to a large number of buyers with low willingness to pay.

In particular, the vision of the Singularity is very impressive, but simply having that kind of end in mind 30 years down the line isn't sufficient for commercial investment in the technological progress that would be necessary. The intermediate goals must be enticing enough.

#5: Different market segments may face different technological challenges

There are two ends at which technological improvement may occur: the frontier end (of the highest capacity or performance that's available commercially) and the low-cost end (the lowest cost at which something useful is available). To some extent, progress at either end helps with the other, but the relationship isn't perfect. The low-cost end caters to a larger mass of low-paying customers and the high-cost end caters to a smaller number of higher-paying customers. If progress on either end complements the other, that creates a larger demand for technological progress on the whole, with each market segment freeriding off the other. If, on the other hand, progress at the two ends requires distinct sets of technological innovations, then overall progress is likely to be slower.

In some cases, we can identify more than two market segments based on cost, and the technological challenge for each market segment differs.

Consider the case of USB flash drives. We can broadly classify the market into three segments:

  • At the high end, there are 1 TB USB 3.0 flash drives worth $3000. These may appeal to power users who like to transfer or back up movies and videos using USB drives regularly.
  • In the middle (the range most customers in the First World, and their equivalents elsewhere in the world, would consider) are flash drives in the 16-128 GB range with prices ranging from $10-100. These are typically used to transfer documents and install software, with the occasional transfer of a movie.
  • At the "low" end are flash drives with 4 GB or less of storage space. These are sometimes ordered in bulk for organizations and distributed to individual members. They may be used by people who are highly cash-constrained (so that even a $10 cost is too much) and don't anticipate needing to transfer huge files over a USB flash drive.

The cost challenges in the three market segments differ:

  • At the high end, the challenges of miniaturization of the design dominate.
  • At the middle, NAND flash memory is a critical determinant of costs.
  • At the low end, the critical factor determining cost is the fixed costs of production, including the costs of packaging. Reducing these costs would presumably involve cheaper, more automated, more efficient packaging.

Progress in all three areas is somewhat related, but not strongly. In particular, the middle is the part that has seen the most progress over the last decade or so, perhaps because demand in this segment is most robust and price-sensitive, or because the challenges there are the ones that are easiest to tackle. Note also that the definitions of the low, middle, and high end are themselves subject to change. Ten years ago, there wasn't really a low or high end (more on this in the historical anecdote below). More recently, some capacities have moved from the high end to the middle, and others have moved from the middle to the low end.

#6: How does the desire for more technological progress relate to the current level of a technology? Is it proportional, as per the exponential growth story?

Most of the discussion of laws such as Moore's law and Kryder's law focuses on the question of technological feasibility. But demand-side considerations matter, because they're what motivates investments in these technologies. In particular, we might ask: to what extent do people value continued improvements in processing speed, memory, and hard disk space, directly or indirectly?

The answer most consistent with exponential growth is that whatever level you are currently at, you pine for having more in a fixed proportion to what you currently have. For instance, for hard disk space, one theory could be that if you can buy x GB of hard disk space for $1, you'd be really satisfied only with 3x GB of hard disk space for $1, and that this relationship will continue to hold whatever the value of x. This model relates to exponential growth because it means that the incentives for proportional improvement remain constant with time. It doesn't imply exponential growth (we still have to consider technological hurdles) but it does take care of the demand side. On the other hand, if the model were false, it wouldn't falsify exponential growth, but it should make us more skeptical of claims that exponential growth will continue to be robustly supported by market incentives.
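
Here is one way to formalize the contrast (my own toy formalization with made-up parameters, not the author's model): under proportional desire, the satisfying level is a constant multiple of the current level at every x; under a satiating alternative, the desired premium shrinks as x approaches a "good enough" level.

```python
import math

# A toy formalization (mine, with made-up parameters) contrasting the
# proportional desire model with a satiating alternative.
def proportional_desire(x, ratio=3.0):
    return ratio * x  # always pine for 3x whatever you currently have

def satiating_desire(x, ratio=3.0, good_enough=100.0):
    # Illustrative form: the desired premium over x decays as x approaches
    # the "good enough" level, sapping the incentive for further progress.
    return x * (1 + (ratio - 1) * math.exp(-x / good_enough))

for x in [1, 10, 100, 1000]:
    print(x, proportional_desire(x), round(satiating_desire(x), 1))
# Under satiation, a user at x=1000 barely wants more than they already
# have -- matching the hard disk and flash drive anecdotes below.
```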

How close is the proportional desire model to the reality? I think it's a bad description. I will take a couple of examples to illustrate.

  • Hard disk space: When I started using computers in the 1990s, I worked on a computer with a hard disk size of 270 MB (that included space for the operating system). The hard disk really did get full just with ordinary documents and spreadsheets and a few games played on monochrome screens — no MP3s, no photos, no videos, no books stored as PDFs, and minimal Internet browsing support. When I bought a computer in 2007, it had 120 GB (105 GB accessible), and when I bought a computer last year, it had 500 GB (450 GB accessible). I can say quite categorically that the experiences are qualitatively different. I no longer have to think about disk space considerations when downloading PDFs, books, or music — but keeping hard disk copies of movies and videos might still give me pause in the aggregate. I actually downloaded a 10 GB offline version of Wikipedia, something that gave me only a small amount of pause with regard to disk space requirements. Do I clamor for an even larger hard disk? Given that I like to store videos and movies and offline Wikipedia, I'd be happy if the next computer I buy (maybe 7-10 years down the line?) had a few terabytes of storage. But the issue lacks anything like the urgency that running out of disk space had back in the day. I probably wouldn't be willing to pay much for improvements in disk space at the margin. And I'm probably at the "use more disk space" extreme of the spectrum — many of my friends have machines with 120 GB hard drives and are nowhere near running out. Basically, the strong demand imperative that existed in the past for improving hard drive capacity no longer exists (here's a Facebook discussion I initiated on the subject).
  • USB flash drives: In 2005, I bought a 128 MB USB flash drive for about $50 USD. At the time, things like Dropbox didn't exist, and the Internet wasn't too reliable, so USB flash drives were the best way of both backing up and transferring files. I would often come close to running out of space on my flash drive just transferring essential items. In 2012, I bought two 32 GB USB flash drives for a total cost of $32 USD. I used one of them to back up all my documents plus a number of my favorite movies, and still had a few GB to spare. The flash drives do prove inadequate for transferring large numbers of videos and movies, but those are niche needs that most people don't have. It's not clear to me that people would be willing to pay more for a 1 TB USB flash drive (a few friends I polled on Facebook listed reservation prices for a 1 TB USB flash drive ranging from $45 to $85; currently, $85 is the approximate price of a 128 GB USB flash drive; here's the Facebook discussion). At the same time, it's not clear that lowering the cost of production for the 32 GB USB flash drive would significantly increase the number of people who would buy it. On either end, therefore, the incentives for innovation seem low.

#7: Complementary innovation and high conjunctivity of the progress scenario

The discussion of the hard disk and USB flash drive examples suggests one way to rescue the proportional desire and exponential growth views. Namely, the problem isn't with people's desires not growing fast enough, it's with complementary innovations not happening fast enough. In this view, maybe if processor speed improved dramatically, new applications enabled by that would revive the demand for extra hard disk space and NAND flash memory. Possibilities in this direction include highly redundant backup systems (including peer-to-peer backup), extensive internal logging of activity (so that any accidental changes can be easily located and undone), extensive offline caching of websites (so that temporary lack of connectivity has minimal impact on browsing experience), and applications that rely on large hard disk caching to complement memory for better performance.

This rescues continued exponential growth, but at a high price: we now need to make sure that a number of different technologies are progressing simultaneously. Any one of these technologies slowing down can cause demand for the others to flag. The growth scenario becomes highly conjunctive (you need a lot of particular things to happen simultaneously), and it's highly unlikely to remain reliably exponential over the long run.
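
A toy model (my own, with made-up probabilities) shows how quickly conjunctivity bites:

```python
# A toy model (mine, with made-up probabilities) of conjunctive scenarios:
# if each of n complementary technologies independently stays on trend with
# probability p in a given period, the whole scenario stays on trend with
# probability p**n, which decays geometrically in n.
def on_trend_probability(p, n_technologies):
    return p ** n_technologies

for n in [1, 3, 5, 8]:
    print(n, round(on_trend_probability(0.9, n), 2))
# 1 -> 0.9, 3 -> 0.73, 5 -> 0.59, 8 -> 0.43: even individually reliable
# components leave the conjunction much less likely to hold.
```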

I personally think there's some truth to the complementary innovation story, but I think the flagging of demand in absolute terms is also an important component of the story. In other words, even if home processors did get a lot faster, it's not clear that the creative applications this would enable would have enough of a demand to spur innovation in other sectors. And even if that's true at the current margin, I'm not sure how long it will remain true.

This blog post was written in connection with contract work I am doing for the Machine Intelligence Research Institute, but represents my own views and has not been vetted by MIRI. I'd like to thank Luke Muehlhauser (MIRI director) for spurring my interest in the subject, Jonah Sinick and Sebastian Nickel for helpful discussions on related matters, and my Facebook friends who commented on the posts I've linked to above.

Comments and suggestions are greatly appreciated.

PS: In the discussion of different market segments, I argued that the presence of larger populations with lower willingness to pay might be crucial in creating market incentives to further improve a technology. It's worth emphasizing here that the absolute size of the incentive depends on the population more than on the willingness to pay. To reduce the product cost from $10 to $5, the profit from a population of 300 people willing to pay at least $10 is $1500, regardless of the precise amount they are willing to pay. But as an empirical matter, accessing larger populations requires going to lower levels of willingness to pay (that's what it means to say that demand curves slope downward). Moreover, the current distribution of disposable wealth (as well as willingness to experiment with technology) around the world is such that the increase in population size is huge as we go down the rungs of willingness to pay. Finally, the proportional gain from reducing production costs is higher for populations with lower willingness to pay, and proportional gains might often be better proxies for the incentives to invest than absolute gains.

I made some minor edits to the TL;DR, replacing "downward-sloping demand curves" with "downward-sloping supply curves" and replacing "technological progress" with "exponential technological progress". Apologies for not having proofread the TL;DR carefully before.

[Requesting advice] Problems with optimizing my life as a high school student

12 Optimal 14 April 2014 01:07PM

I am writing this because I believe I need advice and direction from people who can understand my problems. This is my first post on Less Wrong, and I am new to practicing serious writing/rationality in general, so please alert me if I have made any glaring mistakes in this text or in my decisions/beliefs. I will begin by describing myself and my situation.

(This article turned out a lot longer than I thought it would, and it might be hard to follow as a result. I urge you to skim through it once, reading the first sentence of each paragraph, before reading it in full.)

I am a 16 year old male currently enrolled in an online high school that will remain nameless. My story will be very familiar to most of you: I want to help ensure that the invention of self-improving AI will benefit humanity (and myself, particularly), and I am devoting my entire life to this single goal. This is only possible because I am in a highly favorable position, having a safe home, loving family, secure financial support, internet access, and a tremendous amount of unrestrained free time.

My free time is the result of my relatively undemanding online school plus my unrestrictive parents. To give you an idea of how significant it is: for several days, I could do nothing but play video games and look at porn. And I mean nothing: I could rush right through my online lessons, avoid all exercise and sunlight, stay up until 4AM, and have (unhealthy) food brought to my room. Nobody would stop me from maintaining such self-destructive habits. I could go on doing those things for years. And that is exactly what I did, starting when I was age 11 and ending when I was age 15.

For most of the past year, I have been dedicated to overhauling my life, eliminating 'negative' (self-destructive, shortsighted, unproductive) habits and introducing more positive (healthy, considerate of the future, productive) ones. I did this, of course, because I learned about the profound implications of the technological singularity. I decided that I needed to be a healthy, knowledgeable, and productive person to maximize my chances of being able to experience the joys of future technologies. I'm sure that many of you can identify with that sentiment, although I doubt that anyone could have been lazier than me.

The past year was easily the most important year of my life, and will likely remain so for quite a while. As you may have guessed, it was also the most difficult time of my life. The first 5-6 months were particularly painful, mostly because of my severe addiction to internet porn. During that time, I was putting most of my effort into eliminating negative habits. I still added many positive habits, the most prominent being programming, reading (fiction only) offline, exercise, healthy eating, and meditation. Many of my habits fluctuated; I experimented a lot. There was one consistent trend, however, in the most important habits: average time spent on the computer for entertainment gradually decreased, while time spent on programming and reading increased in turn.

I would say that I succeeded at overhauling my life. Unfortunately, because my sole goal was 'reduce negative time, increase positive time', my 'positive' time is not nearly as positive as it could be. Sometimes I find myself staring at a programming e-book for an hour or more and learning nothing. Despite its relative ease, schoolwork often causes me to become stressed quickly. I had been practicing mindfulness meditation for 20-40 minutes a day, but I recently reduced and then removed that habit because it almost never helped me. Reading, exercise, and healthy eating were the only habits that always stuck with me no matter how badly I felt.

The most essential habit I built was the habit of tracking my habits. That is, I created a spreadsheet in OpenOffice to keep track of the time I spent on various activities every day. This was a very good thing to do: it motivated me when I was struggling to control my habits, and it now allows me to view my overall progress. These statistics are very helpful in getting a picture of my life and of my habits, so I will provide an abridged/condensed version of the entire spreadsheet collection. For each month, the average time I spent daily on each activity is shown. Numbers in bold indicate highly inaccurate measurements, taken from months wherein I mostly abstained from activity tracking.

(imgur version if it does not display properly)

'Reading offline' means either nonfiction or fiction (it was mostly fiction). 'Schoolwork' often meant programming assignments. Video games count as leisure computer use. For most of 2013 I only did game programming; this was before I realized that 'AI programming' was more important than 'any programming'. Until recently, I was adding leisure computer use time much too gratuitously: I erroneously categorized it as 'any time spent on the computer not covered by other activities'. The statistics for most of 2013 are slightly flawed as a result. All of the recorded daily activity times probably had a margin of error of around 15%. Also, the monthly averages are not good indicators of how I scheduled my activities; in December, for example, I did not play video games for 15-20 minutes every day (having more spaced-out, longer sessions instead), but my art practice was always 30-80 minutes a day.

Some patterns/trends here are obvious (programming), while others are more random (schoolwork). Programming and reading are obviously the dominant activities in my life. Until late 2013, I only read fiction. For better or worse, I recently realized that reading fiction and practicing art are, from a productivity/time-management perspective, equivalent to playing video games and watching television. I had abstained from activity tracking for most of Jan-Mar as an experiment, but I estimate that I was reading fiction for at least 3 hours every day during most of that period (Kkat is to blame). This is only slightly odd, because around New Year's I was starting to focus on maximizing daily programming time, bringing the average up to over 3 hours. If you were wondering just how demanding my online school can be, the 44-min average recorded (over about a week) in January should give you an idea.

As I said before: I have been increasing the time I spend on positive activities, but the activities are not nearly as positive as they could be. I've tried practicing mindfulness many times, in various forms, to increase my productivity and happiness, but I could never consistently get it to work well. I know that quality > quantity here, and that I should study/work mindfully and efficiently instead of simply pouring time into the activity.

I used to put just enough time into productive activities to achieve the set 'daily minimum time' (different for all activities; it was always 40-80 minutes for programming and 15-30 for art) and be satisfied. I don't see it that way now; no matter how much time I put into a productive activity, I cannot partake in an 'unproductive' activity without thinking "this time could be used in a more future-benefiting way". This is a big problem, because I am making my leisure time less leisurely and, by pouring time into the productive activities, making them less productive and more stressful. I am also aware of the fact that my present happiness only matters because it increases my productivity/general capability and therefore my chances of experiencing some kind of 'happy singularity'. This makes fun time even more difficult, because I am thinking that I could instead perform my productive activities in a more fun/mindful way, reducing the need for unproductive fun activities.

I recently found an article here that describes, almost exactly, this problem of mine. Reading that nearly blew my mind because I had never explicitly realized the problem before. I quote:

So I'm really not recommending that you try this mindhack. But if you already have spikes of guilt after bouts of escapism, or if you house an arrogant disdain for wasting your time on TV shows, here are a few mantras you can latch on to to help yourself develop a solid hatred of fun (I warn you that these are calibrated for a 14 year old mind and may be somewhat stale):

  • When skiing, partying, or generally having a good time, try remembering that this is exactly the type of thing people should have an opportunity to do after we stop everyone from dying.
  • When doing something transient like watching TV or playing video games, reflect upon how it's not building any skills that are going to make the world a better place, nor really having a lasting impact on the world.
  • Notice that if the world is to be saved then it really does need to be you who saves it, because everybody else is busy skiing, partying, reading fantasy, or dying in third world countries.

(Warning: the following sentences contain opinions.) The worst part is that this seems to be the right thing to do. There is a decent possibility that infinite happiness (or at least, happiness much greater than what could be experienced in a traditional human lifetime) can be experienced via friendly ASI; we should work towards achieving that instead of prioritizing any temporary happiness. But present happiness increases present productivity, so a sort of happiness/productivity balance needs to be struck. Kaj_Sotala, in the comments of the previously linked post, provides a strong argument against hating fun:

The main mechanism here seems to be that guilt not only blocks the relaxation, it also creates negative associations around the productive things - the productivity becomes that nasty uncomfortable reason why you don't get to do fun things, and you flinch away from even thinking about the productive tasks, since thinking about them makes you feel more guilty about not already doing them. Which in turn blocks you from developing a natural motivation to do them.

This feeling is so strong for me because nearly all of my productivity is based on guilt. Especially in the first six months of my productive transformation, I was training myself to feel very guilty when performing negative activities or when failing to perform positive ones. A lot of the time, I only did productive things because I knew I would feel bad if I did otherwise. There was no other way, really; at the time my negative habits were so pronounced that extreme action was required. But my most negative habits are defeated now, and because of my guilt-inducing strategy I cannot find a balance between happiness and productivity. Based on the above quote, the important thing is to make productive activities have a positive mental association. They have negative associations mostly because they are tiring, frustrating, or fruitless, or because they stop you from performing more fun activities.

One apparent solution is to perform all productive tasks mindfully/leisurely and give up unproductive fun activities completely (the most logical choice if human akrasia is not considered). The other solution is to perform productive tasks mindfully, and have structured, guilt-free periods of leisure time. Based on others' comments here, the second solution is more practical, but I still have a hard time accepting unproductivity and enjoying productivity. My habit of activity tracking makes this worse; I can literally see the 'lost' minutes when I choose to partake in a leisure-time activity.

In the past few weeks, I have been partaking in less leisure time than ever before. I have only played video games when other people drag me into them and I am too uncertain to resist, and I always use my designated 'leisure computer use' time in the most 'fun-efficient' way possible (this has been the case for several months). That means avoiding mind-numbing activities like browsing reddit or 4chan, instead choosing to experience more soulful things that I have always held dear, like music, art, and certain other fantasies. But even then, I feel that I could be doing something more beneficial.

Here is where I need advice and other opinions: how much structured leisure time should I allocate, to achieve the optimal happiness/productivity balance? Would it be practical to attempt to give up structured 'fun time' completely, optimizing productive activities to be more mindful and leisurely? (See activity tracker: I would be able to give up all leisure time, but I would find it much harder to optimize productive time.) How much structured 'fun time' do you think established or upcoming AI researchers regularly allocate, and how does this affect their happiness/productivity balance?

I have established two of my problems: I cannot enjoy fun things and I am not a very good autodidact. I'm not only bad at studying individual topics: I often do not study consistently, glossing over sections or bouncing between books/exercises. I've proven that I definitely learn best by doing, but it's most often hard to find things to do, especially when dealing with more theoretical topics. I'm also never entirely sure of what topics I should be studying. For example: should I read books and take courses about machine learning, or wait until I finish statistics? Should I become competent at competition programming/algorithms before studying cognitive science, or will competition programming skills not even help me at all? Should I not even be asking the above questions, instead just doing everything at once? It's those kinds of questions without answers that make me think that I really don't know what I'm doing, and that college can't come soon enough.

My second request for advice is this: what would you recommend for me to do, to improve my studying habits in the face of uncertainty? How can I choose and maintain a good 'course sequence'? How should I make designated studying time less stressful and more efficient? Also, based on the averages I provided, should I adjust how much time I am spending on different activities?

And so my main points are concluded. Like I said, I'm not very experienced in rationality, writing, or serious conversation with intelligent people, so I apologize if anything I just said seems erroneous. I do hope that my (perceived) issues can be at least partially resolved as a result of writing this.

I'm not done here, though: I have a few other concerns, these ones about high school and college. My current online school is a favorable learning environment: it is flexible, not overwhelmingly difficult or trivially easy, and easy to exploit when it is sensible to do so. My online schooling provides me with an exceptional degree of freedom; I would never go back to a physical school and give it all up to a broken system. I recently found out about Stanford University Online High School, however, and this challenged my opinion of my current school. My third concern is whether or not I should (attempt to) switch schools. I have good reasons supporting either choice, and I am unsure. I urge you to visit that link to learn about the school if you have not done so already.

Allow me to point out the most important difference: compared to Stanford OHS lessons, my current lessons seem dull and tedious. Stanford OHS lessons are more based on intellectually stimulating and personally engaging activities, in contrast to the more straightforward memorization tasks of (most of) my current school's lessons. At least, this seems to be the case, based on my (probably biased) observations and predictions. I'm not condemning my current school; they are actually trying to get more intellectually stimulating and personally engaging features in, but I can't seem to benefit from any of it. I am about to load up on AP courses, however, which may end up providing more beneficial and engaging work (or just more difficult memorization tasks). Also, enrolling in Stanford OHS would greatly reduce my free time and freedoms when dealing with school, and I might dislike the required video-conferences.

There are other, more defined problems with the Stanford OHS approach. For one, I would need to rush to apply: I would have to take the SAT in less than a month, much earlier than I had originally planned (we've contacted Stanford OHS already; they said that they will allow me to apply after May 1 if I am taking the SAT on May 3). As a result, I may earn an unsatisfactory score on the SAT (consider the average scores here). Apparently, they also require recommendations in applications (not very easy to acquire when you're in online school). Despite those things, I believe I would have a good chance of being accepted, taking into consideration all of my other favorable traits aside from SAT scores or recommendations.

I might be more favored by top colleges if I graduated from the Stanford OHS as opposed to my current school. On the other hand, my capability to self-educate outside of the system will be a hook for colleges, especially if I can complete MOOCs and read college-level textbooks, so perhaps I should maximize free time by staying with my current school. Back on the first hand, I have proven myself to be an inefficient self-educator, so a more structured approach may work better. Either way, after graduating, I am going to apply to some of the most prominent computer-science programs (no, I'm not going only by that one list). Carnegie Mellon would be my first choice, mostly because of its proximity to home.

And so my last set of questions is formed: Should I attempt to enroll in Stanford OHS? If not, should I indeed be focusing mostly on studying AI-related topics and working on software projects? Either way: assuming I have a >3.7 GPA, >700 SAT scores, and relevant AP courses/tests completed, should I have a decent chance of being accepted to one of the high-ranking computer science colleges?

Well, that will be all for today. If this were any other internet community, I would be very surprised if anyone read the whole thing. Even if I don't receive any helpful answers, I at least gained some writing skill points.


Beware technological wonderland, or, why text will dominate the future of communication and the Internet

11 VipulNaik 13 April 2014 05:34PM

Disclaimer: The views expressed here are speculative. I don't have a claim to expertise in this area. I welcome pushback and anticipate there's a reasonable chance I'll change my mind in light of new considerations.

One of the interesting ways that many 20th century forecasts of the future went wrong is that they posited huge physical changes in the way life was organized. For instance, forecasters posited huge changes along these dimensions:

  • The home living arrangements of people. Smart homes and robots were routinely foreseen over time horizons where progress towards those ends would later turn out to be negligible.
  • Overoptimistic as well as overpessimistic scenarios for energy sources merged in strange ways. People believed the world would run out of oil by now, but at the same time envisioned nuclear-powered flight and home electricity.
  • Overoptimistic visions of travel: People thought humans would be sending out regular manned missions to the solar system planets, and space colonization would be on the agenda by now.
  • The types of products that would be manufactured. New products ranging from synthetic meat to room temperature superconductors were routinely prophesied to happen in the near future. Some of them may still happen, but they'll take a lot longer than people had optimistically expected.

At the same time, they underestimated to quite an extent the informational changes in the world:

  • With the exception of forecasters specifically studying computing trends, most missed the dramatic growth of computing and the advent of the Internet and World Wide Web.
  • Most people didn't appreciate the extent of the information and communication revolution and how it would coexist with a world that looked physically indistinguishable from the world of 30 years ago. Note that I'm looking here at the most advanced First World places, and ignoring the point that many places (particularly in China) have experienced huge physical changes as a result of catch-up growth.

My LessWrong post on megamistakes discusses these themes somewhat in #1 (the technological wonderland and timing point) and #2 (the exceptional case of computing).

What about predictions within the informational realm? I detect a similar bias. It seems that prognosticators and forecasters tend to give undue weight to heavyweight technologies (such as 3D videoconferencing) and ignore the fact that the bulk of the production and innovation has been focused on text, and, to a somewhat lesser extent, on the images that augment and interweave with the text. In this article, I lay out the pro-text position. I don't have high confidence in the views expressed here, and I look forward to critical pushback that changes my mind.

Text: easier to produce

One great thing about text is its lower production costs. To the extent that content production is small in volume and dominated by a few big players, high-quality video and audio play an important role. But as the Internet "democratizes" content production, it's a lot easier for a lot of people to contribute text than to contribute audio or video content.

Some advantages of text from the creation perspective:

  • It's far easier to edit and refine. This is a particularly big issue because with audio and video, you need to rehearse, do retakes, or do heavy editing in order to make something coherent come out. The barriers to text are lower.
  • It's easier to upload and store. Text takes less space, and uploading it to a network or sending it to a friend takes less bandwidth (see the back-of-envelope comparison after this list).
  • People are (rightly or wrongly) less concerned about putting their best foot forward with text. People often spend a lot of time selecting their very best photos, even for low-stakes situations like social networks. With text, they are relatively less inhibited, because no individual piece of text represents them as a person the way they feel their physical appearance or mannerisms do. This allows people to create a lot more text. Note that Snapchat may be an exception that proves the rule: people flocked to it because its impermanence made them less inhibited about sharing. But its impermanence also means it does not add to the stock of Internet content. And it's still images, not videos.
  • It's easy to copy and paste.
  • As an ergonomic matter, typing all day long, although fatiguing, consumes less energy than talking all day long.
  • Text can be created in fits and bursts. Audio or video needs to be recorded more or less in one continuous sitting.
  • You can't play background music while having a video conversation or recording audio or video content.
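
Here is the back-of-envelope comparison referenced in the storage bullet above (my own order-of-magnitude assumptions, not measurements):

```python
# A back-of-envelope comparison (my own order-of-magnitude assumptions,
# not measurements) of storage/bandwidth per minute of communication.
KB, MB = 1024, 1024 ** 2

text_per_minute  = 1 * KB    # ~150-200 words/min typed, a few bytes per word
audio_per_minute = 1 * MB    # ~128 kbps compressed audio
video_per_minute = 20 * MB   # roughly standard-definition compressed video

for name, size in [("text", text_per_minute),
                   ("audio", audio_per_minute),
                   ("video", video_per_minute)]:
    print(f"{name:>5}: ~{size / KB:>6,.0f} KB per minute")
# Text comes out thousands of times cheaper than video, which is why it
# stays easy to create, store, and share on poor connections.
```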

Text: easier to consume and share

Text is also easier to consume and share.

  • Standardization of format and display methods makes the consumption experience similar across devices.
  • Low storage and bandwidth costs make it easy to consume over poor Internet connections and on a range of devices.
  • Text can be read at the user's own pace. People who are slow at grasping the content can take time. People who are fast can read very quickly.
  • Text can be copied, pasted, modified, and reshared with relative ease.
  • Text is easier to search (this refers both to searching within a given piece of text and to locating a text based on some part of it or some attributes of it).
  • You can't play background music while consuming audio-based content, but you can do it while consuming text.
  • Text can more easily be translated to other languages.

On the flip side, reading text requires you to have your eyes glued to the screen, which reduces your flexibility of movement. But because you can take breaks at will, it's not a big issue. Audiobooks do offer the advantage that you can move around (e.g., cook in the kitchen) while listening, and some people who work from home are quite fond of audiobooks for that purpose. In general, the benefits of text seem to outweigh the costs.

Text generates more flow-through effects

Holding consumers' willingness to pay constant, text-based content is likely to generate greater flow-through effects because of its ability to foster more discussion and criticism and to be modified and reused for other purposes. This is related to the point that video and audio consumption on the Internet generally tends to substitute for TV and cinema trips, which are largely pure consumption rather than intermediate steps to further production. Text, on the other hand, has a bigger role in work-related activity.

Augmented text

When I say that text plays a major role, I don't mean that long ASCII strings are the be-all-and-end-all of computing and the Internet. Rather, more creative and innovative ways of interweaving a richer set of expressive and semantically powerful symbols in text is very important to harnessing its full power. It really is a lot different to read The New York Times in HTML than it would be to read the plain text of the article on a monochrome screen. The presence of hyperlinks, share buttons, the occasional image, sidebars with more related content, etc. add a lot of value.

Consider Facebook posts. These are text-based, but they allow text to be augmented in many ways:

  • Inline weblinks are automatically hyperlinked when you submit the post (though at present it's not possible to edit the anchor text to show something different from the weblink).
  • Hashtags can be used, and link to auto-generated Facebook pages listing recent uses of the hashtag.
  • One can tag friends and Facebook groups and pages, subject to some restrictions. For friends tagged, the anchor text can be shortened to any one word in their name.
  • One can attach links, photos, and files of some types. By default, the first weblink that one uses in the post is automatically attached, though this setting can be overridden. The attached link includes a title, summary, and thumbnail.
  • One can set a location for the post.
  • One can set the timing of publication of a post.
  • Smileys are automatically rendered when the post is published.
  • It's possible to edit the post later and make changes (except to attachments?). People can see the entire edit history.
  • One can promote one's own post at a cost.
  • One can delete the post.
  • One can decide who is allowed to view the post (and also restrict who can comment on the post).
  • One can identify who one is with at the time of posting.
  • One can add a rich set of "verbs" to specify what one is doing.

Consider the actions that people reading the posts can perform:

  • Like the post.
  • Comment on the post. Comments automatically include link previews, and they can also be edited later (with edit histories available). Comments can also be used to share photos.
  • Share the post.
  • Select the option to get notifications on updates (such as further comments) on the post.
  • Like comments on the post.
  • Report posts or mark them as spam.
  • View the edit history of the post and comments.
  • For posts with restrictions on who can view them, see who can view the post.
  • View a list of others who re-shared the post.

If you think about it, this system, although it basically relies on text, has augmented text in a lot of ways with the intent of facilitating more meaningful communication. You may find some of the augmentations of little use to you, but each feature probably has at least a few hundred thousand people who greatly benefit from it. (If nobody uses a feature, Facebook axes it).

I suspect that the world ten years from now will feature text that is richly augmented relative to the text of today, in much the same way that the text of today is richly augmented compared to what it was back in 2006. Unfortunately, I can't predict any very specific innovations (if I could, I'd be busy programming them, not writing a post on LessWrong). And it might very well be the case that the low-hanging fruit with respect to augmenting text has already been taken.

Why didn't all the text augmentation happen at once? None of the augmentations are hard to program in principle. The probable reasons are:

  • Training users: The augmented text features need a loyal userbase that supports and implements them. So each augmentation needs to be introduced gradually in order to give users onboarding time. Even if Facebook in 2006 knew exactly what features they would eventually have in 2014, and even if they could code all the features in 2006, introducing them all at once might scare users because of the dramatic increase in complexity.
  • Deeper insight into what features are actually desirable: One can come up with a huge list of features and augmentations of text that might in principle be desirable, but only a small fraction of them pass a cost-benefit analysis (where the cost is the increased complexity of the user interface). Discovering what features work is often a matter of trial and error.
  • Performance in terms of speed and reliability: Each augmentation adds an extra layer of code, reducing performance in terms of speed and reliability. As computers and software have gotten faster and more powerful, and Internet companies' revenues have increased (giving them more leeway to spend on server space), investments in such features have become more worthwhile.
  • Focus on userbase growth: Companies were spending their resources on growing their userbase rather than adding features. Note that this is the main point that is likely to change soon: the userbase is within an order of magnitude of the whole world population.

Images

Images play an important role alongside text. Indeed, websites such as 9GAG rely on images, and others like Buzzfeed heavily mix text and images.

I think images will continue to grow in importance on the Internet. But the way the use of images is likely to unfold is probably quite different from what futurists generally envisage. We're not talking of a future dominated by professionally done (or even amateur) 16-megapixel photography. Rather, we're talking of images that are used to convey basic information or make a memetic point. Consider that many of the most widely shared images are the standard images for memes. The number of standard meme images is much smaller than the number of memes built on them: meme creators just reuse a standard image, and their own contribution is the text at the top and bottom of the meme. Thus, even while the Internet uses images, the production at the margin largely involves text. The picture is scaffolding. Webcomics (I'm personally most familiar with SMBC and XKCD, but there are other more popular ones) are at the more professional end, but they too illustrate a similar point: it's often the value of the ideas being creatively expressed, rather than the realism of the imagery, that delivers value.

One trend that was big in the early days of the Internet, then died down, and now seems to be reviving is the animated GIF. Animated GIFs allow people to convey simple ideas that cannot be captured in still images, without having to create a video. They also use a lot less bandwidth for consumers and web hosts than videos. Again, we see that the future is about economically using simple representations to convey ideas or memes rather than technologically awesome photography.

Quantitative estimates

Here's what Martin Hilbert wrote in How Much Information is There in the "Information Society" (p. 3):

It is interesting to observe that the kind of content has not changed significantly since the analog age: despite the general perception that the digital age is synonymous with the proliferation of media-rich audio and videos, we find that text and still images capture a larger share of the world’s technological memories than before the digital age. In the early 1990s, video represented more than 80 % of the world’s information stock (mainly stored in analog VHS cassettes) and audio almost 15 % (audio cassettes and vinyl records). By 2007, the share of video in the world’s storage devices decreased to 60 % and the share of audio to merely 5 %, while text increased from less than 1 % to a staggering 20 % (boosted by the vast amounts of alphanumerical content on internet servers, hard-disks and databases). The multi-media age actually turns out to be an alphanumeric text age, which is good news if you want to make life easy for search engines.

I had come across this quote as part of a preliminary investigation for MIRI into the world's distribution of computation (though I did not highlight the quote there, since it was relatively unimportant to that investigation). As another data point, Facebook claims that it needed 700 TB (as of October 2013) to store all the text-based status updates and comments plus relevant semantic information on users that would be indexed by Facebook Graph Search once it was extended to posts and comments. Contrast this with the few petabytes of storage needed for all their photos (see also here), despite the fact that one photo takes up a lot more space than one text-based update.
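
A quick back-of-the-envelope calculation shows how both figures can be roughly consistent. Every number below is my own illustrative assumption, chosen only to reproduce the magnitudes quoted above; none of them are Facebook's actual counts.

```python
# Back-of-envelope: aggregate text vs. photo storage.
# All inputs are assumed for illustration, not Facebook's real figures.

avg_text_item = 500      # bytes per status update/comment, with metadata (assumed)
num_text_items = 1.4e12  # total posts and comments (assumed)
avg_photo = 2e6          # bytes per stored photo, ~2 MB (assumed)
num_photos = 2.5e9       # total photos (assumed)

text_total = avg_text_item * num_text_items   # 7.0e14 bytes
photo_total = avg_photo * num_photos          # 5.0e15 bytes

print(f"text:   {text_total / 1e12:.0f} TB")  # text:   700 TB
print(f"photos: {photo_total / 1e15:.0f} PB") # photos: 5 PB

# A photo is ~4,000x larger than a text item, yet aggregate text storage is
# within an order of magnitude of photo storage, because text items are
# several hundred times more numerous.
```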

Beautiful text

The Internet looks a lot more beautiful today than it did ten years ago. Why? Small, incremental changes in the way that text is displayed have played a role. New fonts, new WordPress themes, a new Wikipedia or Facebook layout, all conspire to provide a combination of greater usability and greater aesthetic appeal. Also, as processors and bandwidth have improved, some layouts that were impractical earlier have become possible. The block tile layout for websites has caught on quite a bit, inspired by the attempt to create a unified, smooth browsing experience across a range of different devices (from small iPhone screens to large monitors used by programmers and data analysts).

Notice that it's the versatility of text that allows it to be upgraded. Videos created the old way would have to be redone to take advantage of new display technologies. But since text is stored as text, it can easily be rendered in a new font.

The wonders of machine learning

I've noticed personally, and some friends have remarked to me, that Google Search, GMail, and Facebook have gotten a lot better in recent years in many small incremental ways, despite no big leaps in the overall layout and functioning of the services. Facebook shows more relevant ads, makes better friend suggestions, and has a much more relevant news feed. Google Search is scarily good at autocompletion. GMail search is improving at autocompletion too, and the interface continues to improve. Many of these gains are the result of continuous incremental refinement, but there's some reason to believe that the more recent changes are driven in part by application of the wonders of machine learning (see here and here for instance).
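
To give a flavor of the kind of machinery involved, here is a minimal sketch of frequency-ranked prefix autocompletion. Everything in it (the class, the toy query log) is my own illustration, not Google's actual method; real systems layer personalization, context, and learned ranking models on top of something like this.

```python
from collections import Counter

class Autocomplete:
    """Toy autocompleter: rank completions of a prefix by past frequency."""

    def __init__(self, query_log):
        # Count how often each query appears in a (hypothetical) search log.
        self.counts = Counter(q.lower() for q in query_log)

    def suggest(self, prefix, k=3):
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda pair: -pair[1])  # most frequent first
        return [q for q, _ in matches[:k]]

log = ["less wrong", "less wrong sequences", "lesswrong meetup",
       "less wrong", "lesson plans", "less wrong sequences"]
print(Autocomplete(log).suggest("less"))
# ['less wrong', 'less wrong sequences', 'lesswrong meetup']
```

The incremental gains users notice are, plausibly, long series of small refinements to ranking logic of this general shape.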

Futurists tend to think of the benefits of machine learning in terms of qualitatively new technologies, such as image recognition, video recognition, object recognition, audio transcription, etc. And these are likely to happen, eventually. But my intuition is that futurists underestimate the proportion of the value from machine learning that is mediated through improvements to the existing interfaces that people already use (and that high-productivity people use more than average), such as their Facebook news feed or GMail or Google Search.

A place for video

Video will continue to be good for many purposes. The watching of movies will continue to migrate from TV and the cinema hall to the Internet, and the quantity watched may also increase because people have to spend less in money and time costs. Educational and entertainment videos will continue to be watched in increasing numbers. Note that these effects are largely a matter of substituting one medium for another, plus a raw increase in quantity, rather than paradigm shifts in the nature of people's activities.

Video chatting, through tools such as Skype or Google Talk/Hangouts, will probably continue to grow. These will serve as important complements to text-based communication. People do want to see their friends' faces from time to time, even if they carry out the bulk of their conversation in text. As Internet speeds improve around the world, the trivial inconveniences in the way of video communication will diminish.

But these will not drive the bulk of people's value-added from having computing devices or being connected to the Internet. And they will in particular be an even smaller fraction of the value-added for the most productive people or for the activities with maximum flow-through effects. Simply put, video just doesn't deliver as much information per unit of bandwidth and human inconvenience.

Progress in video may be similar to progress in memes and animated GIFs: there may be more use of animation to quickly create videos expressing simple ideas. Animated video hasn't taken off yet. Xtranormal shut down. The RSA Animate style made waves in some circles, but hasn't caught on widely. It may be that the code for simple video creation hasn't yet been cracked. Or it may be that if people are bothering to watch video, they might as well watch something that delivers video's unique benefits, and animated video offers little advantage over text, memes, animated GIFs, and webcomics. This remains to be seen. I've also heard of Vine (a service owned by Twitter for sharing very short videos), and that might be another direction for video growth, but I don't know enough about Vine to comment.

What about 3D video?

High definition video has made good progress in relative terms, as cameras, Internet bandwidth, and computer video playing abilities have improved. It'll be increasingly common to watch high definition videos on one's computer screen or (for those who can afford it) on a large flatscreen TV.

What about 3D video? If full-blown 3D video could magically appear all of a sudden with a low-cost implementation for both creators and consumers, I believe it would be a smashing success. In practice, however, the path to getting there would be more tortuous. And the relevant question is whether intermediate milestones in that direction would be rewarding enough to producers and consumers to make the investments worth it. I doubt that they would, which is why it seems to me that, despite the fact that a lot of 3D video stuff is technically feasible today, it will still probably take several decades (I'm guessing at least 20 years, probably more than 30 years) to become one of the standard methods of producing and consuming content. For it to even begin, it's necessary that improvements in hardware continue apace to the point that initial big investments in 3D video start becoming worthwhile. And then, once started, we need an ever-growing market to incentivize successive investments in improving the price-performance tradeoff (see #4 in my earlier article on supply, demand, and technological progress). Note also that there may be a gap of a few years, perhaps even a decade or more, between 3D video becoming mainstream for big budget productions (such as movies) and 3D video being common for Skype or Google Hangouts or their equivalent in the later era.

Fractional value estimates

I recently asked my Facebook friends for their thoughts on the fraction of the value they derived from the Internet that was attributable to the ability to play and download videos. I received some interesting comments there that helped confirm initial aspects of my hypothesis. I would welcome thoughts from LessWrongers on the question.

Thanks to some of my Facebook friends who commented on the thread and offered their thoughts on parts of this draft via private messaging.

Career prospects for physics majors

11 JonahSinick 04 April 2014 03:02AM

Physics is attractive to many highly intellectually capable students, because

  • Physical theories represent pinnacles of human achievement
  • It's intellectually stimulating 
  • It has a reputation for being a subject that smart people do

See the comments on the post What attracts smart and curious young people to physics?

But what of career prospects?

In an answer to the Quora question What is it like to major in physics? PhD physicist Joshua Parks wrote:

It may not be too crazy to claim that as far as career options go, physics majors may be much more like English or other humanities majors (who often make career choices unrelated to their study) than their science and engineering counterparts.

At Physics Forums, ParticleGrl wrote

If you are an engineer, you can almost certainly get a job in a technical field right out of college. Physics majors, on the other hand, end up all over the place (insurance, finance, teaching high school, programming, etc). 

We discuss some career paths for physics majors below.

Summary

  • The primary reason to major in physics (outside of intrinsic interest) is as a prerequisite to a physics PhD or as background for teaching high school physics.
  • Over 50% of those who get PhDs in physics don't become physicists, often because of difficulty finding jobs.
  • Physics majors are able to get jobs in other quantitative fields, but often with more difficulty than they would have had they majored in those fields.

continue reading »

How can Cognito Mentoring do the most good?

11 JonahSinick 23 March 2014 06:06PM

In late December 2013, I announced that Vipul Naik and I had launched Cognito Mentoring, an advising service for intellectually curious young people.

Vipul Naik and I are aspiring effective altruists, and we started Cognito with a view toward doing the most good. We've learned a lot over the past 3 months, and are working on planning what to do next. We'd be very grateful for any feedback on our current thinking, which I've described below.

continue reading »

Bostrom versus Transcendence

10 Stuart_Armstrong 18 April 2014 08:31AM

A summary and broad points of agreement and disagreement with Cal Newport's book on high school extracurriculars

10 VipulNaik 08 April 2014 01:55AM

Cal Newport (personal website, Wikipedia page) is a moderately well-known author of four books as well as a computer science researcher. I have read two of his four books: How To Become a Straight-A Student: The Unconventional Strategies Real College Students Use to Score High While Studying Less and How to Be a High School Superstar: A Revolutionary Plan to Get into College by Standing Out (Without Burning Out). I'm particularly interested in his book on becoming a high school superstar. My interest arises as part of trying to figure out how people can better use their extracurricular activities to have more fun, learn more, and create more value for the world. As Jonah recently pointed out, choosing high school extracurricular activities could in principle have huge social value in addition to the private benefits. And as far as I know, Cal Newport is the only person who has given systematic advice on high school extracurriculars to a broad audience. He's been referenced many times on Less Wrong.

In this post, I'll briefly discuss his suggestions in the latter book and some of my broad philosophical disagreements. I'm eager to know about the experiences of people who've tried to implement Newport's advice (particularly that pertaining to extracurriculars, but also any of his other advice). First impressions of people who click through the links and read about Newport right now would also be appreciated. I intend to write on some of these issues in more detail in the coming days, though those later posts of mine will not be focused solely on what Newport has to say.

You might also be interested in the comments on this Facebook post of mine discussing Newport's ideas.

A quick summary of Newport's views

Newport's book advises high school students to pick an extracurricular activity and shine at it to the level that it impresses admissions officers (and others). He offers a three-step plan for high schoolers:

  1. The Law of Underscheduling: Pack your schedule with free time. Use this free time to explore: In particular, avoid being involved in too many activities, whether academic or extracurricular. Use your free time to read and learn about a wide range of stuff.
  2. The Law of Focus: Master one serious interest. Don't waste time on unrelated activities: Newport cites the superstar effect and the Matthew effect to bolster his case for focusing on one activity after you've explored a reasonable amount.
  3. The Law of Innovation: Pursue accomplishments that are hard to explain, not hard to do: Newport talks of a "failed-simulation effect" whereby things seem impressive if the people who hear about them can't easily imagine a standard path to them. He then offers some more guidelines both on how to innovate and on how to make one's innovation seem impressive.

Newport is targeting high school students who want to get into their dream college. He's trying to get them to stop doing boring, depressing activities and instead do fun, creative, and useful stuff that both improves their short-run life (by making them more relaxed and less stressed) and impresses admissions officers.

Broad areas of agreement

  1. I think Newport is right to suggest that it doesn't make sense to devote too much energy to boring schoolwork or extracurriculars that one is doing just because one is "supposed" to do them. I think he's right that his approach is both less stressful and less wasteful of human resources and effort. And it is more likely, in expectation, to build human capital and produce direct value for society.
  2. Newport is correct to emphasize the link between free time and being able to explore stuff, and his advice on how to explore can be quite helpful to high school students.
  3. Newport's ideas for how to focus on a particular interest, and how to rack up accomplishments in a particular area, seem broadly sound.
  4. When it comes to figuring out what impresses college admissions officers, Newport seems like he knows what he's talking about, although some of his examples make less sense than he thinks they do.

Broad philosophical differences

Before getting into the nuts and bolts of what I think Newport gets right and wrong, I want to talk of some broad differences between Newport (as he presents himself) and me. A few things I find somewhat jarring in Newport's writing:

  1. Newport seems very concerned with signaling quality to colleges. This is fine: that's what his target audience cares most about, and if getting into a good college is important, then signaling quality to colleges can be quite important. What I find somewhat off-putting is that he often confuses the signaling with the value of the activity itself, or at any rate fails to question whether some of the things he believes to be optimal from the signaling viewpoint could be counterproductive from the perspective of value creation (either personal or social). For instance, consider his observation of the existence of the failed-simulation effect. This points both in favor of picking things that are harder for other people to "see through" (rather than things that are straightforward but hard) and in favor of making what you did seem harder to replicate than it actually was. I see these as downsides of the failed-simulation effect, and sources of genuine conflict between choosing what creates the most value (personal or social) and what impresses others. Newport seems to sidestep such dilemmas.
  2. Newport doesn't adequately address the zero-sum context in which he is giving his advice. Top colleges have a limited number of places for students. If everybody successfully implemented Newport's advice, only a small fraction of them would be able to go to a top college. Note that I don't think Newport views his advice as zero-sum, and even if what I wrote above is correct, his advice could still be positive-sum in that it shifts people away from competing on stressful dimensions to doing activities that offer them more fun and learning and create more value. But again, the fact that he doesn't really address this issue head-on is a disappointment.
  3. Newport seems to oversystematize in ways that don't feel right to me. Even though I agree with aspects of the broad direction he is pushing people in, I feel he's seeing too many patterns that may not exist.
  4. In general, I feel that Newport doesn't go far enough. He operates within the standard set of constraints without questioning the logic of the enterprise or giving people a better understanding of the incentives of different actors in the system. He also doesn't provide adequate guidance on the self-calibration problem: in an extracurricular activity, one cannot rely on standard measures such as grades to track one's progress, and Newport doesn't adequately encourage people to figure out how to calibrate their learning in that setting.

I'm curious to know what readers' main areas of disagreement with Newport are, and/or whether my listed areas of disagreement make sense to readers.

Cross-posted to Quora and the Cognito Mentoring blog.

SHRDLU, understanding, anthropomorphisation and hindsight bias

10 Stuart_Armstrong 07 April 2014 09:59AM

EDIT: Since I didn't make it sufficiently clear, the point of this post was to illustrate how the GOFAI people could have got so much wrong and yet still be confident in their beliefs, by looking at what the results of one experiment - SHRDLU - must have felt like to those developers at the time. The post is partially to help avoid hindsight bias: it was not obvious that they were going wrong at the time.


SHRDLU was an early natural language understanding computer program, developed by Terry Winograd at MIT in 1968–1970. It was a program that moved objects in a simulated world and could respond to instructions on how to do so. It caused great optimism in AI research, giving the impression that a solution to natural language parsing and understanding was just around the corner. Symbolic manipulation seemed poised to finally deliver a proper AI.

Before dismissing this confidence as hopelessly naive (which it wasn't) and completely incorrect (which it was), take a look at some of the output that SHRDLU produced, when instructed by someone to act within its simulated world:

continue reading »

Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]

9 Dr_Manhattan 21 April 2014 04:55PM

http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html

Very surprised no one has linked to this yet:

TL;DR: AI is a very underfunded existential risk.

Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm greatly pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm to the cause. Also too bad this was Huffington vs something more respectable. With some thought I think we could've gotten the list to be more inclusive and found a better publication; still I think this is pretty huge.


European Community Weekend in Berlin Impressions Thread

9 Gunnar_Zarncke 13 April 2014 08:33PM

The European Community Weekend in Berlin is over and was a full success.

This is not a report on the event, but a place where you can, e.g., comment on the event, link to photos, or share whatever else you want.

I'm not the organizer of the meetup, but I was there, and for me it was a great event: I met many energetic, compassionate, and in general awesome people. Great presentations and workshops. And a very positive atmosphere.

Cheers to all participants!

Gunnar

PS: I gather that the organizers will upload the presentations and perhaps publish some report of the results later. Those may or may not be linked from this post.


Regret, Hindsight Bias and First-Person Experience

8 Stabilizer 20 April 2014 02:10AM

Here is an experience that I often have: I'm walking down the street, perfectly content, and all of a sudden some memory pops into my stream of consciousness. The memory triggers some past circumstance where I did not act completely admirably. Immediately following this, there is often regret. Regret of the form: "I should've studied harder for that class", "I should've researched my options better before choosing my college", "I should've asked that girl out", "I shouldn't have been such an asshole to her", and so on. So this is regret of the kind: "Well, of course, I should've done X. But I did Y. And now here I am."

This is classic hindsight bias. Looking back into the past, it seems clear what my course of action should've been. But it wasn't at all that clear in the past.

So, I've come up with a technique to attenuate this kind of hindsight-bias driven regret.

First of all, tune in to your current experience. What is it like to be here, right here and right now, doing the things you're doing? Start zooming out: think about the future and what you're going to be doing tomorrow, next week, next month, next year, 5 years later. Is it at all clear what choices you should make? Sure, you have some hints: take care of your health, save money, maybe work harder at your job. But nothing very specific. Tune in to the difficulties of carrying out even definitely good things. You told yourself that you'd definitely go running today, but you didn't. In first-person mode, it is really hard to know what to do, to know how to do it, and to actually do it.

Now, think back to the person you were in the past, when you made the choices that you're regretting. Try to imagine the particular place and time when you made that choice. Try to feel into what it was like. Try to color in the details: the ambient lighting of the room, the clothes you and others were wearing, the sounds and the smells. Try to feel into what was going on in your mind. Usually it turns out that you were confused and pulled in many different directions and, all said and done, you had to make a choice and you made one.

Now realize that back then you were facing exactly the kinds of uncertainties and confusions you are feeling now. In the first-person view there are no certainties; there are only half-baked ideas, hunches, gut feelings, mish-mash theories floating in your head, fragments of things you read and heard in different places.

Now think back to the regrettable decision you made. Is it fair to hold that decision against yourself with such moral force?

The failed simulation effect and its implications for the optimization of extracurricular activities

8 VipulNaik 08 April 2014 07:27PM

Cal Newport's book How to Be a High School Superstar: A Revolutionary Plan to Get into College by Standing Out (Without Burning Out) (which I blogged about recently) discusses a concept that Newport calls the failed simulation effect. Newport:

The Failed-Simulation-Effect Hypothesis: If you cannot mentally simulate the steps taken by a student to reach an accomplishment, you will experience a feeling of profound impressiveness.

Newport, Cal (2010-07-20). How to Be a High School Superstar: A Revolutionary Plan to Get into College by Standing Out (Without Burning Out) (p. 182). Crown Publishing Group. Kindle Edition.

Newport gives the following example in his book:

Playing in a rock band doesn’t generate the Failed-Simulation Effect. You can easily simulate the steps required for that accomplishment: buy an instrument, take lessons, practice, brood, and so on. There’s no mystery. By contrast, publishing a bestselling book at the age of sixteen defies simulation. “How does a teenager get a book deal?” you ask in wonderment. This failure to simulate generates a sense of awed respect: “He must be something special.”

Newport, Cal (2010-07-20). How to Be a High School Superstar: A Revolutionary Plan to Get into College by Standing Out (Without Burning Out) (pp. 182-183). Crown Publishing Group. Kindle Edition.

On the basis of this insight, Newport's bottom line for people looking for accomplishments in the high school extracurricular realm is:

Pursue accomplishments that are hard to explain, not hard to do.

My impression is that Newport is broadly correct as far as college admissions advice goes: activities that are hard to simulate seem more impressive, and therefore improve one's chances at admission (ceteris paribus). But impressing admissions committees isn't the only goal in life. In this post, I explore the question: how well aligned is this advice with the other things that matter, namely, direct personal value (in the form of consumption and human capital) and social value?

Understanding the question

I'm interested in exploring how closely the following are related for a given accomplishment (with the exception of (1) and (2), the items below measure the value created in some respect; see also this page):

  1. Hardness: The amount of skill, effort, or character strength needed.
  2. Impressiveness (primarily in the context of signaling quality to colleges): The degree to which people (particularly college admissions officers) are impressed.
  3. Human capital: The useful knowledge and skills acquired in the context of the accomplishment.
  4. Social value: The benefit to the world from one's accomplishment. Note that social value could be direct or indirect, mediated through later accomplishments that rely on the human capital and networking gains from the activity.
  5. Consumption: The fun or excitement of the accomplishment.
  6. Networking: Getting connected to people in the course of the accomplishment.

Newport's insight is that hardness and impressiveness aren't as closely correlated as we might want to believe. But how closely is impressiveness related to items (3)-(6)? That's what we want to explore here. But first, a bit about how hardness relates to the other items. For brevity, we will not discuss (5) and (6) further in this post; instead, we will concentrate on how (1) and (2) relate to (3) and (4).

Where do hardness and impressiveness differ?

Hardness and impressiveness aren't completely uncorrelated. A few moments of introspection should reveal that that's the case: if impressive things were easy to do, many people would be doing them, and they would cease to be impressive. But Newport's central insight is that hardness and impressiveness aren't as correlated as they seem on the surface. There are things that are quite hard to do but don't seem impressive because they are mainstream and follow a standard path. There are other things that seem more impressive than their actual hardness warrants.

Consider a 2 × 2 matrix:

 | Not impressive | Impressive
Not hard | Not hard, not impressive (e.g., watching TV) | Impressive but not hard (somewhat innovative activities, the sort that Newport wants to encourage more of)
Hard | Hard but not impressive (e.g., being ranked third in high school academics, learning a difficult musical instrument) | Hard and impressive (e.g., becoming a really, really good musician, getting a medal in a national math or sports contest)

Note that the "hard but not impressive" characterization is relative. Being ranked third in high school academics is impressive. But it's a lot less impressive to elite colleges (the colleges that Newport's audience wants to get into) relative to the amount of effort it takes to achieve. Similarly, learning a difficult musical instrument is somewhat impressive, but not as impressive as it is hard.

We restrict attention in this post to hard activities that people seriously consider doing, rather than random hard stuff people may do for dares or bets (like staying up for 100 hours at a stretch).

Newport wants to shift people from the "not impressive" column to the "impressive" column, and notes that there are plenty of activities in the top right quadrant.

What qualitative attributes characterize activities in the top right quadrant, and their very opposite, namely, activities in the bottom left quadrant? Some observations (based on Newport):

  1. Standard versus nonstandard: Activities that a lot of people are already doing don't seem that impressive, even if they are hard. And some relatively hard activities are a standard part of people's academic and extracurricular experience. Learning AP Calculus BC and writing a rudimentary mobile app may be of roughly equal hardness. But a lot of people do the former since it is part of the standard path. Learning how to play the violin may be about as hard as doing research in a marine biology lab. But the former is a relatively standard extracurricular activity that many people do because it's what they are supposed to do, or because their parents force them to do it.
  2. Outward-facing: Things that serve larger numbers of people seem impressive. Mastering a subject oneself is less impressive than doing something that reaches out to many people. But this could have more to do with the "Convincing people" point I make below.
  3. Convincing people: Activities that involve changing other people's minds seem prima facie more impressive. Learning the violin doesn't require convincing anybody of anything. You just sit down and learn (or take lessons). Getting somebody to publish your book, on the other hand, requires convincing a publisher that your book is worth publishing. And getting people to buy the book requires convincing buyers that the book is worth buying. Creating an online community or successful marketing/lobbying for a nonprofit both have the flavor of convincing people. It makes sense that having convinced people is a convenient indicator of having done something impressive. In a sense, the evaluation of impressiveness has been outsourced to the other people already convinced.
  4. Discrete original projects: I'm not too sure of this, but people do seem to be somewhat biased in favor of discrete projects with clearly identified names or a distinct identity. "I created a popular website with 1000 pages of information about topic X" sounds more impressive than "I wrote 1000 Quora answers about X," even if the latter activity generates more pageviews in the long run.

Hardness, impressiveness, and human capital

Now that we've identified some general points of divergence between hardness and impressiveness, we can consider the question: how do hardness and impressiveness differ in terms of the extent to which they correlate with human capital acquisition (i.e., the acquisition of knowledge and skills that have long-term utility)? As before, we restrict attention to hard activities that people seriously consider doing, rather than random hard stuff people may do for dares or bets (like staying up for 100 hours at a stretch). Let's look at the four potential sources of divergence and compare based on those:

  1. Standard versus nonstandard: The "hard but not impressive" cluster comprises the more standard activities, whereas the "not hard but impressive" cluster comprises more nonstandard activities. So this consideration boils the question down to: do standard activities produce more human capital than nonstandard activities? My answer is (very guardedly) mildly in favor of standard activities. Although much of school learning is wasteful, the standard subjects still have the benefit of several years of curriculum development that provides a certain bare minimum of quality. Nonstandard stuff exhibits higher variance. I suspect that the typical nonstandard activity is worse for building human capital than the typical standard activity. But I also think there's more scope for doing really well on the human capital end by picking a really good nonstandard activity. Another consideration in favor of nonstandard is that there's a large supply of people who can do the standard stuff, so that the marginal value of adding another person with standard skills is low, whereas the nonstandard stuff could involve building rare, specialized skills.
  2. Outward-facing: My guess is that at the high school level, the most high-value activities (from the human capital perspective) tend to involve learning about the world (not limited to what's in school syllabi) rather than creating products. This isn't a hard-and-fast rule, but it does point in the direction of impressive activities being less valuable from the human capital perspective than hard activities.
  3. Convincing people: This argues in favor of impressiveness. The skill of convincing people is an important one, and the act of convincing people also requires one to do a better job overall with presentation and background knowledge. This is good preparation for later life, where one needs to often suggest new things and convince people of them.
  4. Discrete original projects: Impressiveness favors discrete original projects. I think this is an argument in favor of impressiveness being better at building human capital, but a very weak one. People acquire valuable skills in the process of creating their own original projects that they wouldn't when contributing to existing projects (for instance, creating your own website means you have to learn about website creation and getting traffic). On the other hand, participating in existing projects makes it easier to calibrate your learning, get feedback, and improve.

Hardness, impressiveness, and (direct) social value

How do the "impressive but not hard" activities compare with the "hard but not impressive" activities in terms of the direct value they produce for society? We'll do a point-by-point comparison similar to that for human capital, but first, a little digression.

Although many hard activities are not valuable, it is almost always the case that valuable activities are at least somewhat hard. The logic is similar to the logic for hard and impressive activities described earlier in the post. Namely, if valuable activities were easy to do, they would already have been done to the extent where they either became hard at the margin or lost value at the margin.

PayPal co-founder Max Levchin credits this insight to co-founder Peter Thiel (see here). Levchin recounts that, back when PayPal was in its infancy, he was enamored by the idea of using elliptic curve cryptography to speed up some aspects of PayPal's secure transactions. Elliptic curve cryptography uses some pretty cool math and offers interesting implementation challenges. But it turned out that the speedup offered wasn't really helpful with the things that PayPal needed to do. Levchin learned from Thiel that hardness isn't the source of value. On the other hand, things that are valuable are almost always bound to be hard, because if they were easy, they'd have already been accomplished. Indeed, Levchin's new company, named Hard Valuable Fun, builds on this insight.

As before, we restrict attention to hard activities that people seriously consider doing, rather than random hard stuff people may do for dares or bets (like staying up for 100 hours at a stretch). Now, let's compare hard and impressive activities in terms of their social value:

  1. Standard versus nonstandard: The "hard but not impressive" cluster comprises the more standard activities, whereas the "not hard but impressive" cluster comprises more nonstandard activities. So this consideration boils the question down to: do standard activities produce more direct social value than nonstandard activities? I think the general answer is a resounding no. Standard activities are largely focused on building human capital or signaling quality (to colleges and others), rather than on the creation of direct social value. This is true even for standard extracurriculars, such as learning musical instruments. Even the standard extracurriculars billed as socially useful, such as volunteer work by US students in Colombia, often produce negligible social value (see Jonah's post on volunteering for a more in-depth discussion). Note that the indirect social value created through human capital acquisition might still be huge for some activities that build human capital, but that is not what we're trying to assess in this part of the post. Nonstandard activities exhibit higher variance, but could at least in principle be chosen for higher social value. Another consideration in favor of nonstandard is that there's a large supply of people who can do the standard stuff, so that the marginal value of getting something nonstandard accomplished may be higher on account of more low-hanging fruit.
  2. Outward-facing: Impressive activities tend to be outward-facing. And creating direct social value generally requires being at least somewhat outward-facing. So, this consideration points in favor of impressiveness over hardness.
  3. Convincing people: This argues in favor of impressiveness. Creating positive change usually requires convincing people at some level. This could be direct suasion, or it could be attracting people to visit one's website or buy one's book or use one's products in some other capacity.
  4. Discrete original projects: Impressiveness favors discrete original projects. I think this is an argument in favor of impressiveness being better at creating direct social value, but a very weak one, and there are many counterexamples. Creating your own website may seem more impressive than just writing a bunch of Quora answers, but the latter may get read a lot more.

Below, I summarize what I've said about hardness, impressiveness, human capital, and direct social value:

Consideration | Human capital: points in favor of hardness or impressiveness? | Direct social value: points in favor of hardness or impressiveness?
Standard versus nonstandard | Hardness (but weak) | Impressiveness (but weak)
Outward-facing | Hardness (but weak) | Impressiveness
Convincing people | Impressiveness | Impressiveness
Discrete original projects | Impressiveness (but very weak) | Impressiveness (but very weak)

Overall, it seems that a shift towards impressiveness would perform better in terms of direct social value and slightly worse in terms of human capital. But the variation between different choices of activities overwhelms the general comparison of hardness and impressiveness. In other words, there are probably a lot of activities within the impressive category (at varying levels of hardness) that perform well on the human capital and direct social value dimensions. One just needs to know to look for them.

Any thoughts on the above would be appreciated.

PS: I'm planning to do another post (or posts) on how people in high school and early college, or others in a similar age group, can select side projects and execute them well.

LW Australia Weekend Retreat

8 Ruby 07 April 2014 09:45AM

EDIT: The Mega-Meetup has been scheduled! http://lesswrong.com/meetups/z8  Registration: http://goo.gl/425hyo


The organisers of Less Wrong Melbourne, Less Wrong Sydney, and Less Wrong Canberra had a meeting last night. We're pretty stoked to announce plans for a LESS WRONG AUSTRALIA MEGA-EVENT

WHAT: Weekend Retreat
WHERE: Near Sydney
WHEN: May 9-11, Friday night - Sunday night
COST: $200-$250*

*Food and accommodation.

This is for those who like improving their rationality and effectiveness, being surrounded by others who do so too, making new friends, socialising, enjoying the outdoors, and adventure!

Anticipate sessions on rationality skills, revision and teaching of CFAR modules, prediction markets, nature walks, lightning talks, board games, tasty food, and all round enlightenment.

If you're in Australia but are yet to get involved with a local Less Wrong meetup, now is a great time to start.

Register your interest here

We're aiming to have an incredible program for the weekend and would immensely appreciate anyone sharing their ideas, resources, experience or advice for making an utterly awesome rationality weekend retreat.


Unfriendly Natural Intelligence

7 Gunnar_Zarncke 15 April 2014 05:05AM

Related to: UFAI, Paperclip maximizer, Reason as memetic immune disorder

A discussion with Stefan (cheers, didn't get your email, please message me) during the European Community Weekend Berlin fleshed out an idea I had toyed around with for some time:

If a UFAI can wreak havoc by driving simple goals to extremes, then driving human desires to extremes should also cause problems. And we should already see this.

Actually we do. 

We know that just following our instincts on eating (sugar, fat) is unhealthy. We know that stimulating our pleasure centers more or less directly (drugs) is dangerous. We know that playing certain games can lead to comparable addiction. And the recognition of this has led to a large number of more or less fine-tuned anti-memes, e.g., dieting, early drug prevention, helplines. These memes, steering us away from such behaviors, were selected for because they provided aggregate benefits to the (members of) social (sub)systems they are present in.

Many of these memes have become so self-evident that we don't recognize them as such. Some are essential parts of highly complex social systems. What is the general pattern? Did we catch all the critical cases? Are the existing memes well-suited for the task? How are they related? Many are probably deeply woven into our culture and traditions.

Did we miss any anti-memes? 

This last question really is at the core of this post. I think we lack some necessary memes to keep new exploitations of our desires in check. Some new exploitations result from our society (a) having developed the capacity to exploit our desires and (b) having the scientific knowledge of how to do so.

continue reading »

How relevant are the lessons from Megamistakes to forecasting today?

7 VipulNaik 12 April 2014 04:53AM

Disclaimer: This post contains unvetted off-the-cuff thoughts. I've included quotes from the book in a separate quote dump post to prevent this post from getting too long. Read the intro and the TL;DR if you want a quick idea of what I'm saying.

As part of a review of the track record of forecasting and the sorts of models used for it, I read the book Megamistakes: Forecasting and the Myth of Rapid Technological Change (1989) by Steven P. Schnaars (here's a review of the book by the Los Angeles Times from back when it was published). I conducted my review in connection with contract work for the Machine Intelligence Research Institute, but the views expressed here are solely mine and have not been vetted by MIRI. Note that this post is not a full review of the book. Instead, it simply discusses some aspects of the book I found relevant.

The book is a critique of past forecasting efforts. The author identifies many problems with these forecasting efforts, and offers suggestions for improvement. But the book was written in 1989, when the Internet was just starting out and the World Wide Web didn't exist. Thus, the book's suggestions and criticisms may be outdated in one or more of these three ways:

  • Some of the suggestions in the book were mistaken, and this has become clearer based on evidence gathered since the publication of the book: I don't think the book was categorically mistaken on any count. The author was careful to hedge appropriately in cases where the evidence wasn't very strongly in a particular direction. But point #1 below is in the direction of the author not giving appropriate weight to a particular aspect of his analysis.
  • Some of the suggestions or criticisms in the book don't apply today because the sorts of predictions being made today are of a different nature: We'll argue this to be the case in #2 below.
  • Some of the suggestions in the book are already implemented routinely by forecasters today, so they don't make sense as criticisms even though they continue to be valid guidelines. We'll argue this to be the case in #3 below.

I haven't been able to locate any recent work of the author where he assesses his own work in light of new evidence; if any readers can find such material, please link to it in the comments.

TL;DR

  1. A number of the technologies that Schnaars notes were predicted to happen before 1989 and didn't, have in fact happened since then. This doesn't contradict anything Schnaars wrote. In fact, it agrees with many of his claims. But it does seem to be connotatively different from the message that Schnaars seems keen on pushing in the book. It seems that the main issue with many predictions is one of timing, rather than of fundamental flaws in the vision of the future being suggested. For instance, in the realm of concerns about unfriendly AI, it may be that the danger of AGI will be imminent in 2145 AD rather than 2045 AD, but the basic concerns espoused by Yudkowsky could still be right.
  2. Schnaars does note that trends related to computing are the exceptions to technological forecasting being way too optimistic: Computing-related trends seem to him to often be right or only modestly optimistic. In 1989, the exceptional nature of computing may have seemed like only a minor point in a book about many other failed technological forecasts. In 2014, the point is anything but minor. To the extent that there are systematic reasons for computing being different from the other technological realms where Schnaars notes a bad track record of forecasting, his critique isn't too relevant. The one trend that grows exponentially, in line with bullish expectations, will come to dominate the rest eventually. And to the extent that software eats the world, it could spill over into other trends as well.
  3. A lot of the suggestions offered by Schnaars (particularly suggestions on diversification, field testing, and collecting feedback) are routinely implemented by many top companies today, and even more so by the top technology companies. This isn't necessarily because they read him. It's probably largely because it's a lot easier to implement those suggestions in today's world with the Internet.

#1: The criticism of "technological wonderland": it's all about timing, honey!

Schnaars is critical of forecasters for being too enamored with the potential of a technology and replacing hard-nosed realism with wishful thinking based on what they'd like the technology to do. Two important criticisms he makes in this regard are:

  • Forecasters often naively extrapolate price-performance curves, ignoring both economic and technological hurdles.
  • Forecasters often focus more on what is possible rather than what people actually want as consumers. They ignore the fact that new product ideas that sound cool may not deliver enough value to end users to be worth the price tag.

The criticism remains topical today. Futurists today often extrapolate trends such as Moore's law far into the future, to the point where there's considerable uncertainty both surrounding the technological feasibility and the economic incentives. A notable example here is Ray Kurzweil, well-known futurist and author of The Singularity is Near. Kurzweil's prediction record is decidedly mixed. An earlier post of mine included a lengthy discussion of the importance of economic incentives in facilitating technological improvement. I'd drafted that post before reading Megamistakes, and the points I make there aren't too similar to the specific points in the book, but it is in the same general direction.

Schnaars notes, but in my view gives insufficient emphasis to, the following point: many of the predictions he grades aren't fundamentally misguided at a qualitative level. They're just wrong on timing. In fact, a number of them have been realized in the 25 years since. Some others may be realized over the next 25 years, and yet more may be realized over the next 100 years. And some may be realized centuries from now. What the predictions got wrong was timing, in the following two senses:

  • Due to naive extrapolation of price-performance curves, forecasters underestimate the time needed to attain specific price-performance milestones. For instance, they might think that you'd get a certain kind of technological product for $300 by 1985, but it might actually come to market at that price only in 2005. (A numerical sketch of this effect follows this list.)
  • Because of their own obsession with technology, forecasters overestimate the reservation prices (i.e., the maximum price at which consumers are willing to buy a technological product). Thus, even when a particular price-performance milestone is attained, it fails to lead to the widespread use of the technology that forecasters had estimated.
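
To make the first point concrete, here is a small numerical sketch of how a modest error in an assumed improvement rate compounds into a timing error of decades. The rates and target below are illustrative assumptions of mine, not numbers from Schnaars.

```python
import math

# Suppose a product needs a 1000x cost-performance improvement to reach a
# viable price point. A forecaster assumes 60% improvement per year;
# reality delivers 20% per year. (Illustrative numbers only.)
target = 1000
assumed_rate = 1.60
actual_rate = 1.20

def years_to_target(annual_rate):
    return math.log(target) / math.log(annual_rate)

print(f"forecast: {years_to_target(assumed_rate):.0f} years")  # forecast: 15 years
print(f"actual:   {years_to_target(actual_rate):.0f} years")   # actual:   38 years

# The milestone still arrives (the vision was qualitatively right),
# but more than two decades later than predicted.
```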

The gravity you assign to this error depends heavily on the purpose of the forecast. If it's for a company deciding whether to invest a few million dollars in research and development, then being off by a couple of decades is a ruinous proposition. If you're trying to paint a picture of the long term future, on the other hand, a few decades here and there need not be a big deal. Schnaars seems to primarily be addressing the first category.

Schnaars makes the point about timing in more detail here (pp. 120-121) (emphasis mine):

A review of past forecasts for video recorders and microwave ovens illustrates the length of time required for even the most successful innovations to diffuse through a mass market. It also refutes the argument that we live in times of ever faster change. Both products were introduced into commercial markets shortly after World War II. Both took more than twenty years to catch fire in a large market. The revolution was characterized more by a series of fits and starts than by a smooth unfolding pattern. The ways in which they achieved success suggests something other than rapid technological change. Progress was slow, erratic, and never assured. And this applies to two of the most successful innovations of the past few decades! The record for less successful innovations is even less impressive.

The path to success for each of those products was paved with a mix of expected and unexpected events. First, it was widely known that to be successful it was necessary to get costs down. But as costs fell, other factors came into play. Microwave ovens looked as if they were going to take off in the late 1960s, when consumer advocates noted that the ovens leaked radiation when dropped from great heights. The media dropped the "great heights" part of the research, and consumers surmised that they would be purchasing a very dangerous product. Consumers decided to cook with heat for a few years longer. Similarly, the success of video recorders is usually attributed to Sony's entry with Betamax in the mid-1970s. But market entries went on for years with the video recorder. Various interpretations of the product were introduced onto the market throughout the 1960s and 1970s. A review of these entries clearly reveals that the road to success for the VCR was far more rocky than the forecasts implied. Even for successful innovations, which are exceptions to begin with, the timing of market success and the broad path the product will follow are often obscured from view.

One example where Schnaars notes that timing is the main issue is that of fax machines (full quote in the quote dump).

Here are some technologies that Schnaars notes as failed predictions, but that have, in the intervening years (1989-2014), emerged in roughly the predicted form. Full quotes from the book are in the quote dump.

  • Computerphones (now implemented as smartphones, though the original vision was of landline phones rather than mobile phones).
  • Picture phones, specifically the AT&T PicturePhone (now implemented as smartphones and also as computers with built-in webcams, though note again that the original vision involved landline phones). See Wikipedia for more.
  • Videotex (an early offering whose functionality is now included in GUI-based browsers accessing the World Wide Web and other Internet services).

An interesting general question that this raises, and that I don't have an offhand answer to, is whether there is a tradeoff between having a clear qualitative imagination about what a technology might look like once matured, and having a realistic sense of what will happen in the next few years. If that's the case, the next question would be what sort of steps the starry-eyed futurist types can take to integrate realistic timing into their vision, and/or how people with a realistic sense of timing can acquire the skill of imagining the future without jeopardizing their realism about the short term.

#2: Computing: the exception that eviscerates the rule?

Schnaars acknowledges computing as the exception (pp. 123-124) (emphasis mine, longer version of quote in the quote dump):

Most growth market forecasts, especially those for technological products, are grossly optimistic. The only industry where such dazzling predictions have consistently come to fruition is computers. The technological advances in this industry and the expansion of the market have been nothing short of phenomenal. The computer industry is one of those rare instances where optimism in forecasting seems to have paid off. Even some of the most boastful predictions have come true. In other industries, such optimistic forecasts would have led to horrendous errors. In computers they came to pass.

[...]

The most fascinating aspect of those predictions is that in almost any other industry they would have turned out to be far too optimistic. Only in the computer industry did perpetual boasting turn out to be accurate forecasting, until the slowdown of the mid-1980s.

The tremendous successes in the computer industry illustrate an important point about growth market forecasting. Accurate forecasts are less dependent on the rate of change than on the consistency and direction of change. Change has been rampant in computers; but it has moved the industry consistently upward. Technological advances have reduced costs, improved performance, and, as a result, expanded the market. In few other industries have prices declined so rapidly, opening up larger and larger markets, for decades. Consequently, even the most optimistic predictions of market growth have been largely correct. In many slower growth industries, change has been slower but has served to whipsaw firms in the industry rather than push the market forward. In growth market forecasting, rapid change in one direction is preferable to smaller erratic changes.

This is about the full extent to which Schnaars discusses the case of computing. His failure to discuss it more deeply seems like a curious omission. In particular, I would have been curious to see whether he had an explanation for why computing turned out so different, and whether this was due to the fundamental nature of computing or just to a lucky historical accident. Further, to the extent that Schnaars believed that computing was fundamentally different, how did he fail to see the long-run implication that computing would eventually become a dominating factor in all forms of technological progress?

So what makes computing different? I don't have a strong view, but I think that the general-purpose nature and wide applicability of computing may have been critical. A diverse range of companies and organizations knew that they stood to benefit from the improvement of computing technology. This gave them greater incentives to pool and share larger amounts of resources. Radical predictions, such as Moore's law, were given the status of guidelines for the industry. Moreover, improvements in computing technology affected the backend costs of development, and the new technologies did not have to be sold to end consumers. So end consumers' reluctance to change habits was not a bottleneck to computing progress.

Contrast this with a narrower technology such as picture phones. Picture phones were a separate technology developed by a phone company, whose success heavily depended on what that company's consumers wanted. Whether AT&T succeeded or failed with the picture phone, most other companies and organizations didn't care.

Indeed, when the modern equivalents of picture phones, computerphones, and Videotex finally took off, they did so as small addenda to a thriving low-cost infrastructure of general-purpose computing.

The lessons from Megamistakes suggest that converting the technological fruits of advances in computing into products that consumers use can be a lot trickier and more erratic than simply making those advances.

I also think there's a strong possibility that the accuracy of computing forecasts may be declining, and that the problems that Schnaars outlines in his book (namely, consumers not finding the new technology useful) will start biting computing. For more, see my earlier post.
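Schnaars' point that forecast accuracy depends more on the consistency and direction of change than on its rate can be made concrete with a toy simulation. The sketch below is my own illustration, not from the book, and the growth numbers are hypothetical: a fixed optimistic forecast tracks an industry with rapid but consistent growth closely, while the same forecast goes badly wrong in an industry whose changes whipsaw in direction.

```python
import random

def forecast_error(growth_rates, optimistic_rate=0.25):
    """Relative error of a fixed optimistic compound-growth forecast
    against the actual compound growth implied by growth_rates."""
    actual = forecast = 1.0
    for r in growth_rates:
        actual *= 1 + r
        forecast *= 1 + optimistic_rate
    return abs(forecast - actual) / actual

random.seed(0)

# Industry A: rapid but consistent growth (computing-like).
consistent = [0.25 + random.uniform(-0.03, 0.03) for _ in range(10)]

# Industry B: slower, erratic change that reverses direction year to year.
erratic = [random.uniform(-0.15, 0.20) for _ in range(10)]

print("Relative error, consistent growth:", forecast_error(consistent))
print("Relative error, erratic change:   ", forecast_error(erratic))
```

On runs like this, the optimistic forecast is off by a few percent in the consistent industry and by multiples in the erratic one, matching Schnaars' observation that computing's boasters happened to be forecasting the one kind of market where boasting works.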

#3: Main suggestions already implemented nowadays?

Some of the suggestions that Schnaars makes on the strategy front are listed in Chapter 11 (Strategic Alternatives to Forecasting) and include:

  1. Robust Strategies: If a firm cannot hope to ascertain what future it will face, it can develop a strategy that is resilient no matter which of many outcomes occurs (p. 163).
  2. Flexible Strategies: Another strategy for dealing with an uncertain future is to remain flexible until the future becomes clearer (p. 165).
  3. Multiple Coverage Strategies: Another alternative to forecasting growth markets is to pursue many projects simultaneously (p. 167).

I think that (2) and (3) in particular have increased a lot in the modern era, and (1) has too, though less obviously. This is particularly true in the software and Internet realm, where one can field-test many different experiments over the Internet. But it's also true for manufacturing, as better point-of-sale information and a supply chain that records information accurately at every stage allow for rapid changes to production processes (cf. just-in-time manufacturing). The example of clothing retailer Zara is illustrative: they measure fashion trends in real time and change their manufacturing choices in response. In his book Everything Is Obvious: *Once You Know the Answer, Duncan Watts uses the phrase "measure and react" for this sort of strategy.
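To make "measure and react" concrete, here is a minimal sketch of what one cycle of such a loop might look like in code. This is my own illustration; read_sales_by_style and set_production are hypothetical stand-ins for a retailer's point-of-sale feed and supply-chain controls.

```python
from collections import Counter

def measure_and_react(read_sales_by_style, set_production, total_capacity):
    """One cycle of a 'measure and react' strategy: rather than forecasting
    demand months ahead, observe recent sales and reallocate production
    toward whatever is actually selling."""
    recent_sales = Counter(read_sales_by_style())  # e.g. {"style_a": 120, ...}
    total_sold = sum(recent_sales.values()) or 1   # avoid division by zero
    for style, units_sold in recent_sales.items():
        demand_share = units_sold / total_sold
        set_production(style, round(demand_share * total_capacity))
```

The point is the loop structure: short production cycles keyed to observed demand substitute for exactly the long-range forecasts that Megamistakes shows to be unreliable.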

Other pieces of advice that Schnaars offers, that I think are being followed to a greater extent today than back in his time, partly facilitated by greater information flow and more opportunities for measurement, collaboration, and interaction:

  • Start Small: Indeed, a lot of innovation today is either done by startups or by big companies trying out small field tests of experimental products. It's very rarely the case that a company invests a huge amount in something before shipping or field-testing it. Facebook started out at Harvard in February 2004 and gradually ramped up to a few other universities, and only opened to the general public in September 2006 (see their timeline).
  • Take Lots of Tries: The large numbers of failed startups as well as shelved products in various "labs" of Google, Facebook, and other big companies are testimony to this approach.
  • Enter Big: Once something has been shown to work, the scaling up can be very rapid in today's world, due to rapid information flows. Facebook got to a billion users in under a decade of operation. When they roll out a new feature, they can start small, but once the evidence is in that it's working, they can roll it out to everybody within months.
  • Setting Standards of Uniformity: It's easier than before to publicly collaborate in an open fashion on standards. There are many successful examples that form the infrastructure of the Internet, most of them based on open source technologies. Some recent examples of successful collaborative efforts include Schema.org (between search engines), OpenID (between major Internet email ID providers and other identity providers such as Facebook), Internet.org (between Facebook and cellphone manufacturing companies), and the Open Compute Project.
  • Developing the Necessary Infrastructure: Big data companies preemptively get new data center space before the need for it starts kicking in. Data center space is particularly nice because server power and data storage are needed for practically all their operations, and therefore are agnostic to what specific next steps the companies will take. This fits in with the "Flexible Strategy" idea.
  • Ensuring a Supply of Complementary Products: This isn't uniformly followed, but arguably the most successful companies have followed it. Google expanded into Maps, News, and email long before people were clamoring for them. They got into the phone operating system business with Android and the web browser business with Chrome. Facebook has been more focused on its core business of social networking, but it too has supported complementary initiatives such as Internet.org to boost global Internet connectivity.
  • Lowering Prices: Schnaars cites the example of Xerox, which sidestepped the problem of the high prices of its machines by leasing them instead of selling them. Something similar is done in the context of smartphones today.

Schnaars' closing piece of advice is (p. 183):

Assume that the Future Will Be Similar to the Present

Is this good advice, and are companies and organizations today following it? I think it's both good advice and bad advice. On the one hand, Google was able to succeed with GMail because they correctly forecast that disk space would soon be cheap enough to make GMail economical. In this case, it was their ability to see the future as different from the present that proved to be an asset. Similarly, Paul Graham describes good startup ideas as ones created by people who live in the future rather than the present.

At the same time, the best successes do assume that the future won't look physically too different from the present. And unless there is a strong argument in favor of a particular way in which the future will look different, planning based on the present might be the best one can hope for. GMail wasn't based on a fundamental rethinking of human behavior. It was based on the assumption that most things would remain similar, but that Internet connectivity and bandwidth would improve and disk space costs would fall. Both assumptions were well-grounded in the historical record of technology trends, and both were vindicated by history.
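The kind of reasoning behind the GMail bet reduces to simple trend extrapolation. Here is a sketch of that calculation; the starting cost and halving time are illustrative assumptions, not historical figures.

```python
def extrapolate_cost(cost_today, halving_time_years, years_ahead):
    """Extrapolate an exponentially falling cost curve, the kind of
    well-grounded trend assumption the GMail launch rested on."""
    return cost_today * 0.5 ** (years_ahead / halving_time_years)

# Illustrative numbers only: suppose storage costs $1.00/GB today and
# has historically halved roughly every two years.
for years in range(0, 9, 2):
    cost = extrapolate_cost(1.0, 2.0, years)
    print("{} years out: ${:.3f}/GB".format(years, cost))
```

The planning assumption here is the continuation of a trend, not a qualitative change in behavior, which is why it still counts as "assuming the future will be similar to the present" in the sense that matters.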

Thanks to Luke Muehlhauser (MIRI director) for recommending the book and to Jonah Sinick for sending me his notes on the book. Neither of them has vetted this post.

Quote dump

To keep the main post short, I'm publishing a dump of relevant quotes from the book separately, in a quote dump post.

Thermodynamics of Intelligence and Cognitive Enhancement

7 CasioTheSane 03 April 2014 11:17PM

Introduction

Brain energy is often confused with motivation, but these are two distinct phenomena. Brain energy is the actual metabolic energy available to the neurons, in the form of adenosine triphosphate (ATP) molecules. ATP is the "energy currency" of the cell, and is produced primarily by oxidative metabolism of energy from food. High motivation increases the use of this energy, but in the absence of sufficient metabolic capacity it eventually results in stress, depression, and burnout as seen in manic depression. Most attempts at cognitive enhancement only address the motivation side of the equation.

The “smart drug” culture has generally been thinking pharmaceutically rather than biologically. Behind that pharmaceutical orientation there is sometimes the idea that the individual just isn't trying hard enough, or doesn't have quite the right genes to excel mentally.

-Ray Peat, PhD

Cellular Thermodynamics

Any simple major enhancement to human intelligence is a net evolutionary disadvantage.

-Eliezer Yudkowsky (Algernon’s Law)

I propose that this constraint is imposed by the energy cost of intelligence. The conventional textbook view of neurology suggests that much of the brain's energy is "wasted" in overcoming the constant diffusion of ions across the membranes of neurons that aren't actively in use. This is necessary to keep the neurons in a 'ready state' to fire when called upon.

Why haven't we evolved some mechanism to control this massive waste of energy?

The Association-Induction hypothesis formulated by Gilbert Ling is an alternate view of cell function, which suggests a distinct functional role for energy within the cell. I won't review it in detail here, but you can find an easy-to-understand and comprehensive introduction to this hypothesis in the book "Cells, Gels and the Engines of Life" by Gerald H. Pollack (amazon link). This idea has a long history and considerable experimental evidence, too extensive to review in this article.

The Association-Induction hypothesis states that ion exclusion in the cell is maintained by the structural ordering of water within the cytoplasm, by an interaction between the cytoskeletal proteins, water molecules, and ATP. Energy (in the form of ATP) is used to unfold proteins, presenting a regular pattern of surface charges to cell water. This orders the cell water into a 'gel-like' phase which excludes specific ions, because their presence within the structure is energetically unfavorable. Other ions are selectively retained, because they are adsorbed to charged sites on protein surfaces. This structured state can be maintained with no additional energy. When a neuron fires, this organization collapses, which releases energy and performs work. The neuron uses significant energy only to restore this structured, low-entropy state after firing.

This figure (borrowed from Gilbert Ling) summarizes this phenomenon, showing a folded protein (on the left) and an unfolded protein creating a low-entropy gel (on the right).

To summarize, maintaining the low-entropy living state in a non-firing neuron requires little energy. This implies that the brain may already be very efficient, with nearly all energy used to function, grow, and adapt rather than to pump the same ions 'uphill' over and over.
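The contrast between the two views can be put in toy bookkeeping terms. The sketch below is purely illustrative and my own construction, not Ling's or Pollack's; the rates are hypothetical, unitless numbers chosen only to show the qualitative difference between paying continuously to hold the resting state and paying mainly per firing.

```python
def pump_model_energy(hours, pump_rate, firings, cost_per_restore):
    """Conventional view: energy is spent continuously to counteract ion
    diffusion, whether or not the neuron fires."""
    return hours * pump_rate + firings * cost_per_restore

def gel_model_energy(hours, hold_rate, firings, cost_per_restore):
    """Association-Induction view: the structured resting state is nearly
    free to hold; energy goes mainly to restoring it after each firing."""
    return hours * hold_rate + firings * cost_per_restore

# Hypothetical unitless rates: holding cost dominates in the pump model,
# per-firing restoration cost dominates in the gel model.
print(pump_model_energy(hours=24, pump_rate=10.0, firings=1000, cost_per_restore=0.1))
print(gel_model_energy(hours=24, hold_rate=0.1, firings=1000, cost_per_restore=0.1))
```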

Cost of Intelligence

To quote Eliezer Yudkowsky again, "the evolutionary reasons for this are so obvious as to be worth belaboring." Mammalian brains may already be nearly as efficient as their physics and structure allow, and any increase in intelligence comes with a corresponding increase in energy demand. Brain energy consumption appears correlated with intelligence across different mammals, and humans have unusually high energy requirements due to our intelligence and brain size.

Therefore if an organism is going to compete while having a greater intelligence, it must be in a situation where this extra intelligence offers a competitive advantage. Once intelligence is adequate to meet the demands of survival in a given environment, extra intelligence merely imposes unnecessary nutritional requirements.

These thermodynamic realities of intelligence lead to the following corollary to Algernon’s Law:

Any increase in intelligence implies a corresponding increase in brain energy consumption.

Potential Implications

What is called genius is the abundance of life and health.

-Henry David Thoreau

This idea can be applied to both evaluate nootropics, and to understand and treat cognitive problems. It's unlikely that any drug will increase intelligence without adverse effects, unless it also acts to increase energy availability in the brain. From this perspective, we can categorically exclude any nootropic approaches which fail to increase oxidative metabolism in the brain.

This idea shifts the search for nootropics from neurotransmitter like drugs that improve focus and motivation, to those compounds which regulate and support oxidative metabolism such as glucose, thyroid hormones, some steroid hormones, cholesterol, oxygen, carbon dioxide, and enzyme cofactors.

Why haven't we already found that these substances increase intelligence?

Deficiencies in all of these substances do reduce intelligence. Raising brain metabolism above normal healthy levels, however, should be expected to be a complex problem because of the interrelation between the molecules required to support metabolism:

If you increase oxidative metabolism, the demand for all raw materials of metabolism is correspondingly increased. Any single deficiency poses a bottleneck, and may result in the opposite of the intended result.

So this suggests a 'systems biology' approach to cognitive enhancement. It's necessary to consider how metabolism is regulated, and what substrates it requires. To raise intelligence in a safe and effective way, all of these substrates must have increased availability to the neuron, in appropriate ratios.

I am always leery of drawing analogies between brains and computers, but this approach to cognitive enhancement is very loosely analogous to over-clocking a CPU. Over-clocking requires raising both the clock rate and the energy availability (voltage). In the case of the brain, the effective 'clock rate' is controlled by hormones (primarily triiodothyronine, aka T3), and energy availability is provided by glucose and other nutrients.
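To make the over-clocking analogy concrete, here is a toy model, entirely my own construction; the substrate names and numbers are hypothetical, not physiological values. Effective throughput is capped both by the hormonal 'clock rate' and by whichever substrate runs out first, echoing the bottleneck point above.

```python
def effective_throughput(clock_rate, substrate_supply, demand_per_op):
    """Toy model: throughput is the minimum of the 'clock rate' and the
    operation count each substrate's supply can sustain, so a single
    deficiency caps the whole system (a Liebig's-law-style bottleneck)."""
    limits = [
        supply / demand_per_op[name]
        for name, supply in substrate_supply.items()
    ]
    return min(clock_rate, *limits)

# Hypothetical units: raising the clock without raising glucose supply
# leaves throughput pinned at the glucose bottleneck.
demand = {"glucose": 2.0, "oxygen": 1.0, "cofactors": 0.5}
supply = {"glucose": 220.0, "oxygen": 120.0, "cofactors": 100.0}
print(effective_throughput(100.0, supply, demand))  # 100.0: clock-limited
print(effective_throughput(150.0, supply, demand))  # 110.0: glucose-limited
```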

It's not clear whether merely raising brain metabolism in this way will actually result in a corresponding increase in intelligence; however, I think it's unlikely that the opposite is possible (increasing intelligence without raising brain metabolism).

Rationalist fiction: a Slice of Life IN HELL

7 Ritalin 25 March 2014 05:02PM

"If you're sent to Hell for that, you wouldn't have liked it in Heaven anyway." 

This phrase inspired in me the idea of a Slice of Life IN HELL story. Basically, the strictest interpretation of the Abrahamic God turns out to be true, and, after Judgment Day, all the sinners (again, by the strictest standards), the pagans, the atheists, the gays, the heretics and so on end up in Hell, which is to say, most of humanity. Rather than a Fire and Brimstone torture chamber, this Hell is very much like earthly life, except that it runs on Murphy's Law turned Up To Eleven ("everything that can go wrong, will go wrong"), you can't die permanently, and it goes on forever. It's basically Life as a videogame, set to Maximum Difficulty, with real pain and suffering.

Our stories would focus on actually decent, sympathetic people, who are there for things like following the wrong religion, having sex outside missionary man-on-woman, neglecting the daily little rituals, or even just being lazy. They manage to live more-or-less decently because they're extremely cautious, rational, and methodical. Given that reality is out to get them, this is a constant uphill battle, and even the slightest negligence can have a terrible cost. Thankfully, they have all the time in eternity to learn from their mistakes.

This could be an interesting way to showcase rationalist principles, especially those regarding safety and planning, in a perpetual Worst Case Scenario environment. There's ample potential for constant conflict, and for sympathetic characters who, the audience will feel, really didn't deserve their fate. The central concept also seems classically strong to me: defying the Status Quo and cruel authorities by striving to be as excellent as one can be, even in the face of certain doom.

What do you guys think? There are lots of little details to specify, and there are many things that I believe should be marked as "must NOT be specified". Any help, ideas, or thoughts are very welcome.

LSD, Meditation, Enlightenment, and Ego Death

6 Fink 20 April 2014 07:41PM

A little background information first: I'm a computer science/neuroscience dual-major in my junior year of university. AGI is what I really want to work on, and I'm especially interested in Goertzel's OpenCog. Unfortunately I do not have nearly the understanding of the human mind I would like, let alone the knowledge of how to make a new one.

DavidM's post on meditation is particularly interesting to me. I've been practicing mindfulness-based meditation techniques for some time now and I've seen some solid results, but the concept of 'enlightenment' was always appealing to me, and I've always wanted to know if such a thing existed. I have been practicing his technique for a few weeks now, and although it is difficult, I believe I understand what he means by 'vibrations' in one's attentional focus.

I've experimented with psilocybin mushrooms for about a year now. Mostly for fun, sometimes for better understanding my own brain. Light doses have enhanced my perception and led me to re-evaluate my life from a different perspective, although I am never as clear-headed as I would like.

I've read that LSD provides a 'cleaner' experience while avoiding some of the thought-loops of mushrooms; it also lasts much longer. Stanislav Grof once said that LSD can be to psychology what the microscope is to biology: with deep introspection we can watch our thoughts coalesce. After months of looking for a reliable producer, and several 'look-alike' drugs, I finally obtained a few doses of LSD. Satisfied that it was the real thing, I took a single dose and fell into my standard meditation session, trying to keep my concentration on the breath.

I experienced what Wikipedia calls 'ego death'. That is, I felt my 'self' splitting into the individual sub-components that formed consciousness. Acid is well known for causing synaesthesia, and as I fell deeper into meditation I felt like I could actually see the way sensory experiences interacted with cognitive heuristics and rose to the level of conscious perception. I felt that I could see what 'I' really was: what Douglas Hofstadter referred to as a 'strange loop' looking back on itself, with my perception switching between sensory input, memories, and thought patterns resonating in frequency with DavidM's 'vibrations'. Of course I was under the effects of a hallucinogenic drug, but I felt my experience was quite lucid.

DavidM hasn't posted in years, which is a shame because I really want to see his third article and ask him more about it. I will continue practicing his enlightenment meditation techniques in an attempt to foster these experiences without the use of drugs. Has anyone here had experiences with psychedelic drugs or transcendental meditation? If so, could you tell me about them?

0.5% of amazon purchases to a charity of your choice (opt-in)

6 Jonathan_Graehl 02 April 2014 01:55AM

'MIRI' works in the search field when selecting a charity to receive 0.5% of your https://smile.amazon.com purchases.

Book Review: Kazdin's The Everyday Parenting Toolkit

6 Gunnar_Zarncke 31 March 2014 09:29PM

This is a review of The Everyday Parenting Toolkit: The Kazdin Method for Easy, Step-by-step, Lasting Change for You and Your Child by Alan E. Kazdin (all phrases in quotes below are from this book if not otherwise indicated). I was pointed to this book by tadamsmars' comment on Ignorance in Parenting.

This is a post in the sequence about parenting. I also see some cross-relations to learning and cognitive science in general. Kazdin's advice is applicable not only to children but also to adults, if you read the book with a mind open to the backing research (Kazdin actually gives some such examples to illustrate the methods).

Summary TL;DR

Define the positive behavior you do want. Communicate this clearly and provide events that make it likely to occur. Praise any occurrence of the positive behavior effusively. Think about and communicate consequences beforehand. Use mild and short punishments (if at all). Provide a healthy environment.

continue reading »

How valuable is volunteering?

6 JonahSinick 29 March 2014 09:55PM

This essay was written for high school and college students who are considering volunteering. I'm interested in finding high social value activities for high school and college students to engage in, and would be grateful for any suggestions.

High school and college students are often just starting to think about how to make a difference and improve the world. One salient option available to them is volunteering. How valuable is volunteering? 

One way in which volunteering can be valuable is that it can be enjoyable. This is the primary motivation of some volunteers. Another way in which volunteering can be valuable is that it can build skills. Building skills is valuable to the extent that you need them later on. As an example, working on an open source software project is often cited as a good way of developing programming skills. 

What of the direct social value of volunteering to others? There are many factors that cut against volunteering having social value to others in general:

continue reading »

Open thread, 24-30 March 2014

6 Metus 25 March 2014 07:42AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Duration set to six days to encourage Monday as first day.
