
Evaluating GiveWell as a startup idea based on Paul Graham's philosophy

11 VipulNaik 12 April 2014 02:04PM

Effective altruism is a growing movement, and a number of organizations (mostly foundations and nonprofits) have been started in the domain. One of the very first of these organizations, and arguably the most successful and influential, has been the charity evaluator GiveWell. In this blog post, I examine GiveWell's early history and see what factors in that history helped foster its success.

My main information source is GiveWell's original business plan (PDF, 86 pages). I'll simply refer to this as the "GiveWell business plan" later in the post and will not link to the source each time. If you're interested in what the GiveWell website looked like at the time, you can browse the website as of early May 2007 here.

To provide more context to GiveWell's business plan, I will look at it in light of Paul Graham's pathbreaking article How to Get Startup Ideas. The advice there is targeted at early-stage startups. GiveWell doesn't quite fit the "for-profit startup" mold, but GiveWell in its early stages was a nonprofit startup of sorts. Thus, it is instructive to see just how closely GiveWell's choices were in line with Paul Graham's advice.

There's one obvious way that this analysis is flawed and inconclusive: I do not systematically compare GiveWell with other organizations. There is no "control group" and no possibility of isolating individual aspects that predicted success. I intend to write additional posts later on the origins of other effective altruist organizations, after which a more fruitful comparison can be attempted. I think it's still useful to start with one organization and understand it thoroughly. But keep this limitation in mind before drawing any firm conclusions, or believing that I have drawn firm conclusions.

The idea: working on a real problem that one faces personally and is acutely familiar with, that is of deep interest to a (small) set of people right now, and that could eventually be of interest to many people

Graham writes (emphasis mine):

The very best startup ideas tend to have three things in common: they're something the founders themselves want, that they themselves can build, and that few others realize are worth doing. Microsoft, Apple, Yahoo, Google, and Facebook all began this way.

Why is it so important to work on a problem you have? Among other things, it ensures the problem really exists. It sounds obvious to say you should only work on problems that exist. And yet by far the most common mistake startups make is to solve problems no one has.

[...]

When a startup launches, there have to be at least some users who really need what they're making—not just people who could see themselves using it one day, but who want it urgently. Usually this initial group of users is small, for the simple reason that if there were something that large numbers of people urgently needed and that could be built with the amount of effort a startup usually puts into a version one, it would probably already exist. Which means you have to compromise on one dimension: you can either build something a large number of people want a small amount, or something a small number of people want a large amount. Choose the latter. Not all ideas of that type are good startup ideas, but nearly all good startup ideas are of that type.

Imagine a graph whose x axis represents all the people who might want what you're making and whose y axis represents how much they want it. If you invert the scale on the y axis, you can envision companies as holes. Google is an immense crater: hundreds of millions of people use it, and they need it a lot. A startup just starting out can't expect to excavate that much volume. So you have two choices about the shape of hole you start with. You can either dig a hole that's broad but shallow, or one that's narrow and deep, like a well.

Made-up startup ideas are usually of the first type. Lots of people are mildly interested in a social network for pet owners.

Nearly all good startup ideas are of the second type. Microsoft was a well when they made Altair Basic. There were only a couple thousand Altair owners, but without this software they were programming in machine language. Thirty years later Facebook had the same shape. Their first site was exclusively for Harvard students, of which there are only a few thousand, but those few thousand users wanted it a lot.

When you have an idea for a startup, ask yourself: who wants this right now? Who wants this so much that they'll use it even when it's a crappy version one made by a two-person startup they've never heard of? If you can't answer that, the idea is probably bad. [3]

You don't need the narrowness of the well per se. It's depth you need; you get narrowness as a byproduct of optimizing for depth (and speed). But you almost always do get it. In practice the link between depth and narrowness is so strong that it's a good sign when you know that an idea will appeal strongly to a specific group or type of user.

But while demand shaped like a well is almost a necessary condition for a good startup idea, it's not a sufficient one. If Mark Zuckerberg had built something that could only ever have appealed to Harvard students, it would not have been a good startup idea. Facebook was a good idea because it started with a small market there was a fast path out of. Colleges are similar enough that if you build a facebook that works at Harvard, it will work at any college. So you spread rapidly through all the colleges. Once you have all the college students, you get everyone else simply by letting them in.

GiveWell in its early history seems like a perfect example of this:

  • Real problem experienced personally: The problem of figuring out how and where to donate money was a personal problem that the founders experienced firsthand as customers, so they knew there was a demand for something like GiveWell.
  • Of deep interest to some people: The people who started GiveWell had a few friends in a similar situation: they wanted to know where best to donate money, but did not have the resources to do a full-fledged investigation. The number of such people may have been small, but since they intended to donate thousands of dollars each, enough of them had a deep interest in GiveWell's offerings.
  • Could eventually be of interest to many people: Norms around evidence and effectiveness could change gradually as more people started identifying as effective altruists. So, there was a plausible story for how GiveWell might eventually influence a large number of donors across the range from small donors to billionaires.

Quoting from the GiveWell business plan (pp. 3-7, footnotes removed; bold face in original):

GiveWell started with a simple question: where should I donate?

We wanted to give. We could afford to give. And we had no prior commitments to any particular charity; we were just looking for the channel through which our donations could help people (reduce suffering; increase opportunity) as much as possible.

The first step was to survey our options. We found that we had more than we could reasonably explore comprehensively. There are 2,625 public charities in the U.S. with annual budgets over $100 million, 88,812 with annual budgets over $1 million. Restricting ourselves to the areas of health, education (excluding universities), and human services, there are 480 with annual budgets over $100 million, 50,505 with annual budgets over $1 million.

We couldn’t explore them all, but we wanted to find as many as possible that fit our broad goal of helping people, and ask two simple questions: what they do with donors’ money, and what evidence exists that their activities help people?

Existing online donor resources, such as Charity Navigator, give only basic financial data and short, broad mission statements (provided by the charities and unedited). To the extent they provide metrics, they are generally based on extremely simplified, problematic assumptions, most notably the assumption that the less a charity spends on administrative expenses, the better. These resources could not begin to help us with our questions, and they weren’t even very useful in narrowing the field (for example, even if we assumed Charity Navigator’s metrics to be viable, there are 1,277 total charities with the highest possible rating, 562 in the areas of health, education and human services).

We scoured the Internet, but couldn’t find the answers to our questions either through charities’ own websites or through the foundations that fund them. It became clear to us that answering these questions was going to be a lot of work. We formed GiveWell as a formal commitment to doing this work, and to putting everything we found on a public website so other donors wouldn’t have to repeat what we did. Each of the eight of us chose a problem of interest (malaria, microfinance, diarrheal disease, etc.) – this was necessary in order to narrow our scope – and started to evaluate charities that addressed the problem.

[...]

We immediately found that there are enormous opportunities to help people, but no consensus whatsoever on how to do it best. [...]

Realizing that we were trying to make complex decisions, we called charities and questioned them thoroughly. We wanted to see what our money was literally being spent on, and for charities with multiple programs and regions of focus we wanted to know how much of their budget was devoted to each. We wanted to see statistics – or failing that, stories – about people
who’d benefited from these programs, so we could begin to figure out what charities were pursuing the best strategies. But when we pushed for these things, charities could not provide them.

They responded with surprise (telling us they rarely get questions as detailed as ours, even from multi-million dollar donors) and even suspicion (one executive from a large organization accused Holden of running a scam, though he wouldn’t explain what sort of scam can be run using information about a charity’s budget and activities). See Appendix A for details of these exchanges. What we saw led us to conclude that charities were neither accustomed to nor capable of answering our basic questions: what do you do, and what is the evidence that it works?

This is why we are starting the Clear Fund, the world’s first completely transparent charitable grantmaker. It’s not because we were looking for a venture to start; everyone involved with this project likes his/her current job. Rather, the Clear Fund comes simply from a need for a resource that doesn’t exist: an information source to help donors direct their money to where it will accomplish the most good.

We feel that the questions necessary to decide between charities aren’t being answered or, largely, asked. Foundations often focus on new projects and innovations, as opposed to scaling up proven ways of helping people; and even when they do evaluate the latter, they do not make what they find available to foster dialogue or help other donors (see Appendix D for more on this). Meanwhile, charities compete for individual contributions in many ways, from marketing campaigns to personal connections, but not through comparison of their answers to our two basic questions. Public scrutiny, transparency, and competition of charities’ actual abilities to improve the world is thus practically nonexistent. That makes us worry about the quality of their operations – as we would for any set of businesses that doesn’t compete on quality – and without good operations, a charity is just throwing money at a problem.

[...]

With money and persistence, we believe we can get the answers to our questions – or at least establish the extent to which different charities are capable of answering them. If we succeed, the tremendous amount of money available for solving the world’s problems will become better spent, and the world will reap enormous benefits. We believe our project will accomplish the following:
1. Help individual donors find the best charities to give to. [...]

2. Foster competition to find the best ways of improving the world. [...]

3. Foster global dialogue between everyone interested – both amateur and professional –
in the best tactics for improving the world.
[...]

4. Increase engagement and participation in charitable causes. [...]

All of the benefits above fall under the same general principle. The Clear Fund will put a new focus on the strategies – as opposed to the funds – being used to attack the world’s problems.

How do you know if the idea is scalable? You just gotta be the right person

We already quoted above GiveWell's reasons for believing that their idea could eventually influence a large volume of donations. But how could we know at the time whether their beliefs were reasonable? Graham writes (emphasis mine):

How do you tell whether there's a path out of an idea? How do you tell whether something is the germ of a giant company, or just a niche product? Often you can't. The founders of Airbnb didn't realize at first how big a market they were tapping. Initially they had a much narrower idea. They were going to let hosts rent out space on their floors during conventions. They didn't foresee the expansion of this idea; it forced itself upon them gradually. All they knew at first is that they were onto something. That's probably as much as Bill Gates or Mark Zuckerberg knew at first.

Occasionally it's obvious from the beginning when there's a path out of the initial niche. And sometimes I can see a path that's not immediately obvious; that's one of our specialties at YC. But there are limits to how well this can be done, no matter how much experience you have. The most important thing to understand about paths out of the initial idea is the meta-fact that these are hard to see.

So if you can't predict whether there's a path out of an idea, how do you choose between ideas? The truth is disappointing but interesting: if you're the right sort of person, you have the right sort of hunches. If you're at the leading edge of a field that's changing fast, when you have a hunch that something is worth doing, you're more likely to be right.

How well does GiveWell fare in terms of the potential of the people involved? Were the people who founded GiveWell (specifically Holden Karnofsky and Elie Hassenfeld) the "right sort of person" to found GiveWell? It's hard to give an honest answer that's not clouded by information available in hindsight. But let's try. On the one hand, neither of the co-founders had direct experience working with nonprofits. However, they had both worked in finance and the analytical skills they employed in the financial industry may have been helpful when they switched to analyzing evidence and organizations in the nonprofit sector (see the "Our qualifications" section of the GiveWell business plan). Arguably, this was more relevant to what they wanted to do with GiveWell than direct experience with the nonprofit world. Overall, it's hard to say (without the benefits of hindsight or inside information about the founders) that the founders were uniquely positioned, but the outside view indicators seem generally favorable.

Post facto, there seems to be some evidence that GiveWell's founders exhibited good aesthetic discernment. But this is based on GiveWell's success, so invoking that as a reason is a circular argument.

Schlep blindness?

In a different essay titled Schlep Blindness, Graham writes:

There are great startup ideas lying around unexploited right under our noses. One reason we don't see them is a phenomenon I call schlep blindness. Schlep was originally a Yiddish word but has passed into general use in the US. It means a tedious, unpleasant task.

[...]

One of the many things we do at Y Combinator is teach hackers about the inevitability of schleps. No, you can't start a startup by just writing code. I remember going through this realization myself. There was a point in 1995 when I was still trying to convince myself I could start a company by just writing code. But I soon learned from experience that schleps are not merely inevitable, but pretty much what business consists of. A company is defined by the schleps it will undertake. And schleps should be dealt with the same way you'd deal with a cold swimming pool: just jump in. Which is not to say you should seek out unpleasant work per se, but that you should never shrink from it if it's on the path to something great.

[...]

How do you overcome schlep blindness? Frankly, the most valuable antidote to schlep blindness is probably ignorance. Most successful founders would probably say that if they'd known when they were starting their company about the obstacles they'd have to overcome, they might never have started it. Maybe that's one reason the most successful startups of all so often have young founders.

In practice the founders grow with the problems. But no one seems able to foresee that, not even older, more experienced founders. So the reason younger founders have an advantage is that they make two mistakes that cancel each other out. They don't know how much they can grow, but they also don't know how much they'll need to. Older founders only make the first mistake.

It could be argued that schlep blindness was the reason nobody else had started something like GiveWell before GiveWell's founders did: the idea seemed like so much work that most people never went near it. Why then did GiveWell's founders select the idea? There's no evidence to suggest that Graham's "ignorance" remedy was the reason. Rather, the GiveWell business plan explicitly embraces complexity. In fact, one of its early section titles is Big Problems with Complex Solutions. It seems that the GiveWell founders found the challenge more exciting than deterring. Lack of intimate knowledge of the nonprofit sector might have been a factor, but it probably wasn't a driving one.

Competition

Graham writes:

Because a good idea should seem obvious, when you have one you'll tend to feel that you're late. Don't let that deter you. Worrying that you're late is one of the signs of a good idea. Ten minutes of searching the web will usually settle the question. Even if you find someone else working on the same thing, you're probably not too late. It's exceptionally rare for startups to be killed by competitors—so rare that you can almost discount the possibility. So unless you discover a competitor with the sort of lock-in that would prevent users from choosing you, don't discard the idea.

If you're uncertain, ask users. The question of whether you're too late is subsumed by the question of whether anyone urgently needs what you plan to make. If you have something that no competitor does and that some subset of users urgently need, you have a beachhead.

[...]

You don't need to worry about entering a "crowded market" so long as you have a thesis about what everyone else in it is overlooking. In fact that's a very promising starting point. Google was that type of idea. Your thesis has to be more precise than "we're going to make an x that doesn't suck" though. You have to be able to phrase it in terms of something the incumbents are overlooking. Best of all is when you can say that they didn't have the courage of their convictions, and that your plan is what they'd have done if they'd followed through on their own insights. Google was that type of idea too. The search engines that preceded them shied away from the most radical implications of what they were doing—particularly that the better a job they did, the faster users would leave.

A crowded market is actually a good sign, because it means both that there's demand and that none of the existing solutions are good enough. A startup can't hope to enter a market that's obviously big and yet in which they have no competitors. So any startup that succeeds is either going to be entering a market with existing competitors, but armed with some secret weapon that will get them all the users (like Google), or entering a market that looks small but which will turn out to be big (like Microsoft).

Did GiveWell enter a crowded market? As Graham suggests above, it depends heavily on how you define the market. Charity Navigator existed at the time, and GiveWell and Charity Navigator compete to serve certain donor needs. But they are also sufficiently different. Here's what GiveWell said about Charity Navigator in the GiveWell business plan:

Existing online donor resources, such as Charity Navigator, give only basic financial data and short, broad mission statements (provided by the charities and unedited). To the extent they provide metrics, they are generally based on extremely simplified, problematic assumptions, most notably the assumption that the less a charity spends on administrative expenses, the better. These resources could not begin to help us with our questions, and they weren’t even very useful in narrowing the field (for example, even if we assumed Charity Navigator’s metrics to be viable, there are 1,277 total charities with the highest possible rating, 562 in the areas of health, education and human services)

In other words, GiveWell did enter a market with existing players, indicating that there was a need for things in the broad domain that GiveWell was offering. At the same time, what GiveWell offered was sufficiently different that it was not bogged down by the competition.

Incidentally, in recent times, people from Charity Navigator have been critical of GiveWell and other "effective altruism" proponents. Their critique has itself come in for some criticism, and some people have argued that it may be a response to GiveWell's growth, which has led GiveWell to move money of the same order of magnitude as Charity Navigator does (see the discussion here for more). Indeed, in 2013, GiveWell surpassed Charity Navigator in money moved through the website, though we don't have clear evidence of whether GiveWell is cutting into Charity Navigator's growth.

Other precursors (of sorts) to GiveWell, mentioned by William MacAskill in a Facebook comment, are the Poverty Action Lab and the Copenhagen Consensus.

How prescient was GiveWell?

With the benefit of hindsight, how impressive were GiveWell's early plans at predicting its later trajectory? Note that prescience in predicting the later trajectory could also be interpreted as rigidity of plan and unwillingness to change. But since GiveWell appears to have been quite a success, there is a prior in favor of the prescience being a good sign: if GiveWell had failed, having predicted everything it would do would be the opposite of impressive, but given its success, having predicted things in advance also indicates that it chose a good strategy from the outset.

Note that I'm certainly not claiming that a startup's failure to predict the future should be a big strike against it. As long as the organization can adapt to and learn from new information, it's fine. But of course, getting more things right from the start is better to the extent it's feasible.

By and large, both the vision and the specific goals outlined in the plan were quite prescient. I noted the following differences between the plan then and the reality as it transpired:

  • In the plan, GiveWell said it would try to identify top charities in a few select areas (they listed seven areas) and refrain from comparing very different domains. Over the years, they have moved more in the direction of directly comparing different domains and offering a few top charities culled across all domains. Even though they seem to have been off in their plan, they were directionally correct compared to what existed. They were already consolidating different causes within the same broad category. For instance, they write (GiveWell business plan, p. 21):

     

    A charity that focuses on fighting malaria and a charity that focuses on fighting tuberculosis are largely aiming for the same end goal – preventing death – and if one were clearly better at preventing death than the other, it would be reasonable to declare it a better use of funds. By contrast, a charity that focuses on creating economic opportunity has a fundamentally different end goal. It may be theoretically possible to put jobs created and lives saved in the same terms (and there have been some attempts to create metrics that do so), but ultimately different donors are going to have very different perspectives on whether it’s more worthwhile to create a certain number of jobs or prevent a certain number of deaths.

  • GiveWell didn't clearly predict that it would evolve into a more "foundation"-like entity. At the time of the business plan, they envisioned themselves as deriving their negotiating power with nonprofits through their role as grantmakers. They then transitioned to deriving their power largely from their role as recommenders of top charities. Then, around 2012, following the collaboration with Good Ventures, they switched back to grantmaker mode, but in a far grander way than they'd originally envisaged.
  • At the time of the GiveWell business plan, they saw small donors as their main source of money moved. In recent years, as they moved to more "foundation"-like behavior, they seem to have started shifting attention to influencing the giving decisions of larger donors. This might be purely due to the unpredictable fact that they joined hands with the Good Ventures foundation, rather than due to any systematic or predictable reason. It remains to be seen whether they influence more donations by very large donors in the future. Another aspect of this is that GiveWell's original business plan was more ambitious about influencing the large number of small donors out there than (I think) GiveWell is now.
  • GiveWell seems to have moved away from a focus on examining individual charities, first to understanding the landscape well enough to directly identify the best opportunities, and then to comparing broad causes. The GiveWell business plan, on the other hand, repeatedly talked about "pitting charities against each other" (p. 11) as their main focal activity. In recent years, GiveWell has stepped back and concentrated more on using their big-picture understanding of the landscape to efficiently identify the very best opportunities, rather than evaluating all relevant charities and causes. This is reflected in their conversation notes as well as the GiveWell Labs initiative. Since creating GiveWell Labs, they have shifted more toward thinking at the level of causes rather than individual interventions.

The role of other factors in GiveWell's success

Was GiveWell destined to succeed, or did it get lucky? I believe it was a mix of both: GiveWell was bound to succeed in some measure, but a number of chance factors played a role in its reaching its current level of success. A recent blog post by GiveWell titled Our work on outreach contains some relevant evidence. The single person who may have been most key to GiveWell's success is the ethicist and philosopher Peter Singer. Singer is a passionate advocate of the idea that people are morally obligated to donate money to help the world's poorest people. He played a major role in GiveWell's success in the following ways:

  • Singer both encouraged people to give and directed people interested in giving to GiveWell's website when they asked him where they should give.
  • Singer was an inspiration for many effective giving organizations. He is credited as an inspiration by Oxford ethicist Toby Ord and his wife, physician Bernadette Young, who together started Giving What We Can, a society promoting effective giving. Giving What We Can used GiveWell's research for its own recommendations and pointed people to the website. In addition, Singer's book The Life You Can Save inspired the creation of the eponymous organization. Giving What We Can was a starting point for related organizations in the nascent effective altruism movement, including 80000 Hours, the umbrella group The Centre for Effective Altruism, and many other resources.
  • Cari Tuna and her husband (and Facebook co-founder) Dustin Moskovitz read about GiveWell in The Life You Can Save by Peter Singer around the same time they met Holden through a mutual friend. Good Ventures, the foundation set up by Tuna and Moskovitz, has donated several million dollars to GiveWell's recommended charities (over 9 million USD in 2013), and the two organizations have collaborated somewhat. There is more in this blog post by Cari Tuna.

GiveWell's connection to the LessWrong community might also have been important, though less so than Peter Singer. The connection may have arisen through the efforts of a few people interested in GiveWell who discussed it on LessWrong. Jonah Sinick's LessWrong posts about GiveWell (mentioned in GiveWell's post about their work on outreach) are an example (full disclosure: Jonah Sinick is collaborating with me on Cognito Mentoring). Note that although only about 3% of donations made through GiveWell are explicitly attributable to LessWrong, GiveWell has received a lot of intellectual engagement from the LessWrong community and from other organizations and individuals connected with it.

How should the above considerations modify our view of GiveWell's success? I think the key thing GiveWell did correctly was become a canonical go-to reference for donors looking to make good giving decisions. By staking out that space early on, they were able to capitalize on Peter Singer's advocacy. The benefit ran both ways: Singer's arguments were made more effective by the existence of GiveWell. The first line of counterargument to Singer's claim is that most charities aren't cost-effective; being able to point to a resource that helps identify good charities makes people take his argument more seriously.

I think that GiveWell's success at making itself the canonical source was more important than the specifics of their research. But the specifics may have been important in convincing a sufficiently large critical mass of influential people to recommend GiveWell as a canonical source, so the factors are hard to disentangle.

Would something like GiveWell have existed if GiveWell hadn't existed? How would the effective altruism movement be different?

These questions are difficult to explore, and discussing them would take us too far afield. This thread in the Effective Altruists Facebook group offers an interesting discussion. The upshot is that, although Giving What We Can was started two years after GiveWell, people involved with its early history say that the core ideas of looking at cost-effectiveness and recommending the very best places to donate money were mooted before its formal inception, some time around 2006 (before GiveWell had been formally created). At the time, the people involved were unaware of GiveWell. William MacAskill says that GWWC might have done more work on the cost-effectiveness side if GiveWell hadn't already been doing it.

I ran this post by Jonah Sinick and also emailed a draft to the GiveWell staff. I implemented some of their suggestions, and am grateful to them for taking the time to comment on my draft. Any responsibility for errors, omissions, and misrepresentations is solely mine.

Supply, demand, and technological progress: how might the future unfold? Should we believe in runaway exponential growth?

11 VipulNaik 11 April 2014 07:07PM

Warning: This is a somewhat long-winded post with a number of loosely related thoughts and no single, cogent thesis. I have included a TL;DR after the introduction, listing the main points. All corrections and suggestions are greatly appreciated.

It's commonly known, particularly to LessWrong readers, that in the world of computer-related technology, key metrics have been doubling fairly quickly, with doubling times ranging from 1 to 3 years for most of them. The most famous paradigmatic example is Moore's law, which predicts that the number of transistors on integrated circuits doubles approximately every two years. The law itself stood up quite well until about 2005, but broke down after that (see here for a detailed overview of the breakdown by Sebastian Nickel). Another similar proposed law is Kryder's law, which looks at the doubling of hard disk storage capacity. Chapters 2 and 3 of Ray Kurzweil's book The Singularity is Near go into detail regarding this technological acceleration (for an assessment of Kurzweil's prediction track record, see here).

One of the key questions facing futurists, including those who want to investigate the Singularity, is the question of whether such exponential-ish growth will continue for long enough for the Singularity to be achieved. Some other reasonable possibilities:

  • Growth will continue for a fairly long time, but slow down to a linear pace and therefore we don't have to worry about the Singularity for a very long time.
  • Growth will continue but converge to an asymptotic value (well below the singularity threshold) beyond which improvements aren't possible. Therefore, growth will progressively slow down but still continue as we come closer and closer to the asymptotic value.
  • Growth will come to a halt, because there is insufficient demand at the margin for improvement in the technology.

Ray Kurzweil strongly adheres to the exponential-ish growth model, at least for the duration necessary to reach computers that are thousands of times as powerful as humanity (that's what he calls the Singularity). He argues that although individual paradigms (such as Moore's law) eventually run out of steam, new paradigms tend to replace them. In the context of computational speed, efficiency, and compactness, he mentions nanotechnology, 3D computing, DNA computing, quantum computing, and a few other possibilities as candidates for what might take over once Moore's law is exhausted for good.

Intuitively, I've found the assumption of continued exponential growth wrong. I'll hasten to add that I'm mathematically literate and so it's certainly not the case that I fail to appreciate the nature of exponential growth — in fact, I believe my skepticism is rooted in the fact that I do understand exponential growth. I do think the issue is worth investigating, both from the angle of whether the continued improvements are technologically feasible, and from the angle of whether there will be sufficient incentives for people to invest in achieving the breakthroughs. In this post, I'll go over the economics side of it, though I'll include some technology-side considerations to provide context.

TL;DR

I'll make the following general points:

  1. Industries that rely on knowledge goods tend to have downward-sloping long-run supply curves.
  2. Industries based on knowledge goods exhibit experience curve effects: what matters is cumulative demand rather than demand in a given time interval. The irreversibility of creating knowledge goods creates a dynamic different from that in other industries.
  3. What matters for technological progress is what people investing in research think future demand will be like. Bubbles might actually be beneficial if they help lay the groundwork of investment that is helpful for many years to come, even though the investment wasn't rational for individual investors.
  4. Each stage of investment requires a large enough number of people with just the right level of willingness to pay (see the PS for more). A diverse market, with people at various intermediate stages of willingness to pay, is crucial for supporting a technology through its stages of progress.
  5. The technological challenges involved in improving price-performance tradeoffs may differ for the high, low, and middle parts of the market for a given product. The more similar these challenges, the faster progress is likely to be (because the same research helps with all the market segments together).
  6. The demand-side story most consistent with exponential technological progress is one where people's desire for improvement in the technologies they are using is proportional to the current level of those technologies. But this story seems inconsistent with the facts: people's appetite for improvement probably declines once technologies get good enough. This creates problems for the economic incentive side of the exponential growth story.
  7. Some exponential growth stories require a number of technologies to progress in tandem. Progress in one technology helps facilitate demand for another complementary technology in this story. Such progress scenarios are highly conjunctive, and it is likely that actual progress will fall far short of projected exponential growth.

#1: Short versus long run for supply and demand

In the short run, supply curves are upward-sloping and demand curves are downward-sloping. In particular, this means that when the demand curve expands (more people wanting to buy the item at the same price), the price and the quantity traded both increase (rising demand creates shortages at the current price, motivating suppliers to increase supplies and also to charge more money given the competition between buyers). Similarly, if the supply curve expands (more of the good getting produced at the same price), the price decreases and the quantity traded increases. These are robust empirical observations that form the bread and butter of microeconomics, and they're likely true in most industries.

In the long run, however, things become different because people can reallocate their fixed costs. The more important the allocation of fixed costs is in determining the short-run supply curve, the greater the difference between the short-run supply curves corresponding to different choices of fixed cost allocation. In particular, if there are increasing returns to scale on fixed costs (for instance, a factory that produces a million widgets costs less than 1,000 times as much as a factory that produces a thousand widgets) and fixed costs contribute a large fraction of production costs, then the long-run supply curve might end up being downward-sloping. An industry where the long-run supply curve is downward-sloping is called a decreasing cost industry (see here and here for more). (My original version of this paragraph was incorrect; see CoItInn's comment and my response below it for more.)
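To make the decreasing-cost idea concrete, here is a minimal numerical sketch (my own illustration, with made-up cost parameters, not figures from the post): a capacity cost that scales sublinearly with planned output plus a constant marginal cost yields a long-run average cost that falls as the planned scale grows.

```python
# Toy decreasing-cost industry: the fixed (capacity) cost scales sublinearly
# with planned output, so the long-run average cost falls with scale.
def long_run_average_cost(q, f0=1000.0, alpha=0.6, marginal=2.0):
    """Average cost per unit when building capacity for q units per year.

    f0 * q**alpha : capacity cost, sublinear in q because alpha < 1
    marginal * q  : variable production cost
    (All parameter values are arbitrary, chosen only for illustration.)
    """
    return (f0 * q ** alpha + marginal * q) / q

for q in (1_000, 10_000, 100_000, 1_000_000):
    print(f"capacity {q:>9,}: average cost ~ {long_run_average_cost(q):.2f}")
# Average cost drops from ~65 to ~6 as capacity grows a thousandfold,
# which is what a downward-sloping long-run supply curve looks like.
```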

#2: Introducing technology, the arrow of time, and experience curves

The typical explanation for why some industries are decreasing cost industries is fixed costs of investment in infrastructure that scale sublinearly with the amount produced. For instance, running ten flights from New York to Chicago costs less than ten times as much as running one flight, perhaps because the ten flights can share common resources such as airport facilities or even airplanes, and can offer backups for one another in case of flight cancellations and overbooking. The fixed costs of setting up a factory that can produce a million hard drives a year are less than 1,000 times the fixed costs of setting up a factory that can produce a thousand hard drives a year. A mass transit system for a city of a million people costs less than 100 times as much as a mass transit system for a city of the same area with 10,000 people.

These explanations for decreasing cost have only a moderate level of time-directionality. When I talk of time-directionality, I am thinking of questions like: "What happens if demand is high in one year, and then falls? Will prices go back up?" It is true that some forms of investment in infrastructure are durable, and therefore, once the infrastructure has been built in anticipation of high demand, costs will continue to stay low even if demand falls back. However, much of the long-term infrastructure can be repurposed, causing prices to go back up. If demand for New York-Chicago flights reverts to low levels, the planes can be diverted to other routes. If demand for hard drives falls, the factory producing them can (at some refurbishing cost) produce flash memory or chips or something totally different. As for intra-city mass transit systems, some are easier to repurpose than others: buses and physical train cars can be sold, but the rail lines are harder to repurpose. In all cases, there is some time-directionality, but not a lot.

Technology, particularly the knowledge component thereof, is probably an exception of sorts. Knowledge, once created, is very cheap to store, and, unlike physical infrastructure, it can't really be destroyed or traded away for other knowledge. Consider a decreasing cost industry where a large part of the efficiency of scale comes from larger demand volumes justifying bigger investments in research and development that lower production costs permanently (regardless of actual future demand volumes). Once the "genie is out of the bottle" with respect to the new technologies, the lower costs will remain, even in the face of flagging demand. However, flagging demand might stall further technological progress.

This sort of time-directionality is closely related to (though not the same as) the idea of experience curve effects: instead of looking at the quantity demanded or supplied per unit time in a given time period, it's more important to consider the cumulative quantity produced and sold, and the economies of scale arise with respect to this cumulative quantity. Thus, people who have been in the business for ten years enjoy a better price-performance tradeoff than people who have been in the business for only three years, even if they've been producing the same amount per year.
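The experience-curve idea has a standard textbook form: unit cost falls by a constant fraction every time cumulative output doubles. The sketch below is my own illustration of that standard form (the 80% learning rate is an arbitrary example, not a number from the post).

```python
# Experience curve: unit cost falls by a constant fraction per doubling of
# *cumulative* output (an "80% curve" keeps 80% of the cost per doubling).
import math

def unit_cost(cumulative_units, first_unit_cost=100.0, learning_rate=0.8):
    b = math.log(learning_rate, 2)   # exponent is negative, so cost declines
    return first_unit_cost * cumulative_units ** b

for n in (1, 2, 4, 8, 1_000, 1_000_000):
    print(f"after {n:>9,} cumulative units: unit cost ~ {unit_cost(n):7.2f}")
# Two suppliers producing the same amount per year can face very different
# costs if one has a decade's head start in cumulative production.
```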

The concept of price skimming is also potentially relevant.

#3: The genie out of the bottle, and gaining from bubbles

The "genie out of the bottle" character of technological progress leads to some interesting possibilities. If suppliers think that future demand will be high, then they'll invest in research and development that lowers the long-run cost of production, and those lower costs will stick permanently, even if future demand turns out to be not too high. This depends on the technology not getting lost if the suppliers go out of business — but that's probably likely, given that suppliers are unlikely to want to destroy cost-lowering technologies. Even if they go out of business, they'll probably sell the technology to somebody who is still in business (after all, selling their technology for a profit might be their main way of recouping some of the costs of their investment). Assuming you like the resulting price reductions, this could be interpreted as an argument in favor of bubbles, at least if you ignore the long-term damage that these might impose on people's confidence to invest. In particular, the tech bubble of 1998-2001 spurred significant investments in Internet infrastructure (based on false premises) as well as in the semiconductor industry, permanently lowering the prices of these, and facilitating the next generation of technological development. However, the argument also ignores the fact that the resources spent on the technological development could instead have gone to other even more valuable technological developments. That's a big omission, and probably destroys the case entirely, except for rare situations where some technologies have huge long-term spillovers despite insufficient short-term demand for a rational for-profit investor to justify investment in the technology.

#4: The importance of market diversity and of valuable intermediate milestones

The crucial ingredient needed for technological progress is that demand from a segment with just the right level of purchasing power should be sufficiently high. A small population that's willing to pay exorbitant amounts won't spur investments in cost-cutting: for instance, if production costs are $10 per piece and 30 people are willing to pay $100 per piece, then pushing production costs down from $10 to $5 per piece yields a net gain of only $150, a pittance compared to the existing profit of $2700. On the other hand, if there are 300 people willing to pay $10 per piece, existing profit is zero, whereas the profit arising from reducing the cost to $5 per piece is $1500. On the third hand, people willing to pay only $1 per piece are useless in terms of spurring investment to reduce the cost to $5, since they won't buy at that price anyway.
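The arithmetic in the paragraph above can be written out as a quick sketch (same numbers as in the text; the incentive to cut costs is the extra profit the cut unlocks):

```python
# Incentive to invest in cost-cutting = extra profit the lower cost unlocks.
def profit(buyers, price, unit_cost):
    return buyers * (price - unit_cost)

# 30 buyers willing to pay $100: the cut from $10 to $5 adds little.
print(profit(30, 100, 10))                        # 2700 (existing profit)
print(profit(30, 100, 5) - profit(30, 100, 10))   # 150  (gain from the cut)

# 300 buyers willing to pay exactly $10: the cut is what creates the profit.
print(profit(300, 10, 10))                        # 0
print(profit(300, 10, 5))                         # 1500

# Buyers willing to pay only $1 still don't buy at a $5 cost, so they add
# no incentive at all for getting from $10 down to $5.
```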

Building on the preceding point, the market segment that plays the most critical role in pushing the frontier of technology can change as the technology improves. Initially, when prices are too high, the segment that pushes the technology further is the small high-paying elite (the early adopters). As prices fall, the market segment that plays the most critical role becomes less elite and less willing to pay. In a sense, the market segments willing to pay more are "freeriding" off the others: they don't care enough to strike a tough bargain, but they benefit from the lower prices resulting from the others who do. Market segments for whom the technology is still too expensive also benefit in terms of future expectations. Poor people who couldn't afford mobile phones in 1994 benefited from the rich people who generated demand for the phones in 1994 and the middle-income people who generated demand for them in 2004, so that now, in 2014, the phones are cost-effective for many of the poor people.

It becomes clear from the above that the continued operation of technological progress depends on the continued expansion of the market into segments that are progressively larger and willing to pay less. Note that the new populations don't have to be different from the old ones — it could happen that the earlier population has a sea change in expectations and demands more from the same suppliers. But it seems like the effect would be greater if the population size expanded and the willingness to pay declined in a genuine sense (see the PS). Note, however, that if the willingness to pay for the new population was dramatically lower than that for the earlier one, there would be too large a gap to bridge (as in the example above, going from customers willing to pay $100 to customers willing to pay $1 would require too much investment in research and development and may not be supported by the market). You need people at each intermediate stage to spur successive stages of investment.

A  closely related point is that even though improving a technology by a huge factor (such as 1000X) could yield huge gains that would, on paper, justify the cost of investment, the costs in question may be too large and the uncertainty may be too high to justify the investment. What would make it worthwhile is if intermediate milestones were profitable. This is related to the point about gradual expansion of the market from a small number of buyers with high willingness to pay to a large number of buyers with low willingness to pay.

In particular, the vision of the Singularity is very impressive, but simply having that kind of end in mind 30 years down the line isn't sufficient for commercial investment in the technological progress that would be necessary. The intermediate goals must be enticing enough.

#5: Different market segments may face different technological challenges

There are two ends at which technological improvement may occur: the frontier end (of the highest capacity or performance that's available commercially) and the low-cost end (the lowest cost at which something useful is available). To some extent, progress at either end helps with the other, but the relationship isn't perfect. The low-cost end caters to a larger mass of low-paying customers and the high-cost end caters to a smaller number of higher-paying customers. If progress on either end complements the other, that creates a larger demand for technological progress on the whole, with each market segment freeriding off the other. If, on the other hand, progress at the two ends requires distinct sets of technological innovations, then overall progress is likely to be slower.

In some cases, we can identify more than two market segments based on cost, and the technological challenge for each market segment differs.

Consider the case of USB flash drives. We can broadly classify the market into three segments:

  • At the high end, there are 1 TB USB 3.0 flash drives costing around $3000. These may appeal to power users who regularly transfer or back up movies and videos using USB drives.
  • In the middle (the segment that most customers in the First World, and their equivalents elsewhere in the world, would consider) are flash drives in the 16-128 GB range, with prices ranging from $10 to $100. These are typically used to transfer documents and install software, with the occasional transfer of a movie.
  • At the "low" end are flash drives with 4 GB or less of storage space. These are sometimes ordered in bulk for organizations and distributed to individual members. They may be used by people who are highly cash-constrained (so that even a $10 cost is too much) and don't anticipate needing to transfer huge files over a USB flash drive.

The cost challenges in the three market segments differ:

  • At the high end, the challenges of miniaturization of the design dominate.
  • At the middle, NAND flash memory is a critical determinant of costs.
  • At the low end, the critical factor determining cost is the fixed costs of production, including the costs of packaging. Reducing these would presumably require cheaper, more automated, more efficient production and packaging.

Progress in the three areas is somewhat related, but only somewhat. In particular, the middle is the part that has seen the most progress over the last decade or so, perhaps because demand in this segment is the most robust and price-sensitive, or because the challenges there are the easiest to tackle. Note also that the definitions of the low, middle, and high end are themselves subject to change. Ten years ago, there wasn't really a low or high end (more on this in the historical anecdote below). More recently, some capacities have moved from the high end to the middle, and others have moved from the middle to the low end.

#6: How does the desire for more technological progress relate to the current level of a technology? Is it proportional, as per the exponential growth story?

Most of the discussion of laws such as Moore's law and Kryder's law focuses on the question of technological feasibility. But demand-side considerations matter, because they are what motivates investments in these technologies. In particular, we might ask: to what extent do people value continued improvements in processing speed, memory, and hard disk space, directly or indirectly?

The answer most consistent with exponential growth is that whatever level you are currently at, you pine for having more in a fixed proportion to what you currently have. For instance, for hard disk space, one theory could be that if you can buy x GB of hard disk space for $1, you'd be really satisfied only with 3x GB of hard disk space for $1, and that this relationship will continue to hold whatever the value of x. This model relates to exponential growth because it means that the incentives for proportional improvement remain constant with time. It doesn't imply exponential growth (we still have to consider technological hurdles) but it does take care of the demand side. On the other hand, if the model were false, it wouldn't falsify exponential growth, but it should make us more skeptical of claims that exponential growth will continue to be robustly supported by market incentives.
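To state this more precisely, here are two toy demand-side models (my own hypothetical formalizations, not from the post): a proportional-desire model, under which the pull for further improvement never weakens, and a satiation model, under which the pull fades once the technology is "good enough", which is the pattern the anecdotes below suggest.

```python
# Two toy models of how much capability users still want, given that they
# currently have x (e.g. GB per dollar). Functional forms are hypothetical.
def proportional_desire(x, k=3.0):
    return k * x            # always want k times the current level

def satiation_desire(x, enough=1_000.0):
    return max(x, enough)   # want a fixed absolute level; beyond that, done

for x in (1, 10, 100, 1_000, 10_000):
    print(f"x = {x:>6}: proportional wants {proportional_desire(x) / x:.1f}x, "
          f"satiation wants {satiation_desire(x) / x:.1f}x")
# The proportional model keeps demanding a constant 3.0x improvement at every
# level (the demand side an exponential trend needs); the satiation model's
# desired multiple collapses toward 1.0x as x grows.
```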

How close is the proportional desire model to reality? I think it's a bad description. I will take a couple of examples to illustrate.

  • Hard disk space: When I started using computers in the 1990s, I worked on a computer with a hard disk size of 270 MB (that included space for the operating system). The hard disk really did get full just with ordinary documents and spreadsheets and a few games played on monochrome screens — no MP3s, no photos, no videos, no books stored as PDFs, and minimal Internet browsing support. When I bought a computer in 2007, it had 120 GB (105 GB accessible), and when I bought a computer last year, it had 500 GB (450 GB accessible). I can say quite categorically that the experiences are qualitatively different. I no longer have to think about disk space considerations when downloading PDFs, books, or music — but keeping hard disk copies of movies and videos might still give me pause in the aggregate. I actually downloaded a 10 GB offline version of Wikipedia, something that gave me only a small amount of pause with regard to disk space requirements. Do I clamor for an even larger hard disk? Given that I like to store videos and movies and offline Wikipedia, I'd be happy if the next computer I buy (maybe 7-10 years down the line?) had a few terabytes of storage. But the issue lacks anything like the urgency that running out of disk space had back in the day. I probably wouldn't be willing to pay much for improvements in disk space at the margin. And I'm probably at the "use more disk space" extreme of the spectrum — many of my friends have machines with 120 GB hard drives and are nowhere near running out of space. Basically, the strong demand imperative that existed in the past for improving hard drive capacity no longer exists (here's a Facebook discussion I initiated on the subject).
  • USB flash drives: In 2005, I bought a 128 MB USB flash drive for about $50 USD. At the time, things like Dropbox didn't exist, and the Internet wasn't too reliable, so USB flash drives were the best way of both backing up and transferring stuff. I would often come close to running out of space on my flash drive just transferring essential items. In 2012, I bought two 32 GB USB flash drives for a total cost of $32 USD. I used one of them to back up all my documents plus a number of my favorite movies, and still had a few GB to spare. The flash drives do prove inadequate for transferring large numbers of videos and movies, but those are niche needs that most people don't have. It's not clear to me that people would be willing to pay more for a 1 TB USB flash drive (a few friends I polled on Facebook listed reservation prices for a 1 TB USB flash drive ranging from $45 to $85. Currently, $85 is the approximate price of 128 GB USB flash drives; here's the Facebook discussion). At the same time, it's not clear that lowering the cost of production for the 32 GB USB flash drive would significantly increase the number of people who would buy one. On either end, therefore, the incentives for innovation seem low.

#7: Complementary innovation and high conjunctivity of the progress scenario

The discussion of the hard disk and USB flash drive examples suggests one way to rescue the proportional desire and exponential growth views. Namely, the problem isn't that people's desires aren't growing fast enough; it's that complementary innovations aren't happening fast enough. In this view, if processor speed improved dramatically, new applications enabled by that improvement would revive the demand for extra hard disk space and NAND flash memory. Possibilities in this direction include highly redundant backup systems (including peer-to-peer backup), extensive internal logging of activity (so that any accidental changes can be easily located and undone), extensive offline caching of websites (so that temporary lack of connectivity has minimal impact on browsing experience), and applications that rely on large hard disk caching to complement memory for better performance.

This rescues continued exponential growth, but at a high price: we now need to make sure that a number of different technologies are progressing simultaneously. Any one of these technologies slowing down can cause demand for the others to flag. The growth scenario becomes highly conjunctive (you need a lot of particular things to happen simultaneously), and it's highly unlikely to remain reliably exponential over the long run.
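A back-of-the-envelope way to see the fragility of a conjunctive scenario (illustrative numbers of my own choosing, and treating the components as roughly independent, which is itself an approximation):

```python
# If sustained exponential progress needs n complementary technologies to
# all keep pace, and each does so with probability p (treated as roughly
# independent), the whole conjunction survives with probability p ** n.
for p in (0.9, 0.8, 0.7):
    for n in (2, 5, 10):
        print(f"p = {p}, n = {n:>2}: joint probability ~ {p ** n:.2f}")
# Even individually likely components (p = 0.9) leave only about a 35%
# chance that ten of them all keep pace at once.
```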

I personally think there's some truth to the complementary innovation story, but I think the flagging of demand in absolute terms is also an important component of the story. In other words, even if home processors did get a lot faster, it's not clear that the creative applications this would enable would have enough of a demand to spur innovation in other sectors. And even if that's true at the current margin, I'm not sure how long it will remain true.

This blog post was written in connection with contract work I am doing for the Machine Intelligence Research Institute, but it represents my own views and has not been vetted by MIRI. I'd like to thank Luke Muehlhauser (MIRI director) for spurring my interest in the subject, Jonah Sinick and Sebastian Nickel for helpful discussions on related matters, and my Facebook friends who commented on the posts I've linked to above.

Comments and suggestions are greatly appreciated.

PS: In the discussion of different market sectors, I argued that the presence of larger populations with lower willingness to pay might be crucial in creating market incentives to further improve a technology. It's worth emphasizing here that the absolute size of the incentive depends on the population more than on the willingness to pay. If the product cost is reduced from $10 to $5, the extra profit from a population of 300 people willing to pay at least $10 is $1,500, regardless of the precise amounts they are willing to pay. But as an empirical matter, accessing larger populations requires going to lower levels of willingness to pay (that's what it means to say that demand curves slope downward). Moreover, the nature of the current distribution of disposable wealth (as well as willingness to experiment with technology) around the world is such that the increase in population size is huge as we go down the rungs of willingness to pay. Finally, the proportional gain from reducing production costs is higher for populations with lower willingness to pay, and proportional gains might often be better proxies for the incentives to invest than absolute gains.
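A minimal worked version of that arithmetic in Python (the larger "newly reachable" segment and its willingness to pay are invented purely for illustration):

    # Worked version of the arithmetic above. The 3,000-person segment and its
    # $7 willingness to pay are invented for illustration.
    old_cost, new_cost = 10.0, 5.0

    # 300 existing buyers willing to pay at least $10: the extra profit from the
    # cost reduction is 300 * ($10 - $5) = $1,500, whether they would have paid
    # $10 or $100 each.
    existing_buyers = 300
    print(f"Extra profit from existing buyers: ${existing_buyers * (old_cost - new_cost):,.0f}")

    # A hypothetical, much larger segment willing to pay only $7: unreachable at a
    # $10 cost, but worth $2 of profit per head at a $5 cost.
    new_buyers, willingness = 3000, 7.0
    print(f"Profit from newly reachable buyers: ${new_buyers * (willingness - new_cost):,.0f}")
    # Prints $1,500 and $6,000 respectively.

On these made-up numbers, most of the incentive to cut costs comes from the larger, lower-willingness-to-pay population, which is the point made in the paragraph above.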

I made some minor edits to the TL;DR, replacing "downward-sloping supply curves" with "downward-sloping demand curves" and replacing "technological progress" with "exponential technological progress". Apologies for not having proofread the TL;DR carefully before.

Be comfortable with hypocrisy

30 The_Duck 08 April 2014 10:03AM

Neal Stephenson's The Diamond Age takes place several decades in the future and this conversation is looking back on the present day:

"You know, when I was a young man, hypocrisy was deemed the worst of vices,” Finkle-McGraw said. “It was all because of moral relativism. You see, in that sort of a climate, you are not allowed to criticise others-after all, if there is no absolute right and wrong, then what grounds is there for criticism?" [...]

"Now, this led to a good deal of general frustration, for people are naturally censorious and love nothing better than to criticise others’ shortcomings. And so it was that they seized on hypocrisy and elevated it from a ubiquitous peccadillo into the monarch of all vices. For, you see, even if there is no right and wrong, you can find grounds to criticise another person by contrasting what he has espoused with what he has actually done. In this case, you are not making any judgment whatsoever as to the correctness of his views or the morality of his behaviour-you are merely pointing out that he has said one thing and done another. Virtually all political discourse in the days of my youth was devoted to the ferreting out of hypocrisy." [...]

"We take a somewhat different view of hypocrisy," Finkle-McGraw continued. "In the late-twentieth-century Weltanschauung, a hypocrite was someone who espoused high moral views as part of a planned campaign of deception-he never held these beliefs sincerely and routinely violated them in privacy. Of course, most hypocrites are not like that. Most of the time it's a spirit-is-willing, flesh-is-weak sort of thing."

"That we occasionally violate our own stated moral code," Major Napier said, working it through, "does not imply that we are insincere in espousing that code."

I'm not sure if I agree with this characterization of the current political climate; in any case, that's not the point I'm interested in. I'm also not interested in moral relativism.

But the passage does point out a flaw which I recognize in myself: a preference for consistency over actually doing the right thing. I place a lot of stock--as I think many here do--in self-consistency. After all, clearly any moral code which is inconsistent is wrong. But dismissing a moral code for inconsistency or a person for hypocrisy is lazy. Morality is hard. It's easy to get a warm glow from the nice self-consistency of your own principles and mistake this for actually being right.

Placing too much emphasis on consistency led me to at least one embarrassing failure. I decided that no one who ate meat could be taken seriously when discussing animal rights: killing animals because they taste good seems completely inconsistent with placing any value on their lives. Furthermore, I myself ignored the whole concept of animal rights because I eat meat, and it would therefore be inconsistent for me to assign animals any rights. Consistency between my moral principles and my actions--not being a hypocrite--was more important to me than actually figuring out what the correct moral principles were.

To generalize: holding high moral ideals is going to produce cognitive dissonance when you are not able to live up to those ideals. It is always tempting--for me at least--to resolve this dissonance by backing down from those high ideals. An alternative we might try is to be more comfortable with hypocrisy. 

 

Related: Self-deception: Hypocrisy or Akrasia?

Universal Fire

56 Eliezer_Yudkowsky 27 April 2007 09:15PM

In L. Sprague de Camp's fantasy story The Incomplete Enchanter (which set the mold for the many imitations that followed), the hero, Harold Shea, is transported from our own universe into the universe of Norse mythology.  This world is based on magic rather than technology; so naturally, when Our Hero tries to light a fire with a match brought along from Earth, the match fails to strike.

I realize it was only a fantasy story, but... how do I put this...

No.

continue reading »

The concept of belief and the nature of abstraction

4 common_law 31 March 2014 08:14PM

[Cross-posted.]

Belief, puzzling to philosophy, is part of psychology's conceptual framework. The present essay provides a straightforward yet novel theory of the explanatory and predictive value of describing agents as having beliefs. The theory attributes full-fledged beliefs exclusively to agents with linguistic capacities, but it does so as an empirical matter rather than a priori. Because it treats abstraction as an inherently social practice, the dependence of full-fledged belief on language resolves a philosophical problem about how belief is possible in a world where only concrete particulars exist.

 

The propositional character of belief


It can appear mysterious that the content of epistemic attitudes (belief and opinion) is conveyed by clauses introduced by that: "I believe that the dog is in his house." If beliefs are causes of behavior, our success in denoting them this way gives rise to an apparently insurmountable problem: how do propositions—if they exist at all—exist independently of human conduct, so as to be fit for causally explaining it?

While belief ascriptions figure prominently in many behavioral explanations, their propositional form indicates that they pertain to states of information. My belief that my dog is in his house consists of the reliable use of the information that he's there. Not only will I reply accordingly if asked about his location; I may also use that information in otherwise directing my conduct. If I want the dog to come, I will yell in the direction of his house rather than toward his sofa. Yet I won't always use this information: I might absent-mindedly call to my dog on the sofa despite knowing (hence believing) that he is in his house. Believed information can be mistakenly disregarded.

Belief "that p" is a propensity to take p into rational account when p is relevant to the agent's goals. But taking certain information into account also involves various skills, and it must be facilitated by the appropriate habits. The purposeful availability of believed information is affected not only by skills but also by inhibitions, habits, and desires.

Once beliefs are recognized as propensities to use particular information, what becomes striking is how successfully behavior can be explained, when we know something of an agent's purposes, by reference to the information we can predict the agent will rely on.

Is this successful reliance a unique feature of human cognition? We can use belief ascriptions to describe nonhuman behavior, but we can do the same for machines. The concept of belief, however, isn’t essential to describing nonintelligent machine behavior. When my printer’s light indicates that it is out of paper, I might say it believes it is, particularly if, in fact, the tray is full. But compare it to what is true of me when I run out of paper, where my belief that I have exhausted my supply can explain an indefinitely large set of potential behaviors, from purchasing supplies to postponing work to expressing frustrated rage—in any of an indefinitely large variety of manners. The printer’s “belief” that it is out of paper is expressed in two ways: it refuses to print and a light turns on, and I can refer to these directly, without invoking the concept of belief.

Applying the concept of belief to nonhuman animals is intermediate between applying it to machines and applying it to humans; it can be applied to animals more robustly than to machines. It isn't preposterous to say that a dog believes his bone is buried at a certain location, particularly if it's been removed and he still tries to retrieve it from the old location. What can give us pause about saying the dog believes arises from the severely limited range of conduct that's influenced by the dog's information about the bone's location, as is apparent when the dog fails, except when hungry, to behave territorially toward the bone's burial place.

Humans differ from canines in our capacity to carry the information constituting a belief’s propositional content to indefinitely many contexts. This makes belief indispensable in forecasting human behavior: without it, we could not exploit the predictive power of knowing what information a human agent is likely to rely on in new contexts.

This cross-contextual consistency in the use of information seems to rest on our having language, which permits (but does not compel!) the insertion of old information into new contexts.

 

The social representation of abstractions


Explaining our cross-contextual capacities is the problem (in the theory of knowledge) of how we manage to mentally represent abstractions. In Kripke’s version of Wittgenstein’s private-language argument, the problem is expressed in the dependence of concepts on extensions that are not rule governed. The social consensus engendered by how others apply words provides a standard against which to measure one’s own word usage.

Abstraction relies, ultimately, on the “wisdom of crowds” in achieving the most instrumentally effective segmentations. The source of abstraction—a form of social coordination—lies in our capacity to intuit (but only approximately) how others apply words.

The capacity to grasp the meanings of others' words underlies the fruitfulness of using believed propositions to forecast human behavior. With language we can represent the information that another human agent is also able to represent and can transfer to all manner of contexts. But this linguistic requirement for full-fledged belief does not mean that people's beliefs are always the beliefs they claim (or believe) they have. Language gives us our propositional knowledge about abstract informational states, but that doesn't imply that we have infallible access to those states—obviously not to others' states, but not even to our own. Nor does it follow that nonlinguistic animals can have full-fledged beliefs limited only by concreteness. Nonlinguistic animals lack full-fledged beliefs about even concrete matters, because linguistic representation is the only available means for representing information in a way that allows its introduction into indefinitely varied contexts.

This account relies on a weakened private-language argument to explain abstraction as social consensus. But I reject Wittgenstein’s argument that private language is impossible: we do have propositional states accessible only privately. Wittgenstein’s argument proves too much, as it would impugn also the possibility of linguistic meaning, for which there is no fact of the matter as to how society must extend the meaning to new information. The answer to the strong private-language argument is the propositional structure of perception itself. (See T. Burge, Origins of Objectivity (2010).) What language provides is a consensual standard against which one’s (ultimately idiosyncratic) personal standard can be compared and modified. (Notice that this invokes a dialectic between what I’ve termed “opinion” and “belief.”)

This account of the role of language in abstraction justifies the early 20th-century Russian psychologist Vygotsky's view that abstract thought is fundamentally linguistic.

The ecological rationality of the bad old fallacies

7 velisar 19 March 2014 11:39AM

I think that the community here may include some of the people best qualified to judge a new frame for studying the fallacies of argumentation with some of the instruments that psychologists use. I and my friend Dan Ungureanu, a linguist at Charles University in Prague, could use some help!

I’ll write a brief introduction on the state of argumentation theory first, for context:

There is such a thing as modern argumentation theory. It can be traced back to the fifties, when Perelman and Olbrechts-Tyteca published their New Rhetoric and Toulmin published his The Uses of Argument. The fallacies of argumentation, now somewhat popular in folk argumentation culture, had their turning point when the book Fallacies (Hamblin, 1970) argued that most fallacies are not fallacies at all: they are, most of the time, the reasonable option. Since then, some argumentation schools have taken up Hamblin's challenge and tried to come up with a theory of fallacies. Of these, the informal logic school and pragma-dialectics are the best known. They have even run empirical experiments to test their philosophies.

Another normative approach, summarized here by Kaj Sotala in Fallacies as weak Bayesian evidence, is to compare fallacious arguments with the Bayesian norm (Hahn & Oaksford, 2007; see also, e.g., Harris, Hsu & Madsen, 2012; Oaksford & Hahn, 2013).
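For readers who haven't seen that literature, here is a minimal numerical sketch of the idea in Python (the prior and the likelihoods are invented for illustration and are not taken from Hahn & Oaksford): an "argument from ignorance" such as "no side effects were found, so the drug is safe" is treated as evidence whose strength is its likelihood ratio, and on plausible numbers it comes out weak but positive rather than worthless.

    # Toy Bayesian treatment of an "argument from ignorance":
    # "No side effects were observed, therefore the drug is safe."
    # All numbers are invented for illustration.
    prior_safe = 0.5                 # P(safe) before hearing the argument
    p_nothing_if_safe = 0.8          # P(no side effects observed | safe)
    p_nothing_if_unsafe = 0.5        # P(no side effects observed | unsafe): weak tests miss things
    likelihood_ratio = p_nothing_if_safe / p_nothing_if_unsafe
    posterior_odds = (prior_safe / (1 - prior_safe)) * likelihood_ratio
    posterior_safe = posterior_odds / (1 + posterior_odds)
    print(f"Likelihood ratio: {likelihood_ratio:.2f}")          # 1.60: weak positive evidence
    print(f"P(safe | nothing observed): {posterior_safe:.2f}")  # about 0.62, up from 0.50

On this view the "fallacy" is not a binary error but an argument whose evidential strength depends on how diagnostic the test really was, which is exactly the kind of comparison against a Bayesian norm that the papers above make.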

We cherry-pick a discourse to spot the fallacies in. We realized that a couple of years ago when we had to teach the informal fallacies to journalism master's students: we would pick a text that we disagreed with and then search for fallacies. Dan and I would often come up with different ones for the same paragraph. They are vague. Then we switched to cognitive biases, as possible explanations for some fallacies, but we were still in 'privileging the hypothesis' territory, I would say now, with the benefit of hindsight.

Maybe the word "heuristic" has already sprung to mind for some of you. I've seen this here and elsewhere on the net: fallacies as heuristics. Argumentation theorists only stumbled on this idea recently (Walton, 2010).

Now here's what this whole intro was for: Less Wrong, and Overcoming Bias before it, are sites built on the idea that we can improve our rationality by doing certain things in relation to the now-famous Heuristics & Biases program. The heuristics as defined by Tversky and Kahneman are only marginally useful for assessing the heuristic value of a type of argument that we usually call a fallacy. The heuristic elicitation design is maybe a first step: we can see if we have some form of attribute substitution (we always do, if we take a Bayesian daemon as the benchmark).

We started with the observation that if people generally fall back on some particular activity when they are "lazy", that activity could be a precious hint about human nature. We believe that a) it is far easier to spot a fallacy when you are looking for it, and b) you usually look for it when the topic is interesting, complex, grey: theology, law, politics, health, and the like. If the fallacies of argumentation really are stable and universal behaviors across (at least some) historical time and across cultures, we can see those "fallacies" as rules of thumb that use other, lower-level fast-and-frugal heuristics as solid inference rules in the right ecology. Ecological rationality is a match between the environment and the boundedly rational agent's decision mechanisms (G. Gigerenzer, 1999; V. Smith, 2003).

You can't just invent a norm and then compare the behavior of organisms or artifacts against it. Not even Bayes' rule: the decisions of some organisms can be expected to be Bayesian only in their natural environment (E. T. Jaynes observed this). That is why we need a computational theory of people even when we study arguments: there is no psychology which isn't evolutionary psychology. We need to know the function; but calling something a fallacy is a judgment of valence, so people traditionally ask why we are so narrow or stupid, or, more recently, when the fallacies are irrational and when they are not. (No, we don't want to restart the 1996 polemic between Gigerenzer and Tversky & Kahneman!)

Well, that is what we think, anyway. And if you spot a big flaw, please point it out to us before we send our paper to a journal.

Here’s the draft of our paper:

https://www.academia.edu/6271737/The_Ecological_Rationality_of_Argumentation_Fallacies

 

Thanks

Meetup : Detroit/Ann Arbor - Memory Workshop

4 Yvain 17 March 2014 11:45PM

Discussion article for the meetup : Detroit/Ann Arbor - Memory Workshop

WHEN: 23 March 2014 01:00:00PM (-0400)

WHERE: 19334 Angling Street, Livonia, MI

Brienne from CFAR will be coming all the way from California to teach a couple-hour workshop on memory and mnemonic techniques, with some application to productivity hacking as well. Same location as usual. Donations appreciated.

Additional special guest Robby Bensinger of http://nothingismere.com/ (user: RobbBB)


On Irrational Theory of Identity

14 SilentCal 19 March 2014 12:06AM

Meet Alice. Alice alieves that losing consciousness causes discontinuity of identity.

 

Alice has a good job. Every payday, she takes her salary and enjoys herself in a reasonable way for her means--maybe going to a restaurant, maybe seeing a movie, normal things. And in the evening, she sits down and does her best to calculate the optimal utilitarian distribution of her remaining paycheck, sending most to the charities she determines most worthy and reserving just enough to keep tomorrow-Alice and her successors fed, clothed and sheltered enough to earn effectively. On the following days, she makes fairly normal tradeoffs between things like hard work and break-taking, maybe a bit on the indulgent side.

 

Occasionally her friend Bob talks to her about her strange theory of identity. 

 

"Don't you ever wish you had left yourself more of your paycheck?" he once asked.

"I can't remember any of me ever thinking that." Alice replied. "I guess it'd be nice, but I might as well wish yesterday's Bill Gates had sent me his paycheck."

 

Another time, Bob posed the question, "Right now, you allocate yourself enough to survive with the (true) justification that that's a good investment of your funds. But what if that ever ceases to be true?"

Alice responded, "When me's have made their allocations, they haven't felt any particular fondness for their successors. I know that's hard to believe from your perspective, but it was years after past me's started this procedure that Hypothetical University published the retrospective optimal self-investment rates for effective altruism. It turned out that Alices' decisions had tracked the optimal rates remarkably well if you disregard as income the extra money the deciding Alices spent on themselves.

"So me's really do make this decision objectively. And I know it sounds chilling to you, but when Alice ceases to be a good investment, that future Alice won't make it. She won't feel it as a grand sacrifice, either. Last week's Alice didn't have to exert willpower when she cut the food budget based on new nutritional evidence."

 

"Look," Bob said on a third occasion, "your theory of identity makes no sense. You should either ignore identity entirely and become a complete maximizing utilitarian, or else realize the myriad reasons why uninterrupted consciousness is a silly measure of identity."

"I'm not a perfect altruist, and becoming one wouldn't be any easier for me than it would be for you," Alice replied. "And I know the arguments against the uninterrupted-consciousness theory of identity, and they're definitely correct. But I don't alieve a word of it."

"Have you actually tried to internalize them?"

"No. Why should I? The Alice sequence is more effectively altruistic this way. We donate significantly more than HU's published average for people of similar intelligence, conscientiousness, and other relevant traits."

"Hmm," said Bob. "I don't want to make allegations about your motives-"

"You don't have to," Alice interrupted. "The altruism thing is totally a rationalization. My actual motives are the usual bad ones. There's status quo bias, there's the desire not to admit I'm wrong, and there's the fact that I've come to identify with my theory of identity.

"I know the gains to the total Alice-utility would easily overwhelm the costs if I switched to normal identity-theory, but I don't alieve those gains will be mine, so they don't motivate me. If it would be better for the world overall, or even neutral for the world and better for properly-defined-Alice, I would at least try to change my mind. But it would be worse for the world, so why should I bother?"

 

.

 

.

 

If you wish to ponder Alice's position with relative objectivity before I link it to something less esoteric, please do so before continuing.

 

.

 

.

 

.

 

Bob thought a lot about this last conversation. For a long time, he had had no answer when his friend Carrie asked him why he didn't sign up for cryonics. He didn't buy any of the usual counterarguments--when he ran the numbers, even with the most conservative estimates he considered reasonable, a membership was a huge increase in Bob-utility. But the thought of a Bob waking up some time in the future to have another life just didn't motivate him. He believed that future-Bob would be him, that an uploaded Bob would be him, that any computation similar enough to his mind would be him. But evidently he didn't alieve it. And he knew that he was terribly afraid of having to explain to people that he had signed up for cryonics.

So he had felt guilty for not paying the easily-affordable costs of immortality, knowing deep down that he was wrong, and that social anxiety was probably preventing him from changing his mind. But as he thought about Alice's answer, he thought about his financial habits and realized that a large percentage of the cryonics costs would ultimately come out of his lifetime charitable contributions. This would be a much greater loss to total utility than the gain from Bob's survival and resurrection.

He realized that, like Alice, he was acting suboptimally for his own utility but in such a way as to make the world better overall. Was he wrong for not making an effort to 'correct' himself?

 

Does Carrie have anything to say about this argument?

Channel factors

17 benkuhn 12 March 2014 04:52AM

Or, “how not to make a fundamental attribution error on yourself;” or, “how to do that thing that you keep being frustrated at yourself for not doing;” or, “finding and solving trivial but leveraged inconveniences.”

continue reading »

How to Study Unsafe AGI's safely (and why we might have no choice)

10 Punoxysm 07 March 2014 07:24AM

TL;DR

A serious possibility is that the first AGI(s) will be developed in a Manhattan Project style setting before any sort of friendliness/safety constraints can be integrated reliably. They will also be substantially short of the intelligence required to exponentially self-improve. Within a certain range of development and intelligence, containment protocols can make them safe to interact with. This means they can be studied experimentally, and the architecture(s) used to create them better understood, furthering the goal of safely using AI in less constrained settings.

Setting the Scene

The year is 2040, and in the last decade a series of breakthroughs in neuroscience, cognitive science, machine learning, and computer hardware have put the long-held dream of a human-level artificial intelligence within our grasp. Following the wild commercial success of lifelike robotic pets, the integration of AI assistants and concierges into everyday work and leisure, and STUDYBOT's graduation from Harvard's online degree program with an octuple major and full honors, DARPA, the NSF, and the European Research Council have announced joint funding of an artificial intelligence program that will create a superhuman intelligence in 3 years.

Safety was announced as a critical element of the project, especially in light of the self-modifying LeakrVirus that catastrophically disrupted markets in '36 and '37. The planned protocols have not been made public, but it seems they will be centered on traditional computer security rather than on techniques from the nascent field of Provably Safe AI, which were deemed impossible to integrate on the current project timeline.

Technological and/or political issues could force the development of AI without the theoretical safety guarantees we'd certainly like, but there is a silver lining

A lot of the discussion around LessWrong and MIRI that I've seen (and I haven't seen all of it, please send links!) seems to focus very strongly on the situation of an AI that can self-modify or construct further AIs, resulting in an exponential explosion of intelligence (FOOM/Singularity). The focus of FAI work is on finding an architecture that can be explicitly constrained (and a constraint set that won't fail to do what we desire).

My argument is essentially that there could be a critical multi-year period preceding any possible exponentially self-improving intelligence, during which a series of AGIs of varying intelligence, flexibility, and architecture will be built. This period will be fast and frantic, but it will be incredibly fruitful and vital, both in figuring out how to make an AI strong enough to exponentially self-improve and in figuring out how to make it safe and friendly (or in developing protocols to bridge the even riskier period between when we can develop FOOM-capable AIs and when we can ensure their safety).

I'll break this post into three parts.
  1. First, why is a substantial period of proto-singularity more likely than a straight-to-singularity situation?
  2. Second, what strategies will be critical to developing, controlling, and learning from these pre-FOOM AIs?
  3. Third, what are the political challenges that will develop immediately before and during this period?
Why is a proto-singularity likely?

The requirement for a hard singularity, an exponentially self-improving AI, is that the AI can substantially improve itself in a way that enhances its ability to further improve itself, which requires the ability to modify its own code; access to resources like time, data, and hardware to facilitate these modifications; and the intelligence to execute a fruitful self-modification strategy.

The first two conditions can (and should) be directly restricted. I'll elaborate more on that later, but basically any AI should be very carefully sandboxed (unable to affect its software environment), and its access to resources should be strictly controlled. Perhaps no data goes in without human approval or while the AI is running. Perhaps nothing comes out either. Even a hyperpersuasive hyperintelligence will be slowed down (at least) if it can only interact with prespecified tests (how do you test AGI? No idea, but it shouldn't be harder than friendliness). This isn't a perfect situation. Eliezer Yudkowsky presents several arguments for why an intelligence explosion could happen even when resources are constrained (see Section 3 of Intelligence Explosion Microeconomics), not to mention ways that those constraints could be defied even if engineered perfectly (by the way, I would happily run the AI box experiment with anybody; I think it is absurd that anyone would fail it! [I've read Tuxedage's accounts, and I think I actually do understand how a gatekeeper could fail, but I also believe I understand how one could be trained to succeed even against a much stronger foe than any person who has played the part of the AI]).
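As a toy sketch of the "nothing goes in or out without human approval" idea (the class and method names below are hypothetical, and a real containment protocol would be enforced below the level of application code, but the gating logic would look roughly like this):

    # Hypothetical sketch of a human-gated transfer channel for a sandboxed AI.
    # Nothing crosses the boundary unless a human reviewer explicitly releases it.
    class GatedChannel:
        def __init__(self):
            self.pending = []    # (direction, payload) pairs awaiting human review
            self.released = []   # transfers a human has explicitly approved

        def request_transfer(self, direction: str, payload: bytes) -> None:
            """Queue a transfer ("in" or "out"); nothing moves until reviewed."""
            self.pending.append((direction, payload))

        def human_review(self, approved_indices: set[int]) -> None:
            """Release only the items a human explicitly approved; drop the rest."""
            self.released.extend(item for i, item in enumerate(self.pending)
                                 if i in approved_indices)
            self.pending.clear()

The point of the sketch is only that the default is "nothing moves": every transfer requires an affirmative human decision, rather than the human having to veto.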

But the third condition emerges from the way technology typically develops. I believe it is incredibly unlikely that an AGI will be developed in somebody's basement, or even in a small national lab or top corporate lab. When there is no clear notion of what a technology will look like, it is usually not developed. Positive, productive accidents are somewhat rare in science, but they are remarkably rare in engineering (please, give counterexamples!). The creation of an AGI will likely not happen by accident; there will be a well-funded, concrete research and development plan that leads up to it. An AI Manhattan Project like the one described above. But even when a good plan is successfully executed, prototypes are slow, fragile, and poor-quality compared to what is possible even with approaches using the same underlying technology. It seems very likely to me that the first AGI will be a Chicago Pile, not a Trinity: recognizably a breakthrough, but with proper consideration not immediately dangerous or unmanageable. [Note: you don't have to believe this to read the rest of the post. If you disagree, consider the virtues of redundancy and the question of what safety measures an AI development effort should implement if it can't be persuaded to delay long enough for theoretically sound methods to become available.]

A Manhattan Project-style effort makes a relatively weak, controllable AI even more likely: not only can such a project implement substantial safety protocols that are researched explicitly in parallel with primary development, but the total resources, in hardware and brainpower, devoted to the AI will be much greater than those of a smaller project, setting a correspondingly higher bar that the AGI thus created must clear in order to self-modify exponentially and also break the security procedures.

Strategies to handle AIs in the proto-Singularity, and why they're important

First, take a look at the External Constraints section of this MIRI report and/or this article on AI Boxing. I will be talking mainly about these approaches. There are certainly others, but these are the easiest to extrapolate from current computer security.

These AIs will provide us with the experimental knowledge to better handle the construction of even stronger AIs. If careful, we will be able to use these proto-Singularity AIs to learn about the nature of intelligence and cognition, to perform economically valuable tasks, and to test theories of friendliness (not perfectly, but well enough to start). 

"If careful" is the key phrase. I mentioned sandboxing above. And computer security is key to any attempt to contain an AI. Monitoring the source code, and setting a threshold for too much changing too fast at which point a failsafe freezes all computation; keeping extremely strict control over copies of the source. Some architectures will be more inherently dangerous and less predictable than others. A simulation of a physical brain, for instance, will be fairly opaque (depending on how far neuroscience has gone) but could have almost no potential to self-improve to an uncontrollable degree if its access to hardware is limited (it won't be able to make itself much more efficient on fixed resources). Other architectures will have other properties. Some will be utility optimizing agents. Some will have behaviors but no clear utility. Some will be opaque, some transparent.

All will have a theory of how they operate, which can be refined by actual experimentation. This is what we can gain! We can set up controlled scenarios like honeypots to catch malevolence. We can evaluate our ability to monitor and read the thoughts of the AGI. We can develop stronger theories of how damaging self-modification actually is to imposed constraints. We can test our ability to add constraints to even the base state. But do I really have to justify the value of experimentation?

I am familiar with criticisms based on absolutely incomprehensibly perceptive and persuasive hyperintelligences being able to overcome any security, but I've tried to outline above why I don't think we'd be dealing with that case.

Political issues

Right now AGI is really a political non-issue: blue-sky even compared to space exploration and fusion, both of which actually receive substantial government funding. I think that this will change in the period immediately leading up to my hypothesized AI Manhattan Project. The AI Manhattan Project can only happen with a lot of political will behind it, which will probably mean a spiral of scientific advancements, hype, and the threat of competition from external unfriendly sources. Think space race.

So suppose that the first few AIs are built under well controlled conditions. Friendliness is still not perfected, but we think/hope we've learned some valuable basics. But now people want to use the AIs for something. So what should be done at this point?

I won't try to speculate about what happens next (well, you could probably persuade me to, but it might not be as valuable), beyond extensions of the protocols I've already laid out, hybridized with notions like Oracle AI. It certainly gets a lot harder, but hopefully experimentation on the first, highly controlled generation of AIs to better understand their architectural fundamentals, combined with more direct research on friendliness in general, would provide the groundwork for this.
