Assessing Kurzweil: the results
Predictions of the future rely, to a much greater extent than in most fields, on the personal judgement of the expert making them. Just one problem - personal expert judgement generally sucks, especially when the experts don't receive immediate feedback on their hits and misses. Formal models perform better than experts, but when talking about unprecedented future events such as nanotechnology or AI, the choice of model is itself dependent on expert judgement.
Ray Kurzweil has a model of technological intelligence development where, broadly speaking, evolution, pre-computer technological development, post-computer technological development and future AIs all fit into the same exponential increase. When assessing the validity of that model, we could look at Kurzweil's credentials, and maybe compare them with those of his critics - but Kurzweil has given us something even better than credentials, and that's a track record. In various books, he's made predictions about what would happen in 2009, and we're now in a position to judge their accuracy. I haven't been satisfied by the various accuracy ratings I've found online, so I decided to do my own assessments.
I first selected ten of Kurzweil's predictions at random and gave my own estimate of their accuracy. I found that five were to some extent true, four were to some extent false, and one was unclassifiable.
But of course, relying on a single assessor is unreliable, especially when some of the judgements are subjective. So I put out a call for volunteer assessors. Meanwhile Malo Bourgon set up a separate assessment on Youtopia, harnessing the awesome power of altruists chasing after points.
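With several people rating the same predictions, one can check how much the assessors actually agree with each other before pooling their verdicts. A minimal sketch (the ratings below are hypothetical, not the actual assessment data) computing Cohen's kappa, a standard chance-corrected agreement statistic for two raters:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters were independent, from their marginals.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Hypothetical verdicts: "T" = true, "F" = false, "U" = unclassifiable.
a = ["T", "T", "F", "U", "T", "F", "F", "T", "T", "F"]
b = ["T", "F", "F", "U", "T", "F", "T", "T", "T", "F"]
print(round(cohens_kappa(a, b), 3))
```

A kappa near 1 indicates near-perfect agreement; a value near 0 means the raters agree no more often than chance, which would suggest the prediction wording is too ambiguous to assess reliably.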
The results are now in, and they are fascinating. They are...
Thoughts on the Singularity Institute (SI)
This post presents thoughts on the Singularity Institute from Holden Karnofsky, Co-Executive Director of GiveWell. Note: Luke Muehlhauser, the Executive Director of the Singularity Institute, reviewed a draft of this post, and commented: "I do generally agree that your complaints are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI). I am working to address both categories of issues." I take Luke's comment to be a significant mark in SI's favor, because it indicates an explicit recognition of the problems I raise, and thus increases my estimate of the likelihood that SI will work to address them.
September 2012 update: responses have been posted by Luke and Eliezer (and I have responded in the comments of their posts). I have also added acknowledgements.
The Singularity Institute (SI) is a charity that GiveWell has been repeatedly asked to evaluate. In the past, SI has been outside our scope (as we were focused on specific areas such as international aid). With GiveWell Labs we are open to any giving opportunity, no matter what form and what sector, but we still do not currently plan to recommend SI; given the amount of interest some of our audience has expressed, I feel it is important to explain why. Our views, of course, remain open to change. (Note: I am posting this only to Less Wrong, not to the GiveWell Blog, because I believe that everyone who would be interested in this post will see it here.)
I am currently the GiveWell staff member who has put the most time and effort into engaging with and evaluating SI. Other GiveWell staff currently agree with my bottom-line view that we should not recommend SI, but this does not mean they have engaged with each of my specific arguments. Therefore, while the lack of recommendation of SI is something that GiveWell stands behind, the specific arguments in this post should be attributed only to me, not to GiveWell.
Summary of my views
- The argument advanced by SI for why the work it's doing is beneficial and important seems both wrong and poorly argued to me. My sense at the moment is that the arguments SI is making would, if accepted, increase rather than decrease the risk of an AI-related catastrophe. More
- SI has, or has had, multiple properties that I associate with ineffective organizations, and I do not see any specific evidence that its personnel/organization are well-suited to the tasks it has set for itself. More
- A common argument for giving to SI is that "even an infinitesimal chance that it is right" would be sufficient given the stakes. I have written previously about why I reject this reasoning; in addition, prominent SI representatives seem to reject this particular argument as well (i.e., they believe that one should support SI only if one believes it is a strong organization making strong arguments). More
- My sense is that at this point, given SI's current financial state, withholding funds from SI is likely better for its mission than donating to it. (I would not take this view to the furthest extreme; the argument that SI should have some funding seems stronger to me than the argument that it should have as much as it currently has.)
- I find existential risk reduction to be a fairly promising area for philanthropy, and plan to investigate it further. More
- There are many things that could happen that would cause me to revise my view on SI. However, I do not plan to respond to all comment responses to this post. (Given the volume of responses we may receive, I may not be able to even read all the comments on this post.) I do not believe these two statements are inconsistent, and I lay out paths for getting me to change my mind that are likely to work better than posting comments. (Of course I encourage people to post comments; I'm just noting in advance that this action, alone, doesn't guarantee that I will consider your argument.) More
Intent of this post
I did not write this post with the purpose of "hurting" SI. Rather, I wrote it in the hopes that one of these three things (or some combination) will happen:
- New arguments are raised that cause me to change my mind and recognize SI as an outstanding giving opportunity. If this happens I will likely attempt to raise more money for SI (most likely by discussing it with other GiveWell staff and collectively considering a GiveWell Labs recommendation).
- SI concedes that my objections are valid and increases its determination to address them. A few years from now, SI is a better organization and more effective in its mission.
- SI can't or won't make changes, and SI's supporters feel my objections are valid, so SI loses some support, freeing up resources for other approaches to doing good.
Which one of these occurs will hopefully be driven primarily by the merits of the different arguments raised. Because of this, I think that whatever happens as a result of my post will be positive for SI's mission, whether or not it is positive for SI as an organization. I believe that most of SI's supporters and advocates care more about the former than about the latter, and that this attitude is far too rare in the nonprofit world.
Prediction is hard, especially of medicine
Summary: medical progress has been much slower than even recently predicted.
In the February and March 1988 issues of Cryonics, Mike Darwin (Wikipedia/LessWrong) and Steve Harris published a two-part article “The Future of Medicine” attempting to forecast the medical state of the art for 2008. Darwin has republished it on the New_Cryonet email list.
Darwin is a pretty savvy forecaster (you will remember him correctly predicting ALCOR’s recent troubles with grandfathering back in 1981, in “The High Cost of Cryonics”/part 2), so given my standing interest in tracking predictions, I read it with great interest; but he and Harris still blew most of their predictions, and not the ones we would have preferred them to.
The full essay is ~10k words, so I will excerpt roughly half of it below; feel free to skip to the reactions section and other links.
Rational Home Buying
My parents are considering moving house. I've had a front-row seat for their decision process as they compare alternatives, and sometimes it isn't pretty.
A new house is one of the most important purchases most people will make. Because of the sums involved, the usual pitfalls of decision-making gain new importance, and it becomes especially important to make sure you're thinking rationally. Research in a couple of fields, most importantly positive psychology, offers some potentially helpful tips.
SIAI - An Examination
12/13/2011 - A 2011 update with data from the 2010 fiscal year is in progress. Should be done by the end of the week or sooner.
Disclaimer
- I am not affiliated with the Singularity Institute for Artificial Intelligence.
- I have not donated to the SIAI prior to writing this.
- I made this pledge prior to writing this document.
Notes
- Images are now hosted on LessWrong.com.
- The 2010 Form 990 data will be available later this month.
- It is not my intent to propagate misinformation. Errors will be corrected as soon as they are identified.
Introduction
Acting on gwern's suggestion in his Girl Scout Cookie analysis, I decided to look at SIAI funding. After reading about the Visiting Fellows Program and more recently the Rationality Boot Camp, I decided that the SIAI might be something I would want to support. I am concerned with existential risk and grapple with the utility implications. I feel that I should do more.
On the mini-boot camp page, I pledged to donate enough to send someone to rationality mini-boot camp. This seemed to me a small cost for the potential benefit. The SIAI might get better at building rationalists. It might build a rationalist who goes on to solve a problem. Should I donate more? I wasn’t sure. I read gwern’s article and realized that I could easily get more information to clarify my thinking.
So I downloaded the SIAI’s Form 990 annual IRS filings and started to write down notes in a spreadsheet. As I gathered data and compared it to my expectations and my goals, my beliefs changed. I now believe that donating to the SIAI is valuable. I cannot hide this belief in my writing. I simply have it.
My goal is not to convince you to donate to the SIAI. My goal is to provide you with information necessary for you to determine for yourself whether or not you should donate to the SIAI. Or, if not that, to provide you with some direction so that you can continue your investigation.
Towards a Bay Area Less Wrong Community
Follow up to: Less Wrong NYC
Tl;dr: Two new regular weekly meetups in the Bay Area: In the Berkeley Starbucks on Wednesdays at 7pm (host Lucas Sloan), and in Tortuga (in Mountain View) on Thursdays at 7pm (hosts Shannon Friedman and Divia Melwani). New Google Group for the whole Bay Area, all welcome to join.
Hi everyone in the (San Francisco) Bay Area. I'm Lucas Sloan and I've been organizing LW meetups in Berkeley for about 8 months now. I think we've accomplished great things in that time: last week's meetup had about 40 people show up, a number beyond my wildest dreams when I held my first meetup and 7 people showed up. As good as things are, I've been spending a lot of time thinking about how we can do even better in the future. The main catalyst in my thinking has been the accounts I've heard over the last two months from people who've visited the New York Less Wrong group, and the amazingly positive reactions people have had to their accomplishments. Now that Cosmos has written a post describing what he sees as their successes, this is an excellent time to start a discussion about the future of the Bay Area Less Wrong group, and how to make it awesome.
The main thing the New York group has that I want for the Bay Area group is a sense of being a close-knit community of like-minded friends. At a Berkeley meetup we get into all sorts of very interesting conversations with our fellow rationalists, but I don't feel a personal connection with most of the people who come, even those I've seen at many meetups - I am friendly with everyone who comes, but I am not friends with everyone who comes. I see two things that contribute to this problem (though I'm sure there are more): the size of meetups, and their frequency. The large size makes it impossible to establish rapport with everyone, because there is no way to have a good conversation with 40 other people in 4 hours. Even more insidious, the large size makes it hard to establish rapport with even a subset of attendees - the group of 40 splits into 10 groups of 4, and everyone keeps churning between conversations as their interest waxes and wanes. The first meetup I held, with only 7 people, was socially fulfilling in a way that recent ones simply haven't been: everyone was participating in the same conversation, and everyone was getting to know everyone else. As for frequency, it's hard to become friends with people you only interact with once a month - you can easily forget a person in a month, and the format encourages talking about high-minded "rational" topics, not the personal small talk that forms the basis of friendship.
Less Wrong Rationality and Mainstream Philosophy
Part of the sequence: Rationality and Philosophy
Despite Yudkowsky's distaste for mainstream philosophy, Less Wrong is largely a philosophy blog. Major topics include epistemology, philosophy of language, free will, metaphysics, metaethics, normative ethics, machine ethics, axiology, philosophy of mind, and more.
Moreover, standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century. That movement is sometimes called "Quinean naturalism" after Harvard's W.V. Quine, who articulated the Less Wrong approach to philosophy in the 1960s. Quine was one of the most influential philosophers of the last 200 years, so I'm not talking about an obscure movement in philosophy.
Let us survey the connections. Quine thought that philosophy was continuous with science - and where it wasn't, it was bad philosophy. He embraced empiricism and reductionism. He rejected the notion of libertarian free will. He regarded postmodernism as sophistry. Like Wittgenstein and Yudkowsky, Quine didn't try to straightforwardly solve traditional Big Questions as much as he either dissolved those questions or reframed them such that they could be solved. He dismissed endless semantic arguments about the meaning of vague terms like knowledge. He rejected a priori knowledge. He rejected the notion of privileged philosophical insight: knowledge comes from ordinary knowledge, as best refined by science. Eliezer once said that philosophy should be about cognitive science, and Quine would agree. Quine famously wrote:
The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology?
But isn't this using science to justify science? Isn't that circular? Not quite, say Quine and Yudkowsky. It is merely "reflecting on your mind's degree of trustworthiness, using your current mind as opposed to something else." Luckily, the brain is the lens that sees its flaws. And thus, says Quine:
Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.
Yudkowsky once wrote, "If there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it."
When I read that I thought: What? That's Quinean naturalism! That's Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!
Taking Ideas Seriously
I, the author, no longer endorse this post.
Abstrummary: I describe a central technique of epistemic rationality that bears directly on instrumental rationality, and that I do not believe has been explicitly discussed on Less Wrong before. The technique is rather simple: it is the practice of taking ideas seriously. I also present the simple metaphor of an 'interconnected web of belief nodes' (like a Bayesian network) to describe what it means to take an idea seriously: it is to update a belief and then accurately and completely propagate that update through the entire web of beliefs in which it is embedded. I then give a few examples of ideas to take seriously, followed by reasons to take ideas seriously and what bad things happen if you don't (or society doesn't). I end with a few questions for Less Wrong.
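The "web of belief nodes" metaphor can be made concrete with a toy sketch: each node's credence is a function of its parents' credences, and updating one node means recomputing everything downstream of it. The node names and update rules below are illustrative inventions, not anything from the post, and this is a crude caricature of real Bayesian-network inference:

```python
def propagate(credences, children, update_rule, changed):
    """Recompute, breadth-first, every node downstream of `changed`."""
    queue = [changed]
    while queue:
        node = queue.pop(0)
        for child in children.get(node, []):
            credences[child] = update_rule[child](credences)
            queue.append(child)
    return credences

# A hypothetical three-node web: belief in cryonics working feeds into
# whether to sign up, which feeds into whether to budget for it.
children = {"cryonics_works": ["sign_up"], "sign_up": ["budget_for_it"]}
update_rule = {
    "sign_up": lambda c: 0.9 * c["cryonics_works"],
    "budget_for_it": lambda c: c["sign_up"],
}
credences = {"cryonics_works": 0.05, "sign_up": 0.045, "budget_for_it": 0.045}

credences["cryonics_works"] = 0.3   # new evidence shifts the root belief
propagate(credences, children, update_rule, "cryonics_works")
print(credences)                     # downstream beliefs moved too
```

The point of the metaphor is the failure mode this sketch rules out: updating the root node while leaving the downstream credences at their stale values, which is exactly what "not taking an idea seriously" looks like.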
Religion's Claim to be Non-Disprovable
The earliest account I know of a scientific experiment is, ironically, the story of Elijah and the priests of Baal.
The people of Israel are wavering between Jehovah and Baal, so Elijah announces that he will conduct an experiment to settle it - quite a novel concept in those days! The priests of Baal will place their bull on an altar, and Elijah will place Jehovah's bull on an altar, but neither will be allowed to start the fire; whichever God is real will call down fire on His sacrifice. The priests of Baal serve as a control group for Elijah - the same wooden fuel, the same bull, and the same priests making invocations, but to a false god. Then Elijah pours water on his altar - ruining the experimental symmetry, but this was back in the early days - to signify deliberate acceptance of the burden of proof, like needing a 0.05 significance level. The fire comes down on Elijah's altar, which is the experimental observation. The watching people of Israel shout "The Lord is God!" - peer review.
And then the people haul the 450 priests of Baal down to the river Kishon and slit their throats. This is stern, but necessary. You must firmly discard the falsified hypothesis, and do so swiftly, before it can generate excuses to protect itself. If the priests of Baal are allowed to survive, they will start babbling about how religion is a separate magisterium which can be neither proven nor disproven.
My Kind of Reflection
Followup to: Where Recursive Justification Hits Bottom
In "Where Recursive Justification Hits Bottom", I concluded that it's okay to use induction to reason about the probability that induction will work in the future, given that it's worked in the past; or to use Occam's Razor to conclude that the simplest explanation for why Occam's Razor works is that the universe itself is fundamentally simple.
Now I am far from the first person to consider reflective application of reasoning principles. Chris Hibbert compared my view to Bartley's Pan-Critical Rationalism (I was wondering whether that would happen). So it seems worthwhile to state what I see as the distinguishing features of my view of reflection, which may or may not happen to be shared by any other philosopher's view of reflection.
• All of my philosophy here actually comes from trying to figure out how to build a self-modifying AI that applies its own reasoning principles to itself in the process of rewriting its own source code. So whenever I talk about using induction to license induction, I'm really thinking about an inductive AI considering a rewrite of the part of itself that performs induction. If you wouldn't want the AI to rewrite its source code to not use induction, your philosophy had better not label induction as unjustifiable.
• One of the most powerful general principles I know for AI is that the true Way generally turns out to be naturalistic—which for reflective reasoning means treating the transistors inside the AI just as if they were transistors found in the environment, not as an ad-hoc special case. This is the real source of my insistence in "Recursive Justification" that questions like "How well does my version of Occam's Razor work?" should be considered just like an ordinary question—or at least an ordinary very deep question. I strongly suspect that a correctly built AI, in pondering modifications to the part of its source code that implements Occamian reasoning, will not have to do anything special as it ponders—in particular, it shouldn't have to make a special effort to avoid using Occamian reasoning.