
So You Want to Save the World

41 lukeprog 01 January 2012 07:39AM

This post is very out-of-date. See MIRI's research page for the current research agenda.

So you want to save the world. As it turns out, the world cannot be saved by caped crusaders with great strength and the power of flight. No, the world must be saved by mathematicians, computer scientists, and philosophers.

This is because the creation of machine superintelligence this century will determine the future of our planet, and in order for this "technological Singularity" to go well for us, we need to solve a particular set of technical problems in mathematics, computer science, and philosophy before the Singularity happens.

The best way for most people to save the world is to donate to an organization working to solve these problems, an organization like the Singularity Institute or the Future of Humanity Institute.

Don't underestimate the importance of donation. You can do more good as a philanthropic banker than as a charity worker or researcher.

But if you are a capable researcher, then you may also be able to contribute by working directly on one or more of the open problems humanity needs to solve. If so, read on...


Prediction is hard, especially of medicine

47 gwern 23 December 2011 08:34PM

Summary: medical progress has been much slower than even recently predicted.

In the February and March 1988 issues of Cryonics, Mike Darwin (Wikipedia/LessWrong) and Steve Harris published a two-part article “The Future of Medicine” attempting to forecast the medical state of the art for 2008. Darwin has republished it on the New_Cryonet email list.

Darwin is a pretty savvy forecaster (you may remember him correctly predicting, in 1981's "The High Cost of Cryonics"/part 2, ALCOR's recent troubles with grandfathering), so given my standing interest in tracking predictions, I read it with great interest; but he and Harris still blew most of their predictions, and not the ones we would have preferred them to miss.

The full essay is ~10k words, so I will excerpt roughly half of it below; feel free to skip to the reactions section and other links.


$100 off for Less Wrong: Singularity Summit 2011 on Oct 15 - 16 in New York

6 Louie 28 September 2011 02:01AM

There's still time to register for the Singularity Summit in New York, but hurry: only a few weeks remain!

Register now so you can meet Eliezer, AnnaSalamon, Lukeprog, and more! I'm particularly excited about several invited speakers, such as neuroscientist Christof Koch (whom I've blogged about here) and author Sharon Bertsch McGrayne, who recently published The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy.

Register now and use the $100 off Less Wrong discount code: LW2011. Hope to see you next month in New York!

Pictured speakers: Ray Kurzweil, Eliezer Yudkowsky, Anna Salamon, Luke Muehlhauser.

 


Safety Culture and the Marginal Effect of a Dollar

23 jimrandomh 09 June 2011 03:59AM

We spent an evening at last week's Rationality Minicamp brainstorming strategies for reducing existential risk from Unfriendly AI, and for estimating their marginal benefit-per-dollar. To summarize the issue briefly, there is a lot of research into artificial general intelligence (AGI) going on, but very few AI researchers take safety seriously; if someone succeeds in making an AGI, but they don't take safety seriously or they aren't careful enough, then it might become very powerful very quickly and be a threat to humanity. The best way to prevent this from happening is to promote a safety culture - that is, to convince as many artificial intelligence researchers as possible to think about safety so that if they make a breakthrough, they won't do something stupid.

We came up with a concrete (albeit greatly oversimplified) model which suggests that the marginal reduction in existential risk per dollar, when pursuing this strategy, is extremely high. The model is this: assume that if an AI is created, it's because one researcher, chosen at random from the pool of all researchers, has the key insight; and humanity survives if and only if that researcher is careful and takes safety seriously. In this model, the goal is to convince as many researchers as possible to take safety seriously. So the question is: how many researchers can we convince, per dollar? Some people are very easy to convince - some blog posts are enough. Those people are convinced already. Some people are very hard to convince - they won't take safety seriously unless someone who really cares about it will be their friend for years. In between, there are a lot of people who are currently unconvinced, but would be convinced if there were lots of good research papers about safety in machine learning and computer science journals, by lots of different authors.
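
To make the toy model concrete, here is a minimal Monte Carlo sketch of it; the function, its arguments, and the numbers plugged in are illustrative assumptions of mine, not figures from the post.

    import random

    # Toy model from above: one researcher, chosen uniformly at random, has the
    # key insight; humanity survives iff that researcher takes safety seriously.
    def p_survival(num_researchers, num_convinced, trials=100_000):
        survived = 0
        for _ in range(trials):
            key_researcher = random.randrange(num_researchers)
            if key_researcher < num_convinced:  # the convinced researchers come first
                survived += 1
        return survived / trials

    # With 1000 researchers of whom 300 take safety seriously, survival probability
    # comes out around 0.3, and each extra researcher convinced adds about 0.001.
    print(p_survival(1000, 300))

Analytically the answer is just num_convinced / num_researchers, which is why, under this model, every additional researcher convinced buys the same fixed increment of survival probability.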

Right now, those articles don't exist; we need to write them. And it turns out that neither the Singularity Institute nor any other organization has the resources - staff, expertise, and money to hire grad students - to produce very much research or to substantially alter the research culture. We are very far from the realm of diminishing returns. Let's make this model quantitative.

Let A be the probability that an AI will be created; let R be the fraction of researchers that would be convinced to take safety seriously if there were 100 good papers about it in the right journals; and let C be the cost of one really good research paper. Then the marginal reduction in existential risk per dollar is A*R/(100*C). The total cost of a grad-student-year (including recruiting, management and other expenses) is about $100k. Estimate a 10% current AI risk, and estimate that 30% of researchers currently don't take safety seriously but would be convinced. That gives us a marginal existential risk reduction per dollar of 0.1*0.3/(100*100k) = 3*10^-9. Counting only the ~7 billion people alive today, and not any of the people who will be born in the future, this comes to a little over twenty expected lives saved per dollar.
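
The same numbers, written out as a few lines of code; everything here just restates the assumed inputs from the paragraph above.

    # Assumed inputs from the paragraph above.
    ai_risk = 0.10                # A: probability that an AI is created
    convincible_fraction = 0.30   # R: fraction of researchers swayed by ~100 good papers
    papers_needed = 100
    cost_per_paper = 100_000      # C: about one grad-student-year, in dollars

    risk_reduction_per_dollar = ai_risk * convincible_fraction / (papers_needed * cost_per_paper)
    print(risk_reduction_per_dollar)        # 3e-09
    print(risk_reduction_per_dollar * 7e9)  # about 21 expected lives saved per dollar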

That's huge. Enormous. So enormous that I'm instantly suspicious of the model, actually, so let's take note of some of the things it leaves out. First, the "one researcher at random determines the fate of humanity" part glosses over the fact that research is done in groups; but it's not clear whether adding in this detail should make us adjust the estimate up or down. It ignores all the time we have between now and the creation of the first AI, during which a safety culture might arise without intervention; but it's also easier to influence the culture now, while the field is still young, rather than later. In order for promoting AI research safety to not be an extraordinarily good deal for philanthropists, there would have to be at least an additional 10^3 penalty somewhere, and I can't find one.

As a result of this calculation, I will be thinking and writing about AI safety, attempting to convince others of its importance, and, in the moderately probable event that I become very rich, donating money to the SIAI so that they can pay others to do the same.

Subjective Relativity, Time Dilation and Divergence

14 jacob_cannell 11 February 2011 07:50AM

And the whole earth was of one language, and of one speech. And it came to pass . . .they said, Go to, let us build us a city and a tower, whose top may reach unto heaven; and let us make us a name, lest we be scattered abroad upon the face of the whole earth. And the Lord came down to see the city and the tower, which the children built. And the Lord said, Behold, the people is one, and they have all one language; and this they begin to do; and now nothing will be restrained from them, which they have imagined to do. Go to, let us go down, and there confound their language, that they may not understand one another's speech. So the Lord scattered them abroad from thence upon the face of all the earth: and they left off to build . . . 

Genesis 11: 1-9

Some elementary physical quantitative properties of systems compactly describe a wide spectrum of macroscopic configurations.  Take for example the concept of temperature: given a basic understanding of physics this single parameter compactly encodes a powerful conceptual mapping of state-space.  

It is easy for your mind to visualize how a large change in temperature would affect everything from your toast to a planetary ecosystem. It is one of the key factors that divide habitable planets such as Earth from inhospitably cold worlds like Mars or burning infernos such as Venus. You can imagine the Earth growing hotter and visualize an entire set of complex consequences: melting ice caps, rising water levels, climate changes, eventual loss of surface water, a runaway greenhouse effect, and a scorched planet.

Here is an unconsidered physical parameter that could determine much of the future of civilization: the speed of thought and the derived subjective speed of light.  


David Chalmers' "The Singularity: A Philosophical Analysis"

33 lukeprog 29 January 2011 02:52AM

David Chalmers is a leading philosopher of mind, and the first to publish a major philosophy journal article on the singularity:

Chalmers, D. (2010). "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies 17:7-65.

Chalmers' article is a "survey" article in that it doesn't cover any arguments in depth, but quickly surveys a large number of positions and arguments in order to give the reader a "lay of the land." (Compare to Philosophy Compass, an entire journal of philosophy survey articles.) Because of this, Chalmers' paper is a remarkably broad and clear introduction to the singularity.

Singularitarian authors will also be pleased that they can now cite a peer-reviewed article by a leading philosopher of mind who takes the singularity seriously.

Below is a CliffsNotes summary of the paper for those who don't have time to read all 58 pages of it.

 

The Singularity: Is It Likely?

Chalmers focuses on the "intelligence explosion" kind of singularity, and his first project is to formalize and defend I.J. Good's 1965 argument. Defining AI as being "of human level intelligence," AI+ as AI "of greater than human level" and AI++ as "AI of far greater than human level" (superintelligence), Chalmers updates Good's argument to the following:

  1. There will be AI (before long, absent defeaters).
  2. If there is AI, there will be AI+ (soon after, absent defeaters).
  3. If there is AI+, there will be AI++ (soon after, absent defeaters).
  4. Therefore, there will be AI++ (before too long, absent defeaters).

By "defeaters," Chalmers means global catastrophes like nuclear war or a major asteroid impact. One way to satisfy premise (1) is to achieve AI through brain emulation (Sandberg & Bostrom, 2008). Against this suggestion, Lucas (1961), Dreyfus (1972), and Penrose (1994) argue that human cognition is not the sort of thing that could be emulated. Chalmers (1995; 1996, chapter 9) has responded to these criticisms at length. Briefly, Chalmers notes that even if the brain is not a rule-following algorithmic symbol system, we can still emulate it if it is mechanical. (Some say the brain is not mechanical, but Chalmers dismisses this as being discordant with the evidence.)


Theists are wrong; is theism?

5 Will_Newsome 20 January 2011 12:18AM

Many folk here on LW take the simulation argument (in its more general forms) seriously. Many others take Singularitarianism1 seriously. Still others take Tegmark cosmology (and related big universe hypotheses) seriously. But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?

I am especially confused that the theism/atheism debate is considered a closed question on Less Wrong. Eliezer's reformulations of the Problem of Evil in terms of Fun Theory provided a fresh look at theodicy, but I do not find those arguments conclusive. A look at Luke Muehlhauser's blog surprised me; the arguments against theism are just not nearly as convincing as I'd been brought up to believe2, nor nearly convincing enough to cause what I saw as massive overconfidence on the part of most atheists, aspiring rationalists or no.

It may be that theism is in the class of hypotheses that we have yet to develop a strong enough practice of rationality to handle, even if the hypothesis has non-negligible probability given our best understanding of the evidence. We are becoming adept at wielding Occam's razor, but it may be that we are still too foolhardy to wield Solomonoff's lightsaber, or Tegmark's Black Blade of Disaster, without chopping off our own arm. The literature on cognitive biases gives us every reason to believe we are poorly equipped to reason about infinite cosmology, decision theory, the motives of superintelligences, or our place in the universe.

Due to these considerations, it is unclear if we should go ahead doing the equivalent of philosoraptorizing amidst these poorly asked questions so far outside the realm of science. This is not the sort of domain where one should tread if one is feeling insecure in one's sanity, and it is possible that no one should tread here. Human philosophers are probably not as good at philosophy as hypothetical Friendly AI philosophers (though we've seen in the cases of decision theory and utility functions that not everything can be left for the AI to solve). I don't want to stress your epistemology too much, since it's not like your immortal soul3 matters very much. Does it?

Added: By theism I do not mean the hypothesis that Jehovah created the universe. (Well, mostly.) I am talking about the possibility of agenty processes in general creating this universe, as opposed to impersonal math-like processes like cosmological natural selection.

Added: The answer to the question raised by the post is "Yes, theism is wrong, and we don't have good words for the thing that looks a lot like theism but has less unfortunate connotations, but we do know that calling it theism would be stupid." As to whether this universe gets most of its reality fluid from agenty creators... perhaps we will come back to that argument on a day with less distracting terminology on the table.

 


 

1 Of either the 'AI-go-FOOM' or 'someday we'll be able to do lots of brain emulations' variety.

2 I was never a theist, and only recently began to question some old assumptions about the likelihood of various Creators. This perhaps either lends credibility to my interest, or lends credibility to the idea that I'm insane.

3 Or the set of things that would have been translated to Archimedes by the Chronophone as the equivalent of an immortal soul (id est, whatever concept ends up being actually significant).

I

51 PhilGoetz 08 January 2011 05:51PM

I wrote this story at Michigan State during Clarion 1997, and it was published in the Sept/Oct 1998 issue of Odyssey.  It has many faults and anachronisms that still bother me.  I'd like to say that this is because my understanding of artificial intelligence and the singularity has progressed so much since then; but it has not.  Many anachronisms and implausibilities are compromises between wanting to be accurate, and wanting to communicate.

At least I can claim the distinction of having published the story with the shortest title in the English language - measured horizontally.

I

I was the last person, and this is how he died.


An Xtranormal Intelligence Explosion

4 James_Miller 07 November 2010 11:42PM

The Curve of Capability

18 rwallace 04 November 2010 08:22PM

or: Why our universe has already had its one and only foom

In the late 1980s, I added half a megabyte of RAM to my Amiga 500. A few months ago, I added 2048 megabytes of RAM to my Dell PC. The later upgrade was four thousand times larger, yet subjectively they felt about the same, and in practice they conferred about the same benefit. Why? Because each was a factor of two increase, and it is a general rule that each doubling tends to bring about the same increase in capability.
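
A toy calculation makes the doubling claim concrete. The base RAM figures are my assumptions (a stock Amiga 500 shipped with 512 KB, and I am assuming the Dell had roughly 2048 MB before the upgrade), not numbers from the post.

    from math import log2

    # If capability scales with log2(resource), what matters is the number of
    # doublings, not the absolute size of the upgrade.
    amiga_doublings = log2((0.5 + 0.5) / 0.5)      # +0.5 MB on an assumed 0.5 MB base
    dell_doublings = log2((2048 + 2048) / 2048)    # +2048 MB on an assumed 2048 MB base
    print(amiga_doublings, dell_doublings)         # 1.0 1.0 -- the same subjective jump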

That's a pretty important rule, so let's test it by looking at some more examples.

