
AGI Quotes

6 Post author: lukeprog 02 November 2011 08:25AM

Similar to the monthly Rationality Quotes threads, this is a thread for memorable quotes about Artificial General Intelligence.

  • Please post all quotes separately, so that they can be voted up/down separately.  (If they are strongly related, reply to your own comments.  If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote comments/posts on LW/OB.

Comments (88)

Comment author: James_Miller 02 November 2011 02:07:24PM *  27 points [-]

The best answer to the question, "Will computers ever be as smart as humans?" is probably "Yes, but only briefly."

Vernor Vinge

Comment author: hankx7787 04 November 2011 11:49:32AM 14 points [-]

"Sorry Arthur, but I'd guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon." - Dan Clemmensen, SL4

Comment author: MichaelAnissimov 18 November 2011 08:04:22PM 1 point [-]

This is one of the earliest quotes I read that made it click that nothing I could do with my life would have greater impact than pursuing superintelligence.

Comment author: lukeprog 02 November 2011 09:01:27AM 13 points [-]

Machines can within certain limits beget machines of any class, no matter how different to themselves... Complex now, but how much simpler and more intelligibly organised may [a machine] not become in another hundred thousand years? or in twenty thousand? For man at present believes that his interest lies in that direction; he spends an incalculable amount of labour and time and thought in making machines breed always better and better; he has already succeeded in effecting much that at one time appeared impossible, and there seem no limits to the results of accumulated improvements if they are allowed to descend with modification from generation to generation. It must always be remembered that man’s body is what it is through having been moulded into its present shape by the chances and changes of many millions of years, but that his organisation never advanced with anything like the rapidity with which that of the machines is advancing. This is the most alarming feature in the case, and I must be pardoned for insisting on it so frequently.

Samuel Butler (1872)

Comment author: lukeprog 02 November 2011 08:46:29AM 11 points [-]

Though we have to live and work with (and against) today's mechanical morons, their deficiencies should not blind us to the future. In particular, it should be realized that as soon as the borders of electronic intelligence are passed, there will be a kind of chain reaction, because the machines will rapidly improve themselves... there will be a mental explosion; the merely intelligent machine will swiftly give way to the ultraintelligent machine.... Perhaps our role on this planet is not to worship God but to create Him.

Arthur C. Clarke (1968)

Comment author: lukeprog 02 November 2011 08:26:38AM *  11 points [-]

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

Eliezer Yudkowsky (2008)

Comment author: James_Miller 02 November 2011 08:15:19PM *  3 points [-]

EY changed it in the published version to:

"The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."

Comment author: lukeprog 02 November 2011 08:30:38PM 6 points [-]

My favorite paraphrase is my own:

"The AI does not hate you, nor does it love you, but you are made of atoms it can use for something else."

I like the rhythm of this one best. It can be sung.

Comment author: Nisan 03 November 2011 05:58:38PM -2 points [-]

Let us walk together to the kirk, and all together pray, while each to our great AI bends — old men, and babes, and loving friends, and youths and maidens gay! :D

Comment author: Miller 03 November 2011 05:42:06AM *  1 point [-]

Whether the AI loves -- or hates, you cannot fathom, but plans it has indeed for your atoms.

Comment author: lukeprog 02 November 2011 08:45:00AM *  8 points [-]

...it seems probable that once the machine thinking method has started, it would not take long to outstrip our feeble powers... At some stage therefore we should have to expect the machines to take control...

Alan Turing (1951)

Comment author: hankx7787 04 November 2011 11:50:07AM 6 points [-]

"In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond "audacity" as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority." - Eliezer Yudkowsky

Comment author: James_Miller 02 November 2011 01:55:20PM 6 points [-]

If the Singularity can not be prevented or confined, just how bad could the Post-Human era be? Well ... pretty bad.

Vernor Vinge

Ellipsis in original.

Comment author: lukeprog 02 November 2011 08:45:18AM 6 points [-]

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Good (1965)

Comment author: ShardPhoenix 02 November 2011 12:07:44PM 8 points [-]

The use of "unquestionably" in this quote has always irked me a bit, despite the fact that I find the general concept reasonable.

Comment author: lukeprog 02 November 2011 08:44:39AM 6 points [-]

...we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race... the time will come when the machines will hold the real supremacy over the world and its inhabitants.

Samuel Butler (1863)

Comment author: lukeprog 02 November 2011 08:32:41AM *  22 points [-]

The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim.

Edsger Dijkstra (1984)

Comment author: James_Miller 02 November 2011 02:15:05PM 1 point [-]

I don't understand this.

Comment author: betterthanwell 02 November 2011 03:15:19PM *  11 points [-]

It is seemingly easy to get stuck in arguments over whether or not machines can "actually" think, but it is sufficient to assess the effects or outcomes of the phenomenon in question.

By sidestepping the question of what, exactly, it means to "think", we can avoid arguing over definitions, yet lose nothing of our ability to model the world.

Does a submarine swim? The purpose of swimming is to propel oneself through the water. A nuclear powered submarine can propel itself through the oceans at full speed for months at a time. It achieves the purpose of swimming, and does so rather better than a fish, or a human.

If the purpose of thinking is isomorphic to:

Model the world in order to formulate plans for executing actions which implement goals.

Then, if a machine can achieve the above, we can say it achieves the purpose of thinking, akin to how a submarine successfully achieves the purpose of swimming.

Discussion of whether the machine really thinks is now superfluous.

Comment author: thomblake 02 November 2011 02:48:06PM 4 points [-]

It is a similar idea to the one proposed by Turing. If you have submarines, and they move through the water and do exactly what you want them to do, then it is rather pointless to ask if what they're doing is "really swimming". And the arguments on both sides of the "swimming" dispute will make reference to fish.

Comment author: lukeprog 08 June 2013 09:05:57AM 5 points [-]

The machines will do what we ask them to do and not what we ought to ask them to do.

Norbert Wiener (1949)

Comment author: lukeprog 25 March 2012 10:53:19PM 5 points [-]

I once remarked that to design ultraintelligent machines was to play with fire, that we had played with fire once before, and it had kept the other animals at bay. Arthur Clarke's reply was that this time we are the other animals.

I.J. Good (1970)

Comment author: lukeprog 02 November 2011 09:57:11AM 5 points [-]

Compare any kind of machine you may happen to think of with what its ancestor was only twenty-five years ago. Its efficiency has doubled, trebled... By knowledge alone man might extinguish himself utterly... Man's further task is... to learn how best to live with these powerful creatures of his mind [the machines], how to give their fecundity a law and... how not to employ them in error against himself.

Garet Garrett (1926)

Comment author: Halfwit 10 June 2013 08:47:46PM *  4 points [-]

The mathematician John von Neumann, born Neumann Janos in Budapest in 1903, was incomparably intelligent, so bright that, the Nobel Prize-winning physicist Eugene Wigner would say, "only he was fully awake." One night in early 1945, von Neumann woke up and told his wife, Klari, that "what we are creating now is a monster whose influence is going to change history, provided there is any history left. Yet it would be impossible not to see it through." Von Neumann was creating one of the first computers, in order to build nuclear weapons. But, Klari said, it was the computers that scared him the most.

Konstantin Kakaes

Comment author: lukeprog 02 November 2011 10:15:00AM 4 points [-]

"It seems inevitable that sometime in this century, Moore's Law combined with greater understanding of intelligence itself will drive machine intelligence to levels beyond, and soon thereafter, beyond anything we can imagine. When intelligent machines begin designing superintelligence machines and their software, intelligence should grow exponentially. The result could be a runaway intelligence explosion.

T.M. Georges (2004)

Comment author: ciphergoth 16 November 2011 12:43:44PM 0 points [-]

Where did he say this? A search turns up only this page. Thanks!

Comment author: lukeprog 16 November 2011 04:28:58PM 0 points [-]

In Digital Soul, if I recall correctly.

Comment author: lukeprog 02 November 2011 08:29:04AM *  4 points [-]

If, then, men... were only machines of so complicated a make that it was less trouble to us to cut the difficulty and say that that kind of mechanism was 'being alive,' why should not machines ultimately become as complicated as we are... and to be indeed as living as it was in the nature of anything at all to be?

Samuel Butler (1880)

Comment author: lukeprog 02 November 2011 08:27:17AM 8 points [-]

To educate [someone] in mind and not in morals is to educate a menace to society.

Theodore Roosevelt

Comment author: James_Miller 02 November 2011 01:57:29PM 14 points [-]

We probably make thousands of species extinct per year through our pursuit of instrumental goals; why is it so hard to imagine that AGI could do the same to us?

Michael Anissimov

Comment author: Logos01 04 November 2011 02:05:11AM *  0 points [-]

... I wonder how "alone" I am in the notion that AGI causing human extinction may not be a net negative, in that so long as it is a sentient product of human endeavors it is essentially a "continuation" of humanity.

Comment author: JoshuaZ 04 November 2011 02:10:34AM *  4 points [-]

Two problems: An obnoxious optimizing process isn't necessarily sentient. And how much would you really want such a continuation if it, say, tried to put everything in its future lightcone into little smiley faces?

If it helps, ask yourself how you feel about a human empire that expands through its lightcone preemptively destroying every single alien species before they can do anything, with a motto of "In the Prisoners' Dilemma, Humanity Defects!" That sounds pretty bad, doesn't it? Now note that the AGI expansion is probably worse than that.

Comment author: Logos01 04 November 2011 02:13:46AM -2 points [-]

Two problems: An obnoxious optimizing process isn't necessarily sentient.

Hence my caveat.

And how much would you really want such a continuation if it, say, tried to put everything in its future lightcone into little smiley faces?

I find the plausibility of a sentient AGI constrained to such a value to be vanishingly small.

If it helps, ask yourself how you feel about a human empire that expands through its lightcone preemptively destroying every single alien species before they can do anything, with a motto of "In the Prisoners' Dilemma, Humanity Defects!" That sounds pretty bad, doesn't it?

Not especially, no.

Comment author: JoshuaZ 04 November 2011 02:39:48AM 3 points [-]

I find the plausibility of a sentient AGI constrained to such a value to be vanishingly small.

It is one example of what could happen; smileys are but a specific example. (Moreover, this is an example which is disturbingly close to some actual proposals.) The size of mindspace is probably large. The size of mindspace that does something approximating what we want is probably a small portion of that.

Not especially, no.

And the empire systematically wipes out human minorities and suppresses new scientific discoveries because they might disrupt stability. As a result, and to help prevent problems, everyone but a tiny elite is denied any form of life-extension technology. Even the elite have their lifespans extended only to about 130, to prevent anyone from accumulating too much power and threatening the standard oligarchy. Similarly, new ideas for businesses are ruthlessly suppressed. Most people will have less mobility in this setting than an American living today. Planets will be ruthlessly terraformed and then have colonists forcibly shipped there to help start the new groups. Most people have the equivalent of reality TV shows and the hope of winning the lottery to entertain themselves. Most of the population is so ignorant that they don't even realize that humans originally came from a single planet.

If this isn't clear, I'm trying to make this about as dystopian as I plausibly can. If I haven't succeeded at that, please imagine what you would think of as a terrible dystopia and apply that. If really necessary, imagine some puppy and kitten torturing too.

Comment author: hankx7787 04 November 2011 11:49:46AM 7 points [-]

"There are lots of people who think that if they can just get enough of something, a mind will magically emerge. Facts, simulated neurons, GA trials, proposition evaluations/second, raw CPU power, whatever. It's an impressively idiotic combination of mental laziness and wishful thinking." - Michael Wilson

Comment author: lukeprog 25 May 2012 02:42:12AM 3 points [-]

Why we're doomed reason #692...

Here is Hugo de Garis, in the opening of The Artilect War:

You may ask, "Well, if you are so concerned about the negative impact of your work [on artificial brains] on humanity, why don't you stop it and do something else?" The truth is, I feel that I'm constructing something that may become rather godlike in future decades... The prospect of building godlike creatures fills me with a sense of religious awe that... motivates me powerfully to continue, despite the possible horrible negative consequences.

Comment author: lukeprog 25 March 2012 10:47:08PM *  3 points [-]

The first intelligent machine is the last invention that man need ever make since it will lead, without further human invention, to the ultraintelligent machine... To up-date Voltaire: if God does not exist we shall have constructed him, or at any rate a reasonable approximation. Or will it be the Devil?

I.J. Good (1970)

Comment author: lukeprog 02 November 2011 08:31:42AM *  6 points [-]

In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.

George Dyson (1998)

Comment author: amcknight 02 November 2011 06:49:57PM 7 points [-]

Now we just need machines on our side and we'll have a cute little love-triangle.

Comment author: shminux 02 November 2011 11:25:23PM 1 point [-]

So then, even when we have an FAI, all three parties will be unhappy?

Comment author: James_Miller 02 November 2011 02:11:04PM 5 points [-]

We must develop as quickly as possible technologies that make possible a direct connection between brain and computer, so that artificial brains contribute to human intelligence rather than opposing it.

Stephen Hawking

Comment author: lukeprog 01 June 2013 05:02:44AM 2 points [-]

Members of the artificial intelligence community bear an ominous resemblance to... the Sorcerer's Apprentice. The apprentice learnt just enough magic for his master to save himself the trouble of performing an onerous task, but not quite enough to stop the spellbound buckets and brooms from flooding the castle.

Margaret Boden, Artificial Intelligence and Natural Man, p. 463

Comment author: lukeprog 26 March 2012 02:56:59AM 2 points [-]

Once a machine is designed that is good enough… it can be put to work designing an even better machine. At this point an "explosion" will clearly occur; all the problems of science and technology will be handed over to machines and it will no longer be necessary for people to work. Whether this will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machines. The important thing will be to give them the aim of serving human beings.

It seems probable that no mechanical brain will be really useful until it is somewhere near to the critical size. If so, there will be only a very short transition period between having no very good machine and having a great many exceedingly good ones. Therefore the work on simulation of artificial intelligence on general-purpose computers is especially important, because it will lengthen the transition period, and give human beings a chance to adapt to the future situation.

Good (1959)

Comment author: lukeprog 26 March 2012 12:43:32AM 2 points [-]

...it is always dangerous to try to relieve ourselves of the responsibility of understanding exactly how our wishes will be realized. Whenever we leave the choice of means to any servants we may choose then the greater the range of possible methods we leave to those servants, the more we expose ourselves to accidents and incidents. When we delegate those responsibilities, then we may not realize, before it is too late to turn back, that our goals have been misinterpreted, perhaps even maliciously. We see this in such classic tales of fate as Faust, the Sorcerer's Apprentice, or the Monkey's Paw by W.W. Jacobs.

[Another] risk is exposure to the consequences of self-deception. It is always tempting to say to oneself... that "I know what I would like to happen, but I can't quite express it clearly enough." However, that concept itself reflects a too-simplistic self-image, which portrays one's own self as [having] well-defined wishes, intentions, and goals. This pre-Freudian image serves to excuse our frequent appearances of ambivalence; we convince ourselves that clarifying our intentions is merely a matter of straightening-out the input-output channels between our inner and outer selves. The trouble is, we simply aren't made that way. Our goals themselves are ambiguous.

The ultimate risk comes when [we] attempt to take that final step — of designing goal-achieving programs that are programmed to make themselves grow increasingly powerful, by self-evolving methods that augment and enhance their own capabilities. It will be tempting to do this, both to gain power and to decrease our own effort toward clarifying our own desires. If some genie offered you three wishes, would not your first one be, "Tell me, please, what is it that I want the most!" The problem is that, with such powerful machines, it would require but the slightest accident of careless design for them to place their goals ahead of [ours]. The machine's goals may be allegedly benevolent, as with the robots of With Folded Hands, by Jack Williamson, whose explicit purpose was allegedly benevolent: to protect us from harming ourselves, or as with the robot in Colossus, by D. F. Jones, who itself decides, at whatever cost, to save us from an unsuspected enemy. In the case of Arthur C. Clarke's HAL, the machine decides that the mission we have assigned to it is one we cannot properly appreciate. And in Vernor Vinge's computer-game fantasy, True Names, the dreaded Mailman... evolves new ambitions of its own.

Minsky (1984)

Comment author: lukeprog 02 November 2011 10:21:45AM 2 points [-]

The human race, as we know it, is very likely in its end game; our period of dominance on earth is about to be terminated. We can try and reason and bargain with the machines which take over, but why should they listen when they are far more intelligent than we are?

Kevin Warwick (1998)

Comment author: ciphergoth 02 November 2011 08:50:34AM 6 points [-]

The panel of experts was overall skeptical of the radical views expressed by futurists and science-fiction authors. [...] There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems. [...] The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes, sharing the rationale for the overall comfort of scientists in this realm

AAAI PRESIDENTIAL PANEL ON LONG-TERM AI FUTURES: 2008-2009 STUDY, Interim Report from the Panel Chairs

Comment author: James_Miller 02 November 2011 02:00:47PM 4 points [-]

If you invent a breakthrough in artificial intelligence, so machines can learn, that is worth 10 Microsofts.

Bill Gates

Comment author: ata 05 November 2011 08:14:07AM *  2 points [-]

I can only assume he wasn't actually talking about an AGI-level breakthrough. I don't think I'd expect him to underestimate the impact or value of AGI that severely.

Comment author: lessdazed 05 November 2011 09:15:08AM 1 point [-]

This is the man who thought Windows Me was a good idea.

Comment author: cousin_it 26 March 2012 01:13:49AM *  -1 points [-]

Actually "10 Microsofts" may be an overestimate :-) Microsoft's success in its mission to make personal computers ubiquitous seems important to making AGI possible.

Comment author: lukeprog 02 November 2011 08:46:50AM *  3 points [-]

The survival of man may depend on the early construction of an ultraintelligent machine — or the ultraintelligent machine may take over and render the human race redundant or develop another form of life. The prospect that a merely intelligent man could ever attempt to predict the impact of an ultraintelligent device is of course unlikely but the temptation to speculate seems irresistible.

Julius Lukasiewicz (1974)

Comment author: Konkvistador 02 November 2011 11:29:28PM 4 points [-]

You have a lot of quotes to share.

Comment author: lukeprog 13 January 2013 03:41:45AM 2 points [-]

Most people are strongly biased toward not wanting a computer to be able to think. Why? For a variety of reasons, the layperson's concept think has become so intertwined with the concept human that many people have an emotional reaction against the idea of nonhuman things thinking...

However, despite their strong feelings against the idea of thinking computers, most people have not thought about the issue very carefully and are at a loss to come up with a definition of thinking that would include most humans (babies, for example) and exclude all computers. It is sometimes humorous to hear the criteria that people who are unfamiliar with current work in artificial intelligence come up with, for they invariably choose something that computers can actually do. For example, many people propose the criterion "ability to learn from experience," only to be told that some robots and [AI] systems have fulfilled this criterion...

Usually the second choice is something like "creativity" ("coming up with something that people judge as useful that no person has thought of before"...). When told that most experts agree that computers have fulfilled this criterion, the person still does not admit the possibility of thinking machines.

Often the person abandons the attempt to derive an operational definition at this point and instead attempts to argue that computers could not possibly think because "humans built them and programmed them; they only follow their programs."... [but] we do not invoke the "origins" argument for other processes. Consider the process of heating food. Consider the question "Do ovens heat?" Do we say, "Ovens don't really heat, because ovens are built by people. Therefore, it only makes sense to say that people heat. Ovens don't really heat"? ...Of course not. The origin of something is totally irrelevant to its ability to carry out a particular process.

Keith Stanovich

Comment author: Grognor 14 February 2012 04:29:03AM *  2 points [-]

Before building FAI you built an oracle AI to help you. With its help, you found a mathematical definition of U, the utility of humanity’s extrapolated volition (or whatever). You were all pretty pleased with yourselves, but you didn’t stop there: you found a theory of everything, located humanity within it, and wrote down the predicate F(X) = “The humans run the program described by X.”

To top it off, with the help of your oracle AI you found the code for a “best possible AI”, call it FAI, and a proof of the theorem:

"There exists a constant Best such that U ≤ Best, but F(FAI) implies U = Best."

Each of these steps you did with incredible care. You have proved beyond reasonable doubt that U and F represent what you want them to.

You present your argument to the people of the world. Some people object to your reasoning, but it is airtight: if they choose to stop you from running FAI, they will still receive U ≤ Best, so why bother?

Now satisfied and with the scheduled moment arrived, you finally run FAI. Promptly the oracle AI destroys civilization and spends the rest of its days trying to become as confident as possible that Tic-Tac-Toe is really a draw (like you asked it to, once upon a time).

-Paul F. Christiano

Comment author: djcb 02 November 2011 01:22:06PM *  2 points [-]

I am the psychotherapist. Please, describe your problems. Each time you are finished talking, type RET twice.

I feel so tired

Why do you say?

I fall asleep all the time

Is it because of your plans that you say?

Yes, I work too hard

Maybe your life has something to do with this.

Indeed!

Can you elaborate on that?

Well, I work so can improve my life

What makes you believe?

I read it somewhere

Is it because of the people you hang around with that you say?

I bet it is because of that

I would appreciate it if you would continue.

[...]

my secretary will send you a bill.

-- emacs, doctor mode, the built-in psychotherapist
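
(A reproduction note, in case anyone wants to try this themselves: the transcript above is from GNU Emacs's built-in ELIZA-style psychotherapist. In a stock Emacs, M-x doctor starts a session, and, as the opening line says, you end each reply by pressing RET twice. The exact responses vary from session to session.)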

Comment author: lukeprog 29 December 2012 06:58:51PM *  1 point [-]

When considering whether or not to develop technology that leads to [machine superintelligence], the issue of its disposition to humans should be solved first.

James Barrat

Comment author: Grognor 04 May 2012 10:13:48AM 1 point [-]

A superintelligent computer designed to win at chess will keep trying to win at chess, ignoring any other goals along the way. It doesn't matter whether it's a million times smarter than Einstein, it's not going to start wanting to fight for freedom just because humans like that kind of thing any more than it's going to start wanting to have sex with pretty actors and actresses just because humans like that kind of thing. It's just going to be really, really good at winning at chess.

-Yvain

Comment author: lukeprog 25 April 2012 08:37:44PM 1 point [-]

In the event of a super-intelligent machine deciding upon a major change of environment, it might regard the biological society which had served it with no more consideration than a brewer gives to colonies of yeast when they have served their purpose in the brewery.

Cade (1966), p. 225

Comment author: lukeprog 25 April 2012 08:29:54PM 1 point [-]

Our own technological development is so rapid that we must accept the fact of imminent developments which are beyond our present understanding... It is therefore certain that a society only a few hundred years more advanced than our own could, if they thought it expedient, exterminate terrestrial life without effort...

It is useless to think of any form of defence against any action by superior intelligences... it would be just as futile as the occupants of an antheap declaring war against a bulldozer.

Cade (1966), p. 220

Page 223 includes this drawing of self-reproducing machines.

Comment author: lukeprog 25 April 2012 08:10:49PM *  1 point [-]

political leaders on Earth will slowly come to realize... that intelligent machines having superhuman thinking ability can be built. The construction of such machines, even taking into account all the latest developments in computer technology, would call for a major national effort. It is only to be expected that any nation which did put forth the financial and physical effort needed to build and programme such a machine, would also attempt to utilize it to its maximum capacity, which implies that it would be used to make major decisions of national policy. Here is where the awful dilemma arises. Any restriction to the range of data supplied to the machine would limit its ability to make effective political and economic decisions, yet if no such restrictions are placed upon the machine's command of information, then the entire control of the nation would virtually be surrendered to the judgment of the robot.

On the other hand, any major nation which was led by a superior, unemotional intelligence of any kind, would quickly rise to a position of world domination. This by itself is sufficient to guarantee that, sooner or later, the effort to build such an intelligence will be made — if not in the Western world, then elsewhere, where people are more accustomed to iron dictatorships.

...It seems that, in the foreseeable future, the major nations of the world will have to face the alternative of surrendering national control to mechanical ministers, or being dominated by other nations which have already done this. Such a process will eventually lead to the domination of the whole Earth by a dictatorship of an unparalleled type — a single supreme central authority.

...the transition from biological evolution to mechanical evolution... could be rapid if some nation takes the plunge and goes in for government by computer, or very much slower if the dangers in this step are recognized, and man merely mechanized himself, by a gradual replacement of defective or inadequate biological components.

...There is little point in pursuing this line of thought any further, since a world of machines, governed by machines, for machines... will be as incomprehensible to us as would be the engines of a trawler to the ship's cat.

Cade, Other Worlds Than Ours (1966), pp. 214-219

Comment author: lukeprog 25 April 2012 05:53:39PM *  1 point [-]

future machines could be more intelligent than any man, and it is possible that a sort of mechanical evolution could be introduced, using existing computers to design their descendents, and so on, generation after generation, getting a little more brilliant at every step. It is in fact a theoretical possibility that, above a certain level of complexity, any computer can design a better computer than itself; this was first pointed out by the late John von Neumann.

...whenever mechanical or inorganic brains have been discussed, there have been passionate denials that such automata could ever think creatively. Some of these objections, perhaps a majority of them, arise from conflicts with the religious beliefs of the individual. Other negative views, expressed in some cases by noted scientists, result from an interpretation of mechanical thinking as a blow to their ego... a resentment that any mere thing of metal could be superior in any way to a human brain. And yet our brains are comparatively badly organized, inaccurate, and (except for memory) slow. They were not evolved for the purpose of abstract thought, but developed slowly through millions of years as a product of the struggle for survival... Bearing these things in mind, surely it is feasible that we shall eventually build mechanical minds of superhuman thinking ability, just as we now build bulldozers of superhuman muscle power?

Cade, Other Worlds Than Ours (1966), pp. 213-214

Comment author: lukeprog 25 April 2012 05:33:54PM 1 point [-]

Assume for the sake of argument that conscious beings have existed for some 20 million years: see what strides machines have made in the last thousand! May not the world last 20 million years longer? If so, what will they not in the end become? Is it not safer to nip the mischief in the bud and to forbid them further progress?

Samuel Butler, 1872

(My own answer to Butler's question is "No" for the reason Moravec gave in 1988.)

Comment author: lukeprog 12 April 2012 01:20:43AM *  1 point [-]

As the human race, we are delicately positioned. We have the... ability... to create machines that will not only be as intelligent as humans but that will go on to be far more intelligent still. This will spell the end of the human race as we know it. Is that what we want? Should we not at least have an international body monitoring and even controlling what goes on?

When the first nuclear bombs were dropped on Japan, killing thousands of people, we took stock of our actions and realised the threat that such weapons posed to our existence. Despite the results achieved by the Hiroshima and Nagasaki bombs, even deadlier nuclear bombs have been built, much more powerful, much more accurate and much more intelligent. But with nuclear weapons we saw what they could do and we gave ourselves another chance.

With intelligent machines we will not get a second chance. Once the first powerful machine... is switched on, we will most likely not get the opportunity to switch it back off again. We will have started a time bomb ticking on the human race, and we will be unable to switch it off.

Kevin Warwick, March of the Machines (1997)

Comment author: lukeprog 14 March 2012 12:24:34PM 1 point [-]

Once a computer achieves a human level of intelligence, it will necessarily roar past it.

Ray Kurzweil, The Age of Spiritual Machines, p. 3

Comment author: Grognor 14 February 2012 08:32:57AM 1 point [-]

A point that K. Eric Drexler makes about nanotechnology research also applies to AI research. If a capability can be gained, eventually it will be gained and we can therefore not base humanity’s survival on AI never happening. Doing so is denying the inevitable. Instead, we can only hope to manage it as well as possible. Suppose we took the view that ethical people would not create AI. By definition, the only people creating it would be unethical people, who would then control what happened next -- so by opting out, all the ethical people would be doing would be handing power over to unethical people. I think this makes the position of ethical withdrawal ethically dubious.

-Paul Almond

Comment author: Dr_Manhattan 03 November 2011 02:47:33PM 1 point [-]

Wozniak declared to his audience that "we're already creating the superior beings, I think we lost the battle to the machines long ago."

http://au.ibtimes.com/articles/157802/20110606/steve-wozniak-humans-will-soon-surrender-superiority-to-machines.htm

Comment author: lukeprog 29 January 2014 09:38:35PM 0 points [-]

intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics...

...We can already see a glimmer of how computers might make [ethical] choices in Jaime Carbonell's model of subjective understanding. Carbonell showed how programs could be governed by hierarchies of goals, which would guide their reasoning processes in certain directions and not in others. Thus, it might very well be possible to formulate a hierarchy of goals that embody ethical concepts; the hard part, as always, would lie in formulating precisely what those concepts ought to be.

...the effort of understanding machine ethics may turn out to be invaluable not just as a matter of practicality, but for its own sake. The effort to endow computers with intelligence has led us to look deep within ourselves to understand what intelligence is. In much the same way, the effort to construct ethical machines will inevitably lead us to look within ourselves and reexamine our own conceptions of right and wrong. Of course, this... has been the domain of religion and philosophy for millennia. But then, pondering the nature of intelligence is not a new activity, either. The difference in each case is that, for the first time, we are having to explain ourselves to an entity that knows nothing about us. A computer is the proverbial Martian. And for that very reason, it is like a mirror: the more we have to explain ourselves, the more we may come to understand ourselves.

Waldrop (1987)

Comment author: lukeprog 29 January 2014 06:02:06PM 0 points [-]

Some philosophers and scholars who study and speculate on the [intelligence explosion]... maintain that this question is simply a matter of ensuring that AI is created with pro-human tendencies. If, however, we are creating an entity with greater than human intelligence that is capable of designing its own newer, better successors, why should we assume that human-friendly programming traits will not eventually fall by the wayside?

Al-Rodhan (2011), pp. 242-243, notices the stable self-modification problem.

Comment author: lukeprog 29 October 2013 03:27:48AM 0 points [-]

From Michie (1982):

Competent medical and biological research authorities in various parts of the world are concerned about genetic engineering... There is the possibility that as an accident, a side-effect of such research, some quite new and virulent micro-organism might multiply to an extent with which we are not able to cope. As a consequence, the Medical Research Council in Britain... recently supported a six-month moratorium on research in that specific area while the matter was studied more deeply and new safeguards drawn up.

It is conceivable that machine intelligence research could at some future stage raise legitimate concerns of that character. If that ever happened then I would certainly support such a 'holding operation'.

Comment author: lukeprog 04 July 2013 06:59:25AM 0 points [-]

In the long run, AI is the only science.

Woody Bledsoe, quoted in Machines Who Think.

Comment author: lukeprog 01 June 2013 04:41:47AM 0 points [-]

When machines acquire an intelligence superior to our own, they will be impossible to keep at bay... [Human-level AI] will threaten the very existence of human life as we know it... We should not... expect the main battles of the twenty-first century to be fought over such issues as the environment, overpopulation, or poverty. No, we should expect the fight to be about how we cope with [AI]; and the issue [of] whether we or they — our silicon challengers — control the future of the earth.

Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence, p. 341

Comment author: lukeprog 24 May 2013 09:47:08PM 0 points [-]

"Speculations concerning the first ultraintelligent machine" started with the dubious sentence "The survival of man [sic] depends on the early construction of an ultraintelligent machine." That might have been appropriate during the Cold War. Today I suspect that the word survival should be replaced by extinction.

I.J. Good (1998)

Comment author: lukeprog 19 May 2013 02:23:29AM 0 points [-]

People tend to feel that intelligence is a good thing, even if they are unable to say exactly what it is. But its presence in a machine might not be an unmitigated blessing... there are purposes for which we might not want machines to be intelligent.

Peter Kugel

Comment author: lukeprog 29 December 2012 06:49:18PM 0 points [-]

Being highly capable and accomplished in AI doesn't inoculate you from naiveté about its perils.

James Barrat

Comment author: lukeprog 03 October 2012 03:00:09AM *  0 points [-]

The greatest task before civilisation at present is to make machines what they ought to be, the slaves, instead of the masters of men.

Havelock Ellis, 1922

Comment author: lukeprog 04 May 2012 01:49:34AM 0 points [-]

The singularity literature perhaps does a service by highlighting the ways in which AI developments could produce new degrees of intelligence and operational autonomy in AI agents—especially as current AI agents play an increasingly important role in the design of future AI agents. Bearing in mind the far-reaching implications of such possible future scenarios, the urgency of work in [machine ethics] to ensure the emergence of ‘friendly AI’ (Yudkowsky 2001, 2008) is all the more important to underline.

Steve Torrance (2012)

Comment author: lukeprog 25 April 2012 08:38:42PM 0 points [-]

The establishment of contact with any superior community would obviously be of unparalleled importance for the human race — socially, scientifically, and culturally. It could lead either to our rapidly attaining superior status ourselves, or it could lead to our extinction. It probably depends upon how well we conceal, or overcome, our own grave failings as social beings.

Cade (1966), p. 228

Comment author: lukeprog 21 February 2012 08:29:08AM 0 points [-]

Shorter I.J. Good intelligence explosion quote:

once an intelligent machine is built, it can be used for the design of an even better machine and so on; so that the invention of the first intelligent machine is the last invention that man need make.

Source.

Comment author: lukeprog 28 December 2011 02:39:42AM 0 points [-]

Technological progress is like an axe in the hands of a pathological criminal.

Albert Einstein

Comment author: lukeprog 09 April 2014 02:22:25AM 0 points [-]

Looking more closely, this much-duplicated "quote" seems to be a paraphrase of something he wrote in a letter to Heinrich Zangger in the context of the first world war: "Our entire much-praised technological progress, and civilization generally, could be compared to an axe in the hand of a pathological criminal."

I do think about the AGI problem in much this way, though. E.g. in Just Babies, Paul Bloom wrote:

Families survive the Terrible Twos because toddlers aren’t strong enough to kill with their hands and aren’t capable of using lethal weapons. A two-year-old with the physical capacities of an adult would be terrifying.

I think our current civilization is like a two-year-old. The reason we haven't destroyed ourselves yet, but rather just bit some fingers and ruined some carpets, is that we didn't have any civilization-lethal weapons. We've had nuclear weapons for a few decades now and haven't blown ourselves up yet, but there were some close calls. In the latter half of the 21st century we'll acquire some additional means of destroying our civilization. Will we have grown up by then? I doubt it. Civilizational maturity progresses more slowly than technological power.