AGI Quotes
Similar to the monthly Rationality Quotes threads, this is a thread for memorable quotes about Artificial General Intelligence.
- Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote comments/posts on LW/OB.
Comments (88)
Vernor Vinge
"Sorry Arthur, but I'd guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon." - Dan Clemmensen, SL4
Source: http://www.sl4.org/archive/0203/3081.html
This is one of the earliest quotes I read that made it click that nothing I could do with my life would have greater impact than pursuing superintelligence.
Samuel Butler (1872)
Arthur C. Clarke (1968)
Eliezer Yudkowsky (2008)
EY changed it in the published version to:
"The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."
My favorite paraphrase is my own:
"The AI does not hate you, nor does it love you, but you are made of atoms it can use for something else."
I like the rhythm of this one best. It can be sung.
Let us walk together to the kirk, and all together pray, while each to our great AI bends — old men, and babes, and loving friends, and youths and maidens gay! :D
Whether the AI loves -- or hates, you cannot fathom, but plans it has indeed for your atoms.
Alan Turing (1951)
"In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that. To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond "audacity" as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority." - Eliezer Yudkowsky
Vernor Vinge
... in original.
Good (1965)
The use of "unquestionably" in this quote has always irked me a bit, despite the fact that I find the general concept reasonable.
Samuel Butler (1863)
Edsger Dijkstra (1984)
I don't understand this.
It is seemingly easy to get stuck in arguments over whether or not machines can "actually" think. It is sufficient to assess the effects or outcomes of the phenomenon in question. By sidestepping the question of what, exactly, it means to "think", we can avoid arguing over definitions, yet lose nothing of our ability to model the world.
Does a submarine swim? The purpose of swimming is to propel oneself through the water. A nuclear powered submarine can propel itself through the oceans at full speed for months at a time. It achieves the purpose of swimming, and does so rather better than a fish, or a human.
If the purpose of thinking is isomorphic to:
Model the world in order to formulate plans for executing actions which implement goals.
then, if a machine can achieve the above, we can say it achieves the purpose of thinking, akin to how a submarine successfully achieves the purpose of swimming. Discussion of whether the machine really thinks is then superfluous.
It is a similar idea as that proposed by Turing. If you have submarines, and they move through the water and do exactly what you want them to do, then it is rather pointless to ask if what they're doing is "really swimming". And the arguments on both sides of the "swimming" dispute will make reference to fish.
Norbert Wiener (1949)
I.J. Good (1970)
Garet Garrett (1926)
Konstantin Kakaes
T.M. Georges (2004)
Where did he say this? A search turns up only this page. Thanks!
In Digital Soul, if I recall correctly.
Samuel Butler (1880)
Theodore Roosevelt
Michael Anissimov
... I wonder how "alone" I am in the notion that AGI causing human extinction may not be a net negative, in that so long as it is a sentient product of human endeavors it is essentially a "continuation" of humanity.
Two problems: An obnoxious optimizing process isn't necessarily sentient. And how much would you really want such a continuation if it, say, tried to put everything in its future lightcone into little smiley faces?
If it helps ask yourself how you feel about a human empire that expands through its lightcone preemptively destroying every single alien species before they can do anything with a motto of "In the Prisoners' Dilemma, Humanity Defects!" That sounds pretty bad doesn't it? Now note that the AGI expansion is probably worse than that.
Hence my caveat.
I find the plausibility of a sentient AGI constrained to such a value to be vanishingly small.
Not especially, no.
It is one example of what could happen, smileys are but a specific example. (Moreover, this is an example which is disturbingly close to some actual proposals). The size of mindspace is probably large. The size of mindspace that does something approximating what we want is probably a small portion of that.
And the empire systematically wipes out human minorities and suppresses new scientific discoveries because they might disrupt stability. As a result, and to help prevent problems, everyone but a tiny elite is denied any form of life-extension technology. Even the elite has their lifespan extended only to about 130, to prevent anyone from accumulating too much power and threatening the standing oligarchy. Similarly, new ideas for businesses are ruthlessly suppressed. Most people will have less mobility in this setting than an American living today. Planets will be ruthlessly terraformed and then have colonists forcibly shipped there to help start the new groups. Most people have the equivalent of reality TV shows and the hope of winning the lottery to entertain themselves. Most of the population is so ignorant that they don't even realize that humans originally came from a single planet.
If this isn't clear, I'm trying to make this about as dystopian as I plausibly can. If I haven't succeeded at that, please imagine what you would think of as a terrible dystopia and apply that. If really necessary, imagine some puppy and kitten torturing too.
"There are lots of people who think that if they can just get enough of something, a mind will magically emerge. Facts, simulated neurons, GA trials, proposition evaluations/second, raw CPU power, whatever. It's an impressively idiotic combination of mental laziness and wishful thinking." - Michael Wilson
Why we're doomed reason #692...
Here is Hugo de Garis, in the opening of The Artilect War:
I.J. Good (1970)
George Dyson (1998)
Now we just need machines on our side and we'll have a cute little love-triangle.
So then, even when we have an FAI, all three parties will be unhappy?
Stephen Hawking
Margaret Boden, Artificial Intelligence and Natural Man, p. 463
Good (1959)
Minsky (1984)
Kevin Warwick (1998)
AAAI PRESIDENTIAL PANEL ON LONG-TERM AI FUTURES: 2008-2009 STUDY, Interim Report from the Panel Chairs
Bill Gates
I can only assume he wasn't actually talking about an AGI-level breakthrough. I don't think I'd expect him to underestimate the impact or value of AGI that severely.
This is the man who thought Windows Me was a good idea.
Actually "10 Microsofts" may be an overestimate :-) Microsoft's success in its mission to make personal computers ubiquitous seems important to making AGI possible.
Julius Lukasiewicz (1974)
You have a lot of quotes to share.
Keith Stanovich
-Paul F. Christiano
I am the psychotherapist. Please, describe your problems. Each time you are finished talking, type RET twice.
I feel so tired
Why do you say?
I fall asleep all the time
Is it because of your plans that you say?
Yes, I work too hard
Maybe your life has something to do with this.
Indeed!
Can you elaborate on that?
Well, I work so I can improve my life
What makes you believe?
I read it somewhere
Is it because of the people you hang around with that you say?
I bet it is because of that
I would appreciate it if you would continue.
[...]
my secretary will send you a bill.
-- emacs, doctor mode, the built-in psychotherapist
James Barrat
-Yvain
Cade (1966), p. 225
Cade (1966), p. 220
Page 223 includes this drawing of self-reproducing machines.
Cade, Other Worlds Than Ours (1966), pp. 214-219
Cade, Other Worlds Than Ours (1966), pp. 213-214
Samuel Butler, 1872
(My own answer to Butler's question is "No" for the reason Moravec gave in 1988.)
Kevin Warwick, March of the Machines (1997)
Ray Kurzweil, The Age of Spiritual Machines, p. 3
-Paul Almond
Wozniak declared to his audience that "we're already creating the superior beings, I think we lost the battle to the machines long ago."
http://au.ibtimes.com/articles/157802/20110606/steve-wozniak-humans-will-soon-surrender-superiority-to-machines.htm
Waldrop (1987)
Al-Rodhan (2011), pp. 242-243, notices the stable self-modification problem.
From Michie (1982):
Woody Bledsoe, quoted in Machines Who Think.
Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence, p. 341
I.J. Good (1998)
Peter Kugel
James Barrat
Havelock Ellis, 1922
Steve Torrance (2012)
Cade (1966), p. 228
Shorter I.J. Good intelligence explosion quote:
Source.
Albert Einstein
Looking more closely, this much-duplicated "quote" seems to be a paraphrase of something he wrote in a letter to Heinrich Zangger in the context of the First World War: "Our entire much-praised technological progress, and civilization generally, could be compared to an axe in the hand of a pathological criminal."
I do think about the AGI problem in much this way, though. E.g. in Just Babies, Paul Bloom wrote:
I think our current civilization is like a two-year-old. The reason we haven't destroyed ourselves yet, but rather just bitten some fingers and ruined some carpets, is that until recently we didn't have any civilization-lethal weapons. We've had nuclear weapons for a few decades now and haven't blown ourselves up yet, but there were some close calls. In the latter half of the 21st century we'll acquire some additional means of destroying our civilization. Will we have grown up by then? I doubt it. Civilizational maturity progresses more slowly than technological power.