Comment author: pjeby 01 November 2010 06:14:25PM *  30 points [-]

No. It's really complex, and nobody in-the-know had time to really spell it out like that.

Actually, you can spell out the argument very briefly. Most people, however, will immediately reject one or more of the premises due to cognitive biases that are hard to overcome.

A brief summary:

  • Any AI that's at least as smart as a human and is capable of self-improving will improve itself if that will help its goals

  • The preceding statement applies recursively: the newly-improved AI, if it can improve itself, and it expects that such improvement will help its goals, will continue to do so.

  • At minimum, this means any AI as smart as a human, can be expected to become MUCH smarter than human beings -- probably smarter than all of the smartest minds the entire human race has ever produced, combined, without even breaking a sweat.

INTERLUDE: This point, by the way, is where people's intuition usually begins rebelling, either due to our brains' excessive confidence in themselves, or because we've seen too many stories in which some indefinable "human" characteristic somehow remains superior to the cold, unfeeling, uncreative Machine. In other words, we don't understand that our intuition and creativity are actually cheap hacks to work around our relatively low processing power. Dumb brute force is already "smarter" than human beings in any narrow domain (see Deep Blue, evolutionary algorithms for antenna design, Emily Howell, etc.), and a human-level AGI can reasonably be assumed capable of programming up narrow-domain brute-forcers for any given narrow domain.

And it doesn't even have to be that narrow or brute: it could build specialized Eurisko-like solvers, and manage them at least as intelligently as Lenat did to win the Traveller tournaments.

In short, human beings have a vastly inflated opinion of themselves, relative to AI. An AI only has to be as smart as a good human programmer (while running at a higher clock speed than a human) and have access to lots of raw computing resources, in order to be capable of out-thinking the best human beings.

And that's only one possible way to get to ridiculously superhuman intelligence levels... and it doesn't require superhuman insights for an AI to achieve, just human-level intelligence and lots of processing power.

The people who reject the FAI argument are the people who, for whatever reason, can't get themselves to believe that a machine can go from being as smart as a human, to massively smarter in a short amount of time, or who can't accept the logical consequences of combining that idea with a few additional premises, like:

  • It's hard to predict the behavior of something smarter than you

  • Actually, it's hard to predict the behavior of something different than you: human beings do very badly at guessing what other people are thinking, intending, or are capable of doing, despite the fact that we're incredibly similar to each other.

  • AIs, however, will be much smarter than humans, and therefore very "different", even if they are otherwise exact replicas of humans (e.g. "ems").

  • Greater intelligence can be translated into greater power to manipulate the physical world, through a variety of possible means. Manipulating humans to do your bidding, coming up with new technologies, or just being more efficient at resource exploitation... or something we haven't thought of. (Note that pointing out weaknesses in individual pathways here doesn't kill the argument: there is more than one pathway, so you'd need a general reason why more intelligence doesn't ever equal more power. Humans seem like a counterexample to any such general reason, though.)

  • You can't control what you can't predict, and what you can't control is potentially dangerous. If there's something you can't control, and it's vastly more powerful than you, you'd better make sure it gives a damn about you. Ants get stepped on, because most of us don't care very much about ants.

Note, by the way, that this means that indifference alone is deadly. An AI doesn't have to want to kill us, it just has to be too busy thinking about something else to notice when it tramples us underfoot.

This is another inferential step that is dreadfully counterintuitive: it seems to our brains that of course an AI would notice, of course it would care... what's more important than human beings, after all?

But that happens only because our brains are projecting themselves onto the AI -- seeing the AI thought process as though it were a human. Yet, the AI only cares about what it's programmed to care about, explicitly or implicitly. Humans, OTOH, care about a ton of individual different things (the LW "a thousand shards of desire" concept), which we like to think can be summarized in a few grand principles.

But being able to summarize the principles is not the same thing as making the individual cares ("shards") be derivable from the general principle. That would be like saying that you could take Aristotle's list of what great drama should be, and then throw it into a computer and have the computer write a bunch of plays that people would like!

To put it another way, the sort of principles we like to use to summarize our thousand shards are just placeholders and organizers for our mental categories -- they are not the actual things we care about... and unless we put those actual things into an AI, we will end up with an alien superbeing that may inadvertently wipe out things we care about, while it's busy trying to do whatever else we told it to do... as indifferently as we step on bugs when we're busy with something more important to us.

So, to summarize: the arguments are not that complex. What's complex is getting people past the part where their intuition reflexively rejects both the premises and the conclusions, and tells their logical brains to make up reasons to justify the rejection, post hoc, or to look for details to poke holes in, so that they can avoid looking at the overall thrust of the argument.

While my summation here of the anti-Foom position is somewhat unkindly phrased, I have to assume that it is the truth, because none of the anti-Foomers ever seem to actually address any of the pro-Foomer arguments or premises. AFAICT (and I am not associated with SIAI in any way, btw; I just wandered in here off the internet, and was around for the earliest Foom debates on OvercomingBias.com), the anti-Foom arguments always seem to consist of finding ways to never really look too closely at the pro-Foom arguments at all, and instead making up alternative arguments that can be dismissed or made fun of, or arguing that things shouldn't be that way, and therefore the premises should be changed.

That was a pretty big convincer for me that the pro-Foom argument was worth looking more into, as the anti-Foom arguments seem to generally boil down to "la la la I can't hear you".

Comment author: MatthewB 02 November 2010 06:11:30AM *  9 points [-]

From Ben Goertzel,

And I think that theory is going to emerge after we've experimented with some AGI systems that are fairly advanced, yet well below the "smart computer scientist" level.

At the second Singularity Summit, I heard this same sentiment from Ben, Robin Hanson, and from Rodney Brooks, and from Cynthia Breazeal (at the Third Singularity Summit), and from Ron Arkin (at the "Human Being in an Inhuman Age" Conference at Bard College on Oct 22nd ¹), and from almost every professor I have had (or will have for the next two years).

It was a combination of Ben, Robin, and several professors at Berkeley and UCSD that led me to the conclusion that we probably won't know how dangerous an AGI is until we have put a lot more time into building AI systems that will reveal more about the problems they attempt to address. (Incidentally, I have heard more than one person in the last year use the term CGI, Constructed General Intelligence, instead of AI/AGI. They prefer it because the word "artificial" seems to imply that the intelligence is not real, while "constructed" is far more accurate.)

Sort of like how the Wright Brothers didn't really learn how they needed to approach building an airplane until they began to build airplanes. The final Wright Flyer didn't just leap out of a box. It is not likely that an AI will just leap out of a box either (whether it is being built at a huge Corporate or University lab, or in someone's home lab).

Also, it is possible that AI may come in the form of a sub-symbolic system which is so opaque that even it won't be able to easily tell what can or cannot be optimized.

Ron Arkin (From Georgia Tech) discussed this briefly at the conference at Bard College I mentioned.

MB

¹ I should really write up something about that conference here. I was shocked at how many highly educated people so completely missed the point, and became caught up in something that makes The Scary Idea seem positively benign in comparison.

Comment author: Bgoertzel 02 November 2010 01:30:38AM 12 points [-]

I agree that a write-up of SIAI's argument for the Scary Idea, in the manner you describe, would be quite interesting to see.

However, I strongly suspect that when the argument is laid out formally, what we'll find is that

-- given our current knowledge about the pdf's of the premises in the argument, the pdf on the conclusion is verrrrrrry broad, i.e. we can hardly conclude anything with any real confidence ...

So, I think that the formalization will lead to the conclusion that

-- "we can NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly lead to bad consequences for humanity"

-- "we can also NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly NOT lead to bad consequences for humanity"

I.e., I strongly suspect the formalization

-- will NOT support the Scary Idea

-- will also not support complacency about AGI safety and AGI existential risk

I think the conclusion of the formalization exercise, if it's conducted, will basically be to reaffirm common sense, rather than to bolster extreme views like the Scary Idea....

-- Ben Goertzel

Comment author: MatthewB 02 November 2010 05:34:43AM 2 points [-]

I agree.

I doubt you would remember this, but we talked about this at the Meet and Greet at the Singularity Summit a few months ago (in addition to CBGBs and Punk Rock and Skaters).

James Hughes mentioned you as well, at a conference in NY where we discussed this very issue.

One thing that you mentioned at the Summit (well, in conversation) was that The Scary Idea tends to cause some paranoia among people who might otherwise be contributing more to the development of AI, since it slows funding that could be going to AI (of course, you also seemed pretty hostile to brain emulation too).

Comment author: NancyLebovitz 31 October 2010 03:28:49PM 5 points [-]

He believes that the SIAI tends to be overly dramatic about Hard Takeoff scenarios at the expense of more important ethical problems...

What are the more important ethical problems?

Comment author: MatthewB 02 November 2010 05:26:23AM -1 points [-]

Well... That is hard to communicate now, as I will need to extricate the problems from the specifics that were communicated to me (in confidence)...

Let's see...

1) That there is a dangerous political movement in the USA that seems to prefer revealed knowledge to scientific understanding and investigation.

2) Poverty.

3) Education.

4) Hunger (I myself suffer from this problem: I am disabled, on a fixed income, and while I am in school again and doing quite well, I still have to make choices sometimes between necessities... and I am quite well off compared to some I know).

5) The lack of political dialog and the preference for ideological certitude over pragmatic solutions and realistic uncertainty.

6) The great amount of crime among the white-collar crowd that goes both unchecked, and unpunished when it is exposed (Madoff was a fluke in that regard).

7) The various "Wars" that we declare on things (Drugs, Terrorism, etc.). "War" is a poor paradigm to use, and it leads to more damage than it corrects (especially in the two instances I cited).

8) The real wars that are happening right now (and not just those waged by the USA and its allies).

Some of these were explicitly discussed.

Some will eventually be resolved, but that doesn't mean that they should be ignored until that time. That would be akin to seeing a man dying of starvation, while one has the capacity to feed him, yet thinking "Oh, he'll get some food eventually."

And, some may just be perennial problems with which we will have to deal for some time to come.

Comment author: MatthewB 31 October 2010 05:13:16AM 2 points [-]

At the Singularity Summit's "Meet and Greet", I spoke with both Ben Goertzel and Eliezer Yudkowsky (among others) about this specific problem.

I am FAR more in line with Ben's position than with Eliezer's (probably because both Ben and I are either working or studying directly on the "how to do" aspect of AI, rather than just concocting philosophical conundrums for AI, such as Eliezer's "Paperclip Maximizer" scenario, which I find highly dubious).

AI isn't going to spring fully formed out of some box of parts. It may be an emergent property of something, but if we worry about all of the possible places from which it could emerge, then we might as well worry about things like ghosts and goblins that we cannot see (and haven't seen) popping up suddenly as a threat.

At Bard College on the Weekend of October the 22nd, I attended a Conference where this topic was discussed a bit. I spoke to James Hughes, head of the IEET (Institute for the Ethics of Emerging Technologies) about this problem as well. He believes that the SIAI tends to be overly dramatic about Hard Takeoff scenarios at the expense of more important ethical problems... And, he and I also discussed the specific problems of "The Scary Idea" that tend to ignore the gradual progress in understanding human values and cognition, and how these are being incorporated into AI as we move toward the creation of a Constructed Intelligence (CI as opposed to AI) that is equivalent to human intelligence.

Also, WRT this comment:

For another example, you can't train tigers to care about their handlers. No matter how much time you spend with them and care for them, they sometimes bite off arms just because they are hungry. I understand most big cats are like this.

You CAN train (though "training" is not quite the right word for it) tigers and other big cats to care about their handlers. It requires a type of training and teaching that goes on from birth, but there are plenty of big cats who don't attack their owners or handlers simply because they are hungry, or for some other similar reason. They might accidentally injure a handler because they do not have the capacity to understand the fragility of a human being, but this is a lack of cognitive capacity, not a case of a higher intelligence accidentally damaging something fragile... A more intelligent mind would be capable of understanding things like physical frailty and taking steps to avoid damaging a more fragile body. But the point still stands: big cats can and do form deep emotional bonds with humans, and will even go as far as to try to protect and defend those humans (which can sometimes lead to injury of the human in its own right).

And, I know this from having worked with a few big cats, and having a sister who is a senior zookeeper at the Houston Zoo (and head curator of the SW US Zoo's African Expedition) who works with big cats ALL the time.

Back to the point about AI.

It is going to be next to impossible to solve the problem of "Friendly AI" without first creating AI systems that have social cognitive capacities. Just sitting around "Thinking" about it isn't likely to be very helpful in resolving the problem.

That would be what Bertrand Russell calls "Gorging upon the Stew of every conceivable idea."

Comment author: gwern 06 October 2010 08:56:55PM 1 point [-]

Perhaps it would be better to wait until you get it and then post about how you got it, than to comment that you don't get it. That would be much more interesting to read.

Comment author: MatthewB 16 October 2010 11:33:49AM 0 points [-]

But, it would also not have the function of letting others who may struggle with certain concepts know that they are not alone in struggling.

Comment author: MatthewB 16 October 2010 11:19:23AM 1 point [-]

That Candidate 2 (admitting that one is wrong is a win for an argument) is one of my oldest bits of helpful knowledge.

If one admits that one is wrong, one instantly ceases to be wrong (or at least ceases to be wrong in the way that one was wrong; it could still be the case that the other person in the argument is also wrong, but for the purposes of this point we are assuming they are "correct"), because one is then in possession of more accurate (i.e. "right") information/knowledge.

Comment author: dclayh 09 August 2010 06:12:33AM *  3 points [-]

Okay, the people who promote a certain cluster of ideas centering on skepticism, rationalism, atheism and libertarianism, in the U.S. and culturally connected nations. (Which, yes, is quite close to "people I admire". I wasn't trying to claim it was especially surprising, although I am often surprised at just how tight it is.) In particular:

  • Eliezer
  • Robin Hanson
  • Steve Landsburg
  • Peter Thiel
  • Patri Friedman
  • James Randi
  • Penn Jillette (and Teller)
  • Adam Savage & Jamie Hyneman
  • Trey Parker & Matt Stone
  • Dawkins
  • Hitchens

and probably some others I can't think of right now.

Comment author: MatthewB 09 August 2010 03:19:26PM 4 points [-]

How about Eliezer, Peter Thiel, Peter Diamandis, done... I know that Peter Diamandis would NOT be turned away by Hitchens... Now, it is just a matter of getting ahold of a few millionaire/billionaire types...

Comment author: MatthewB 09 August 2010 03:17:31PM 7 points [-]

I have had the EXACT same idea!

However, my plan was to contact his publicist through Alcor or one of the other Cryonics companies (all one of them I think)

Comment author: thomblake 09 August 2010 02:49:07PM 0 points [-]

Knowing its source code doesn't help; it has to run the code in order to know what result it gets.

This is false for some algorithms, and so I imagine it would be false for the entirety of the AI's source code. For example (ANSI C):

int i;
for (i = 0; i < 5; i++)
    ;                      /* empty body: the loop only increments i */
/* here i == 5, knowable by inspection alone */

I know that i is equal to 5 after this code is executed, and I know that without executing the code in any sense.

Comment author: MatthewB 09 August 2010 03:14:16PM 0 points [-]

Now, I am not certain about this, but we have to examine that code before we know its outcome.

While this isn't "Running" the code in the traditional sense of computation as we are familiar with it today, it does seem that the code is sort of run by our brains as a simulation as we scan it.

As sort of meta-process if you will...

I could be so wrong about that though... eh...

Also, that code is useless really, except maybe as a wait function... It doesn't really do anything (Not sure why Unknowns gets voted up in the first post above, and down below)...

Also, leaping from some code to the Entirety of an AI's source code seems to be a rather large leap.

Comment author: Jack 25 April 2010 07:00:46AM *  5 points [-]

It is unfortunate for God that Satan (Lucifer) had such a reasonable request "Gee, Jehovah, It would certainly be nice if you let us try out that chair every once in a while." Basically, Lucifer's crime was one that is only a crime in a state where the King is seen as having divine authority to rule, and all else is seen as beneath such things (thus reflecting the Divine Order)

To be fair this stuff isn't Christian mythology in the way that Adam and Eve, or Loaves and Fishes is Christian mythology. It's just religious fiction.

...

Unless someone has declared John Milton a prophet and possessor of divine revelation. Which would be hilarious.

Comment author: MatthewB 25 April 2010 07:37:04AM 1 point [-]

It isn't stuff that made it into the modern canon, but in the early Christian Church, myth of this type appeared all over the place, drawn from Jewish sources, in attempts to integrate it into various Christian sects.

To be fair this stuff isn't Christian mythology in the way that Adam and Eve, or Loaves and Fishes is Christian mythology. It's just religious fiction.

Isn't it ALL just religious fiction?
