MatthewB comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM




Comment author: MatthewB 31 October 2010 05:13:16AM 2 points [-]

At the Singularity Summit's "Meet and Greet", I spoke with both Ben Goertzel and Eliezer Yudkowsky (among others) about this specific problem.

I am FAR more in line with Ben's position than with Eliezer's (probably because both Ben and I are working or studying directly on the "how to do" aspect of AI, rather than just concocting philosophical conundrums for AI, such as Eliezer's "Paperclip Maximizer" scenario, which I find highly dubious).

AI isn't going to spring fully formed out of some box of parts. It may be an emergent property of something, but if we worry about all of the possible places from which it could emerge, then we might as well worry about things like ghosts and goblins that we cannot see (and haven't seen) popping up suddenly as a threat.

At Bard College on the weekend of October 22nd, I attended a conference where this topic was discussed a bit. I also spoke to James Hughes, head of the IEET (Institute for Ethics and Emerging Technologies), about this problem. He believes that the SIAI tends to be overly dramatic about hard-takeoff scenarios at the expense of more important ethical problems. He and I also discussed the specific problems of "The Scary Idea," which tends to ignore the gradual progress in understanding human values and cognition, and how these are being incorporated into AI as we move toward the creation of a Constructed Intelligence (CI, as opposed to AI) that is equivalent to human intelligence.

Also, WRT this comment:

For another example, you can't train tigers to care about their handlers. No matter how much time you spend with them and care for them, they sometimes bite off arms just because they are hungry. I understand most big cats are like this.

You CAN train tigers and other big cats to care about their handlers (though "training" is not quite the right word for it). It requires a kind of teaching that goes on from birth, but there are plenty of big cats who don't attack their owners or handlers simply because they are hungry, or for some other similar reason. They might accidentally injure a handler because they do not have the capacity to understand the fragility of a human being, but that is a lack of cognitive capacity, not a case of a higher intelligence accidentally damaging something fragile. A more intelligent mind would be capable of understanding things like physical frailty and taking steps to avoid damaging a more fragile body. But the point still stands: big cats can and do form deep emotional bonds with humans, and will even go as far as to try to protect and defend those humans (which can sometimes lead to injury of the human in its own right).

And, I know this from having worked with a few big cats, and having a sister who is a senior zookeeper at the Houston Zoo (and head curator of the SW US Zoo's African Expedition) who works with big cats ALL the time.

Back to the point about AI.

It is going to be next to impossible to solve the problem of "Friendly AI" without first creating AI systems that have social cognitive capacities. Just sitting around "Thinking" about it isn't likely to be very helpful in resolving the problem.

That would be what Bertrand Russell calls "Gorging upon the Stew of every conceivable idea."

Comment author: xamdam 03 November 2010 11:41:35PM 1 point [-]

It is going to be next to impossible to solve the problem of "Friendly AI" without first creating AI systems that have social cognitive capacities. Just sitting around "Thinking" about it isn't likely to be very helpful in resolving the problem.

I am guessing that this unpacks to "to create an FAI you need some method to create AGI; for the latter we need to create AI systems with social cognitive capabilities (whatever that means - NLP?)". Doing this gets us closer to FAI every day, while "thinking about it" doesn't seem to.

First, are you factually aware that some progress has been made in a decision theory that would give some guarantees about the future AI behavior?

Second, yes, perhaps whatever you're tinkering with is getting closer to an AGI, which is what FAI runs on. It is also getting us closer to an AGI which is not FAI, if the "Thinking" is not done first.

Third, if the big cat analogy did not work for you, try training a komodo dragon.

Comment author: MatthewB 07 November 2010 04:48:34PM -1 points [-]

Yes, that is close to what I am proposing.

No, I am not aware of any facts about progress in decision theory that would give any guarantees of the future behavior of AI. I still think that we need to be far more concerned with people's behaviors in the future than with AI. People are improving systems as well.

As far as the Komodo Dragon, you missed the point of my post, and the Komodo dragon just kinda puts the period on that:

"Gorging upon the stew of..."

Comment author: xamdam 07 November 2010 05:24:25PM 0 points [-]

No, I am not aware of any facts about progress in decision theory

Please take a look here: http://wiki.lesswrong.com/wiki/Decision_theory

As far as the dragon, I was just pointing out that some minds are not trainable, period. And even if training works well for some intelligent species like tigers, it's quite likely that it will not be transferable (eating the trainer: not OK; eating a baby: OK).

Comment author: MatthewB 08 November 2010 05:37:42AM 0 points [-]

Yes, I have read many of the various Less Wrong Wiki entries on the problems surrounding Friendly AI.

Unfortunately, I am in the process of getting an education in computational modeling and neuroscience (I was supposed to have started at UC Berkeley this fall, but budget cuts in the community colleges of CA resulted in the loss of two classes necessary for transfer, so I will have to wait till next fall to start; I am now thinking of going to UCSD, where they have the Institute of Computational Neuroscience (or something like that - it's where Terry Sejnowski teaches), among other things that make it also an excellent choice for what I wish to study), and this sort of precludes being able to focus much on the issues that tend to come up often among many people on Less Wrong (particularly those from the SIAI, whom I feel are myopically focused upon FAI to the detriment of other things).

While I would eventually like to see if it is even possible to build some of these Komodo-Dragon-like superintelligences, I will probably wait until such a time as our native intelligence is a good deal greater than it is now.

This touches upon an issue that I first learned from Ben. The SIAI seems to be putting forth the opinion that AI is going to spring fully formed from someplace, in the same fashion that Athena sprang fully formed (and clothed) from the Head of Zeus.

I just don't see that happening. I don't see any Constructed Intelligence as being something that will spontaneously emerge outside of any possible human control.

I am much more in line with people like Henry Markram, Dharmendra Modha, and Jeff Hawkins, who believe that the types of minds we will be working towards (models of the mammalian brain) will trend toward Constructed Intelligences (CI, as opposed to AI) that tend to naturally prefer our company, even if we are a bit "dull witted" in comparison.

I don't so much buy the "Ant/Amoeba to Human" comparison, simply because mammals (almost all of them) tend to have some qualities that ants and amoebas don't: they tend to be cute and fuzzy, and to like other cute/fuzzy things. A CI modeled after a mammalian intelligence will probably share that trait. It doesn't mean it is necessarily so, but it does seem more likely than not.

And, considering it will be my job to design computational systems that model cognitive architectures, I would prefer to work toward that end until such a time as it is shown that ANY work toward that end is dangerous enough not to do it.