Comment author: xamdam 07 November 2010 10:42:42PM 7 points

Great post, thanks.

I try to remember my heroes for the specific heroic act or trait, e.g. Darwin's conscientious collection of disconfirming evidence.

Comment author: MatthewB 07 November 2010 04:48:34PM -1 points

Yes, that is close to what I am proposing.

No, I am not aware of any facts about progress in decision theory that would give any guarantees about the future behavior of AI. I still think that we need to be far more concerned with people's behavior in the future than with AI. People are improving systems as well.

As for the Komodo dragon, you missed the point of my post, and the dragon just kinda puts the period on that:

"Gorging upon the stew of..."

Comment author: xamdam 07 November 2010 05:24:25PM 0 points

No, I am not aware of any facts about progress in decision theory

Please take a look here: http://wiki.lesswrong.com/wiki/Decision_theory

As for the dragon, I was just pointing out that some minds are not trainable, period. And even if training works well for some intelligent species like tigers, it's quite likely that it will not be transferable (eating trainer, not ok; eating a baby, ok).

Comment author: cousin_it 14 August 2010 01:54:22PM 0 points

Don't know about you, but I anticipate acausal control, to a degree. I have a draft post titled "Taking UDT Seriously" featuring such shining examples as: if a bully attacks you, you should try to do maximum damage while disregarding any harm to yourself, because it's good for you to be predicted as such a person. UDT is seriously scary when applied to daily life, even without superintelligences.
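A toy version of the expected-value calculation behind that example, with entirely made-up payoffs and attack probabilities (none of this is from the draft itself):

```python
# Toy expected-utility comparison for the bully example.
# All payoffs and probabilities below are made up for illustration.

def expected_utility(p_attacked: float, u_peace: float, u_conflict: float) -> float:
    """Expected utility as a function of how often the agent gets attacked."""
    return (1 - p_attacked) * u_peace + p_attacked * u_conflict

# An agent predicted to do maximum damage deters most bullies:
# it is attacked rarely, but each fight is very costly for it.
retaliator = expected_utility(p_attacked=0.05, u_peace=0.0, u_conflict=-10.0)

# An agent predicted to submit is attacked often, at a lower per-fight cost.
submitter = expected_utility(p_attacked=0.50, u_peace=0.0, u_conflict=-3.0)

print(f"retaliator: {retaliator}")  # -0.5
print(f"submitter:  {submitter}")   # -1.5
```

Under these (made-up) numbers the retaliator comes out ahead; the point is only that the prediction channel, not the fight itself, does the work.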

Comment author: xamdam 07 November 2010 04:17:21PM 1 point

Taking UDT Seriously

Can you post this in the discussion area?

Comment author: cousin_it 05 November 2010 11:14:13AM *  10 points

I'm not Eliezer, but will try to guess what he'd have answered. The awesome powers of your mind only feel like they're about "symbols", because symbols are available to the surface layer of your mind, while most of the real (difficult) processing is hidden. Relevant posts: Detached Lever Fallacy, Words as Mental Paintbrush Handles.

Comment author: xamdam 05 November 2010 08:48:02PM *  1 point

Thanks.

The posts (at least the second one) seem to suggest that the role of symbolic reasoning is overstated and that at least some reasoning is clearly non-symbolic (e.g. visual).

In this context the question is whether symbolic processing (there is definitely some: math, for example) gave pre-humans the boost that allowed the huge increase in computing power, so I am not seeing the contradiction.

Comment author: xamdam 05 November 2010 05:17:38PM 1 point

Would being seen be an advantage for them? (Answering a question with a question, still...)

Comment author: xamdam 05 November 2010 03:37:51PM 0 points

Freud once said that Jung was a great psychologist, until he became a prophet.

Comment author: Eliezer_Yudkowsky 04 November 2010 09:10:33PM 22 points

And hardware overhang (faster computers developed before general cognitive algorithms, first AGI taking over all the supercomputers on the Internet) and fast infrastructure (molecular nanotechnology) and many other inconvenient ideas.

Also if you strip away the talk about "imbalance" what it works out to is that there's a self-contained functioning creature, the chimpanzee, and natural selection burps into it a percentage more complexity and quadruple the computing power, and it makes a huge jump in capability. Nothing is offered to support the assertion that this is the only such jump which exists, except the bare assertion itself. Chimpanzees were not "lopsided", they were complete packages designed for an environment; it turned out there were things that could be done which created a huge increase in optimization power (calling this "symbolic processing" assumes a particular theory of mind, and I think it is mistaken) and perhaps there are yet more things like that, such as, oh, say, self-modification of code.

Comment author: xamdam 04 November 2010 10:07:41PM 9 points

calling this "symbolic processing" assumes a particular theory of mind, and I think it is mistaken

Interesting. Can you elaborate or link to something?

Comment author: MatthewB 31 October 2010 05:13:16AM 2 points

At the Singularity Summit's "Meet and Greet," I spoke with both Ben Goertzel and Eliezer Yudkowsky (among others) about this specific problem.

I am FAR more in line with Ben's position than with Eliezer's (probably because both Ben and I are either working or studying directly on the "how to do" aspect of AI, rather than just concocting philosophical conundrums for AI, such as Eliezer's "Paperclip Maximizer" scenario, which I find highly dubious).

AI isn't going to spring fully formed out of some box of parts. It may be an emergent property of something, but if we worry about all of the possible places from which it could emerge, then we might as well worry about things like ghosts and goblins that we cannot see (and haven't seen) popping up suddenly as a threat.

At Bard College, on the weekend of October 22nd, I attended a conference where this topic was discussed a bit. I spoke to James Hughes, head of the IEET (Institute for Ethics and Emerging Technologies), about this problem as well. He believes that the SIAI tends to be overly dramatic about Hard Takeoff scenarios at the expense of more important ethical problems... And he and I also discussed the specific problems with "The Scary Idea," which tends to ignore the gradual progress in understanding human values and cognition, and how these are being incorporated into AI as we move toward the creation of a Constructed Intelligence (CI, as opposed to AI) that is equivalent to human intelligence.

Also, WRT this comment:

For another example, you can't train tigers to care about their handlers. No matter how much time you spend with them and care for them, they sometimes bite off arms just because they are hungry. I understand most big cats are like this.

You CAN train tigers and other big cats to care about their handlers (though "training" is not quite the right word for it). It requires a kind of teaching that goes on from birth, but there are plenty of big cats who don't attack their owners or handlers simply because they are hungry, or for some other similar reason. They might accidentally injure a handler because they do not have the capacity to understand the fragility of a human being, but that is a lack of cognitive capacity, not a case of a higher intelligence accidentally damaging something fragile. A more intelligent mind would be capable of understanding things like physical frailty and of taking steps to avoid damaging a more fragile body. But the point still stands: big cats can and do form deep emotional bonds with humans, and will even go as far as trying to protect and defend those humans (which can sometimes lead to injury of the human in its own right).

And I know this from having worked with a few big cats myself, and from having a sister who is a senior zookeeper at the Houston Zoo (and head curator of the SW US Zoo's African Expedition) who works with big cats ALL the time.

Back to the point about AI.

It is going to be next to impossible to solve the problem of "Friendly AI" without first creating AI systems that have social cognitive capacities. Just sitting around "Thinking" about it isn't likely to be very helpful in resolving the problem.

That would be what Bertrand Russell calls "Gorging upon the Stew of every conceivable idea."

Comment author: xamdam 03 November 2010 11:41:35PM 1 point

It is going to be next to impossible to solve the problem of "Friendly AI" without first creating AI systems that have social cognitive capacities. Just sitting around "Thinking" about it isn't likely to be very helpful in resolving the problem.

I am guessing that this unpacks to "to create an FAI you need some method to create AGI. For the latter we need to create AI systems with social cognitive capabilities (whatever that means - NLP?)". Doing this gets us closer to FAI every day, while "thinking about it" doesn't seem to.

First, are you factually aware that some progress has been made in decision theory that would give some guarantees about future AI behavior?

Second, yes, perhaps whatever you're tinkering with is getting us closer to an AGI, which is what FAI runs on. It is also getting us closer to an AGI which is not FAI, if the "Thinking" is not done first.

Third, if the big cat analogy did not work for you, try training a Komodo dragon.

Comment author: timtyler 30 October 2010 05:23:28PM *  4 points

Current practice in AI research seems to be to publish everything and take no safety precautions whatsoever, and that is definitely not good.

Most of the companies involved (e.g. Google, James Harris Simons) publish little or nothing publicly relating to their code in this area - and few know what safeguards they employ. The government security agencies potentially involved (e.g. the NSA) are even more secretive.

Comment author: xamdam 03 November 2010 10:11:43PM 1 point

Simons is an AI researcher? News to me. Clearly his fund uses machine learning, but there is an ocean between that and AGI (besides, plenty of funds use ML too - DE Shaw and many others).

Comment author: Vaniver 03 November 2010 06:29:15AM 2 points

Useful as an example of the difficulty of building consequentialists ("we can't even solve chess, for crying out loud!"), but I don't see it as particularly useful as an explanation of human values.

Although, sifting it further, I think I see the gemstones you may be seeing. The value seems to be in saying "the goal of chess is completely described: be in a position where your enemy cannot prevent you from capturing his king" and "the goal of evolution is completely described: maximize inclusive genetic fitness", and then comparing the subgoals explicitly. Status, say, is the analogue of board position: it only leads to higher genetic fitness on average in some broad way, but it's a cheap and effective heuristic for doing so, just like good board position is a cheap and effective (but not guaranteed!) heuristic for winning at chess.
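To make the parallel concrete, here is a minimal sketch using the crudest possible "board position" heuristic, a material count. The piece values are the conventional ones; everything else (the string encoding of a side's material, the example position) is invented for illustration:

```python
# "Good board position" as a cheap proxy for the fully described goal
# (checkmate), analogous to status as a cheap proxy for genetic fitness.
# Conventional piece values; the string encoding is a hypothetical
# convenience for this sketch.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}

def material_score(white_pieces: str, black_pieces: str) -> int:
    """Cheap heuristic: material balance in pawn units.

    Says nothing directly about checkmate; it merely correlates with
    winning on average, the way status correlates with fitness.
    """
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# White is up "the exchange" (rook for knight): +2 by the heuristic,
# even though the true goal, forcing mate, may still be far off.
print(material_score("rrq" + "p" * 6, "rnq" + "p" * 6))  # 2
```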

Comment author: xamdam 03 November 2010 06:38:23PM 1 point

Status, say, is the analogue of board position: it only leads to higher genetic fitness on average in some broad way, but it's a cheap and effective heuristic for doing so, just like good board position is a cheap and effective (but not guaranteed!) heuristic for winning at chess

Yep, basically what I was getting at.
