
Comment author: stcredzero 13 June 2012 08:27:31PM *  0 points [-]

Intelligent Design?

The thought that machines could one day have superhuman abilities should make us nervous. Once the machines are smarter and more capable than we are, we won’t be able to negotiate with them any more than chimpanzees can negotiate with us.

Not the best-formulated example. From what I've read in accounts of chimpanzee owners and minders, chimpanzees do negotiate with people. From what I've read and heard about dog owners, it seems to me that dogs also negotiate with their owners.

I suspect that the ability to negotiate requires something less than average human intelligence, plus some overlap of interests.

What if the machines don’t want the same things we do?

If we have completely non-overlapping interests, then there is no hope. I find that highly unlikely at first, though more likely after AGI. (Remember, however, that the interests of "human beings" are certain to change rapidly as well.)

I think it would be inconceivable for a medieval peasant to meet someone completely uninterested in a year's supply of wheat. Most of us reading this wouldn't ever ask for that and wouldn't know what to do with it, without some Google searching. But we'd still have interests in common as human beings and fellow vertebrates. We even have interests in common with our dogs.

I think we'd have at least a common interest in not being in the vicinity of a supernova, for example. (At least at first.)

Of course we can't have interests in common with an ant. I don't think an ant is even aware of its own interests in the way humans or even dogs are. I wonder if the magical-seeming abilities people sometimes ascribe to future AGI are not really different degrees of intelligence, but more along the lines of a "different order of awareness." What does that mean? Is the existence of such a thing even falsifiable?

Superhuman intelligence is not magic. It will only seem that way to other insufficiently advanced intelligences.

Likewise, self-improving machines could perform scientific experiments and build new technologies much faster and more intelligently than humans can. Curing cancer, finding clean energy, and extending life expectancies would be child’s play for them.

I find this to be somewhat along the lines of magical thinking. Cancer is not one disease, and it is in fact just one general aspect of extending life expectancies. I don't think something on that level is ever going to be "child's play." By the time an individual has many times the research bandwidth of all the world's PhDs, "child's play" might well have become a meaningless and archaic metaphor.

Comment author: MatthewB 09 August 2012 05:58:46AM 0 points [-]

Also, don't forget that humans will be improving just as rapidly as the machines.

My own studies (Cognitive Science and Cybernetics at UCLA) tend to support the conclusion that machine intelligence will never be a threat to humanity. Humanity will have become something else by the time that machines could become an existential threat to current humans.

Comment author: Tuxedage 12 June 2012 06:38:45PM 4 points [-]

Similarly. My previous beliefs about Glenn Beck pointed towards a devout Christian fundamentalist. I would not have expected him to support the Singularity, much less take it seriously. It seems I have to update my beliefs quite a bit.

Comment author: MatthewB 09 August 2012 05:52:02AM 0 points [-]

He believes that the Singularity is proof that the Universe was created by an Intelligent Creator (who happens to be the Christian God), and that it is further evidence for Young Earth Creationism (YEC).

Comment author: Jack 13 June 2012 03:09:52PM *  3 points [-]

LWers suck at politics.

LWers are great at politics. It's just that politics suck for LWers.

Edit: Since my meaning wasn't clear: Mind-killing is a feature, not a bug, of politics. Politics is not a truth-seeking activity, and getting caught up in the signaling, the motivated thinking, and the tribalism is not "being bad at politics". It's the opposite.

Comment author: MatthewB 09 August 2012 05:49:22AM 1 point [-]

I think the comment that LWers suck at politics is the more apt description.

Politics is the art of the possible; it deals with WHAT IS, regardless of whether that is "rational."

And attempting to demand that it conform to rationality standards dictated by this community guarantees that this community will lack political clout.

Especially if it becomes known that the main beneficiaries and promoters of the Singularity have a particularly pathological politics.

Peter Thiel may well be a Libertarian Hero, but his name is instant death even in mainstream GOP circles, and he is seen as a fascist by progressives.

Glenn Beck is seen as a dangerous and irrationally delusional ideologue by mainstream politicians.

That sort of endorsement isn't going to help the cause if it becomes well known.

It will tar the Singularity as an ideological enclave of techno-supremacists.

NO ONE at Less Wrong seems to be aware of the stigma attached to the Singularity after the performance of David Rose at the "Human Being in an Inhuman World" conference at Bard College in 2010. I was there, and got to witness the reactions of academics and political analysts from New York and Washington DC (some very powerful people in policy circles) who sat aghast, mouths hanging open, at what David Rose was saying.

When these people discover that Glenn Beck is promoting the Singularity (and Glenn Beck has some very specific agendas in promoting it, agendas that are very selfish and probably pretty offensive to the ideals of Less Wrong), they will be even more convinced that the Singularity is a techno-cult composed of some very dangerous individuals.

Comment author: CaveJohnson 12 June 2012 08:24:24PM *  9 points [-]

People talking about how low-status Glenn Beck is need to realize that, numerically, far more people take Glenn Beck seriously than take Kurzweil seriously. Just because the Brahmin (Moldbug's terminology) hate him doesn't mean he isn't influential and popular among the class of people who are vulnerable to being misled into reacting badly to the Singularity.

Comment author: MatthewB 09 August 2012 05:38:08AM 1 point [-]

Being influential is not necessarily a good thing.

Especially when Glenn Beck's influence is in delusional conspiracy theories, evangelical Christianity, and Young Earth Creationism.

Comment author: Multiheaded 13 June 2012 05:13:48AM *  9 points [-]

Clearly Waitingforgodel was talking about, y'know, conservative people - people with a conservative general mindset that extends into politics - and not the progress-loving, ultra-capitalist right-wingers who get lumped in as "Conservatives". The distinction looks obvious enough to me.

And I'm not at all convinced that we should prefer the latter's enthusiasm to the former's anger.

Comment author: MatthewB 09 August 2012 05:35:32AM 0 points [-]

Glenn Beck is hardly someone whose enthusiasm you should welcome.

He has a creationist agenda that he has found a way to support with the ideas surrounding the topic of the Singularity.

Comment author: Kawoomba 13 June 2012 05:53:26PM 7 points [-]

Finally, a palpable sign of success! I'm so happy that you guys are finally getting your message across :o)

Comment author: MatthewB 09 August 2012 05:33:57AM 0 points [-]

This is not exactly "success."

There are some populations that will pervert whatever they get their hands on.

Comment author: MatthewB 09 August 2012 05:33:06AM 0 points [-]

Glenn Beck was one of the first TV personalities to interview Ray Kurzweil.

The interview is on YouTube, and it is very informative as to Glenn's objectives and agenda.

Primarily, he wishes to use the ideology behind the Singularity as support for "Intelligent Design." In the interview, he makes an explicit statement to that effect.

Glenn Beck is hardly "rational" as per the definition of "Less Wrong."

Comment author: xamdam 07 November 2010 05:24:25PM 0 points [-]

No, I am not aware of any facts about progress in decision theory

Please take a look here: http://wiki.lesswrong.com/wiki/Decision_theory

As far as the dragon goes, I was just pointing out that some minds are not trainable, period. And even if training works well for some intelligent species like tigers, it's quite likely that it will not be transferable (eating the trainer: not OK; eating a baby: OK).

Comment author: MatthewB 08 November 2010 05:37:42AM 0 points [-]

Yes, I have read many of the various Less Wrong Wiki entries on the problems surrounding Friendly AI.

Unfortunately, I am in the process of getting an education in Computational Modeling and Neuroscience. (I was supposed to have started at UC Berkeley this fall, but budget cuts in the Community Colleges of CA resulted in the loss of two classes necessary for transfer, so I will have to wait till next fall to start. I am now thinking of going to UCSD instead, where they have the Institute of Computational Neuroscience, or something like that - it's where Terry Sejnowski teaches - among other things that make it also an excellent choice for what I wish to study.) This sort of precludes being able to focus much on the issues that tend to come up often among many people on Less Wrong (particularly those from the SIAI, who I feel are myopically focused upon FAI to the detriment of other things).

While I would eventually like to see if it is even possible to build some of the Komodo-Dragon-like superintelligences, I will probably wait until such a time as our native intelligence is a good deal greater than it is now.

This touches upon an issue that I first learned from Ben. The SIAI seems to be putting forth the opinion that AI is going to spring fully formed from someplace, in the same fashion that Athena sprang fully formed (and clothed) from the Head of Zeus.

I just don't see that happening. I don't see any Constructed Intelligence as being something that will spontaneously emerge outside of any possible human control.

I am much more in line with people like Henry Markram, Dharmendra Modha, and Jeff Hawkins, who believe that the types of minds we will tend to work towards (models of the mammalian brain) will trend toward Constructed Intelligences (CI, as opposed to AI) that naturally prefer our company, even if we are a bit "dull-witted" in comparison.

I don't so much buy the "Ant/Amoeba to Human" comparison, simply because mammals (almost all of them) tend to have some qualities that ants and amoebas don't... They tend to be cute and fuzzy, and to like other cute/fuzzy things. A CI modeled after a mammalian intelligence will probably share that trait. It isn't necessarily so, but it does seem more likely than not.

And, considering it will be my job to design computational systems that model cognitive architectures, I would prefer to work toward that end until such time as it is shown that ANY work toward that end is too dangerous to do.

Comment author: NancyLebovitz 02 November 2010 09:25:54AM 1 point [-]

I misread you as saying that important ethical problems about FAI were being ignored, but yes, the idea that FAI is the most important thing in the world leaves quite a bit out, and not just great evils. There's a lot of maintenance to be done along the way to FAI.

Madoff's fraud was initiated by a single human being, or possibly Madoff and his wife. It was comprehensible without adding a lot of what used to be specialist knowledge. It's a much more manageable sort of crime than major institutions becoming destructively corrupt.

Comment author: MatthewB 07 November 2010 04:51:04PM 0 points [-]

I think major infrastructure rebuilding is probably closer to the mark than "maintenance".

Comment author: xamdam 03 November 2010 11:41:35PM 1 point [-]

It is going to be next to impossible to solve the problem of "Friendly AI" without first creating AI systems that have social cognitive capacities. Just sitting around "Thinking" about it isn't likely to be very helpful in resolving the problem.

I am guessing that this unpacks to "to create an FAI you need some method of creating AGI. For the latter we need to create AI systems with social cognitive capabilities (whatever that means - NLP?)". Doing this gets us closer to FAI every day, while "thinking about it" doesn't seem to.

First, are you factually aware that some progress has been made in decision theory that would give some guarantees about future AI behavior?

Second, yes, perhaps whatever you're tinkering with is getting us closer to an AGI, which is what FAI runs on. It is also getting us closer to an AGI which is not FAI, if the "Thinking" is not done first.

Third, if the big cat analogy did not work for you, try training a komodo dragon.

Comment author: MatthewB 07 November 2010 04:48:34PM -1 points [-]

Yes, that is close to what I am proposing.

No, I am not aware of any facts about progress in decision theory that would give any guarantees of the future behavior of AI. I still think that we need to be far more concerned with people's behaviors in the future than with AI. People are improving systems as well.

As far as the Komodo Dragon, you missed the point of my post, and the Komodo dragon just kinda puts the period on that:

"Gorging upon the stew of..."
