Comment author: MugaSofer 25 April 2014 09:35:39PM 2 points

I'm on a mobile device right now - I'll go over your arguments, links, and videos in more detail later, so here are my immediate responses, nothing more.

In fact, there is a blind spot in most people's realities that's filled by their evolutionarily-determined blindness to sociopaths.

Wait, why would evolution make us vulnerable to sociopaths? Wouldn't patching such a weakness be an evolutionary advantage?

This makes them easy prey for sociopaths, especially intelligent, extreme sociopaths (total sociopathy, lack of mirror neurons...

Wouldn't a total lack of mirror neurons make people much harder to predict, crippling social skills?

I'm not suggesting that you or anyone else in this conversation is "bad" or "ignorant," but just that you might not be referencing an accurate picture of political thought, political reality, political networks.

"Ignorant" is not, and should not be, a synonym for "bad". If you have valuable information for me, I'll own up to it.

The world still doesn't have much of a problem with the "initiation of force" or "aggression."

Those strike me as near-meaningless terms, with connotations chosen specifically so people will have a problem with them despite their vagueness.

That he chose to follow "the path of compliance" "the path of obedience" and "the path of nonresistance" (all those prior paths are different ways of saying the same thing, with different emphasis on personal onus, and on the extent to which fear plays a defensible part in his decision-making).

Did you accidentally drop a word there? I don't follow your point.

The reason I still judge the Nazis ... they chose a mindless interpretation of "the will to power." The rest of the world viewed Hitler as a raving madman. There were plenty of criticisms of Nazism in existence at the time of Hitler's rise to power.

And clearly, they all deliberately chose the suboptimal choice, in full knowledge of their mistake.

Your statistical likelihood of being murdered by your own government, during peacetime, worldwide.

You're joking, right?

Statistical likelihood of being murdered by your own government, during peacetime, worldwide.

i.e. not my statistical likelihood, i.e. nice try, but no one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph).

Comment author: More_Right 26 April 2014 09:00:13AM -2 points

i.e. not my statistical likelihood, i.e. nice try, but no one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph).

I suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood.

If sociopath-driven collectivism were easy for social systems to detect and neutralize, we probably wouldn't give so much of our wealth to it. Yet social systems repeatedly and cyclically fail for this reason, just as the USA is now, once again, proceeding down this well-worn path (to the greatest extent allowed by the nation's many "law students" who become "licensed lawyers" -- what if all those law students had become STEM majors and built better machines and technologies?). I dare say that the simple desire for an easier paycheck might be the cause of sociopathy on a grand scale. I have my own theories about this, but for a moment, never mind why.

If societies typically fall to over-parasitism (too many looters, too few producers), we should ask ourselves what part we're playing in that fall. If societies don't fall entirely to over-parasitism, then what forces ameliorate parasitism?

And how would you know how likely you are to be killed by a system in transition? You may be right: maybe the graph doesn't take into account future changes that make societies less violent and more democratic; it just averages past results over time.
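To make the objection concrete, here's a minimal sketch (mine, not Rummel's actual methodology; the death toll, population, and lifespan figures are rough assumptions) of the kind of naive averaging such a graph encodes:

```python
# Naive "average over the past" democide risk -- illustrative numbers only.
DEMOCIDE_DEATHS_20TH_C = 262e6   # Rummel's oft-cited estimate (assumption)
AVG_WORLD_POPULATION   = 3.0e9   # rough 20th-century average (assumption)
YEARS                  = 100

annual_risk = DEMOCIDE_DEATHS_20TH_C / (AVG_WORLD_POPULATION * YEARS)
lifetime_risk = 1 - (1 - annual_risk) ** 70   # assumed 70-year lifespan

print(f"averaged annual risk: {annual_risk:.3%}")   # ~0.087%
print(f"naive lifetime risk:  {lifetime_risk:.1%}") # ~5.9%
```

On those assumptions, the averaged annual figure looks tiny but compounds to several percent over a lifetime; whether that average tells you anything about your own regime is exactly the open question.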

But I think R. J. Rummel's graph makes a good point: we should look at the potential harm caused by near-existential (extreme) threats, and ask ourselves if we're not on the same course. Have we truly eliminated the variables of over-legislation, destruction or elimination of legal protections, and consolidation of political power? ...Because those things have killed a lot of people in the past, and where those things have been prevented, a lot of wealth and relative peace has been generated.

But sure, the graph doesn't mean anything if technology makes us smart enough to break free from past cycles. In that case, the warning didn't need to be sounded as loudly as Rummel has sounded it.

...And I don't care if the graph looks "skeevy." That's an ad hominem attack that ignores the substance of the warning. I encourage you to familiarize yourself with his entire site. It contains a lot of valuable information. The more you rebel against the look and feel of the site, the more I encourage you to investigate it, and to consider that you might be rebelling against the inconsequential and ignoring the substance.

Truth can come from a poorly-dressed source, and lies can (and often do) come in slick packages.

Comment author: Stuart_Armstrong 25 April 2014 09:48:55AM 1 point

At the FHI, we disagree about whether an ecology of AIs would make good AIs behave badly, or bad ones behave well. The disagreement matches our political opinions on free markets and competition, so it is probably not informative.

Comment author: More_Right 26 April 2014 08:44:28AM 0 points

An interesting question to ask is "how many people who favor markets understand the best arguments against them, and vice versa?" Because we're dealing with humans here, my suspicion is that if there's a lot of disagreement, it stems largely from unwillingness to consider the other side, and from unfamiliarity with it. So, in that regard, you might be right.

Then again, we're supposed to be rational, and willing to change our minds if evidence supports that change, and perhaps some of us are actually capable of such a thing.

It's a debate worth having. Also, one need not have competition to have power decentralization. Making violence impossible adds a disincentive that makes "cooperation" more likely than "antagonistic competition." (I.e., some sociopaths choose to cooperate with other strong sociopaths because they can see that competing with them would likely cause their own deaths or impoverishment. However, if you gave any one of those sociopaths clear knowledge that they held absolute power, the result would be horrible domination.)
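That deterrence logic can be put in toy payoff terms (my illustration, with assumed numbers; not drawn from any source in this thread):

```python
# Toy deterrence model: attack only if expected gain beats expected loss.
def expected_attack_payoff(gain, loss, p_win):
    """EV of attacking: win the spoils, or eat the retaliation."""
    return p_win * gain - (1 - p_win) * loss

GAIN, LOSS = 100.0, 1000.0   # assumed spoils vs. cost of losing a fight

for p_win, label in [(0.5, "equal power"), (1.0, "absolute power")]:
    ev = expected_attack_payoff(GAIN, LOSS, p_win)
    print(f"{label:>14}: EV(attack) = {ev:+8.1f} -> "
          f"{'attack' if ev > 0 else 'cooperate'}")
```

With roughly equal power, retaliation makes aggression negative-sum, so even a pure egoist "cooperates"; hand the same agent certain victory and the same arithmetic flips to domination.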

Evolution winds up decentralizing power among relative equals, and the resulting "relative peace" (for varying reasons) then allows for some of the reasons to be "good reasons." (I.e., benevolent empaths working together for a better world.) This isn't to say that everything is rosy under decentralization. Decentralization may work more poorly than an all-powerful benevolent monarch.

It's just that benevolent monarchs aren't that likely given who wants to be a monarch, and who tries hardest to win any "monarch" positions that open up.

Such a thing might not be impossible, but if you make a mistake pursuing that course, the result tends to be catastrophic. Decentralization might be "almost as horrible and bloody," but it at least offers the chance of continued survival, and survival allows those who survive to "optimize or improve in the future."

"There may be no such thing as a utopia, but if there isn't, then retaining the chance for a utopia is better than definitively ruling one out." More superintelligences that are partly benevolent may be better than one superintelligence that has the possibility of being benevolent or malevolent.

Comment author: More_Right 26 April 2014 08:05:47AM 2 points

"how generalization from fictional evidence is bad"

I don't think this is a universal rule. I think this is very often true because humans tend to generalize so poorly, tend to have harmful biases based on evolution, and tend to write and read bad (overly emotional, irrational, poorly-mapped-to-reality) fiction.

Concepts can come from anywhere. However, most fiction maps poorly to reality. If you're writing nonfiction -- at least if you're trying to map to reality itself -- you're likely to succeed in getting at least a few data points from reality correct. Then again, if you're writing nonfiction, you might be highly adept at "lying with facts" (getting all the most granular "details" of a hierarchical structure correct, while getting the entire hierarchical structure wrong at greater levels of abstraction).

As one example of a piece of fiction that maps very closely to reality, and to certain known circumstances, I cite "Unintended Consequences" by John Ross. It's a novel about gun rights that is chock-full of factual information, because the man who wrote it is something of a renaissance man, and an engineer, who comprehends material reality. As an example of a piece of fiction that maps poorly to reality in some of its details, I cite "Atlas Shrugged" by Ayn Rand (the details may be entertaining, and may often illustrate a principle really well, but they often could not happen -- such as a small band of anti-government people being sheltered from theft by a "ray screen"). The "ray screen" plot device was written before modern technology (such as GPS, political "radar" and escalation, etc.) ruled it out as a plot device.

John Ross knows a lot more about organizational strategy, firearms, and physics than Rand did. Also, he wrote his novel at a later date, when certain trends in technological history had already come into existence, and others had been ruled out. Ross is also a highly logical guy. (Objectivist John Hospers, clearly an Ayn Rand admirer, compares the two novels here.)

You can attack some of the ideas in Unintended Consequences for not mapping closely to reality, or for being isolated instances of things that are possible but highly unlikely. But you can attack far fewer such instances in his novel than in Rand's.

Now, take the "Rich Dad, Poor Dad" books. Such books are "nonfiction," but they are low in hierarchical information and provide a lot of obvious, redundant information.

So "beware using non fiction as evidence, not only because it's deliberately wr ong in particular ways to make it more interesting" but more importantly "because it does not provide a probabilistic model of what happened" (especially if the author is an idiot whose philosophy doesn't map closely to reality) "and gives at best a bit or two of evidence that looks like a hundred or more bits of evidence."

I think nonfiction written by humans is far more damaging than fiction is. In fact, human language (according to Ray Kurzweil, in "The Singularity is Near" and "The Age of Spiritual Machines," and those, such as Hans Moravec, who agree with him) is "slow, serial, and imprecise" in the extreme. Perhaps humans should just stop trying to explain things to each other, unless they can use a chart or a graph, and get a verbal confirmation that the essential portions of the material have been learned. (Of course, it's better to have 10% understanding, than 0%, so human language does serve that purpose. Moreover, when engineers talk, they have devised tricks to get more out of human language by relying on human language to "connect data sets." --All of this simply says that human language is grossly sub-optimal compared to better forms of theoretically possible communication, not that human language shouldn't be used for what it's worth.)

In this way, STEM teachers slowly advance the cause of humanity, by teaching those who are smart enough to be engineers, in spite of the immense volumes of redundant, mostly-chatter pontification from low-level thinkers.

Most nonfiction = fiction, due to most humans' low comprehension of reality. All the same caveats apply to concepts from fiction and nonfiction alike.

In fact, if one wishes to illustrate a concept, and one claims that concept is nonfiction, then that concept can be successfully challenged based on inessentials. Fiction often clarifies a philosophical subject. Rand's "Atlas Shrugged," for example, argues that right is independent of might, that nothing rules out the idea that those who are right might recognize a right to use force, carefully considered as retaliatory only, and that simply because the government presently has more might than individuals, the majority vote doesn't lend morality to the looting of those individuals. The same concepts could be challenged as "not actually existing as indicated" if they appeared in a book that claimed to be nonfiction.

But, as concepts, they're useful to consider. Fiction is the fastest way to think through likely implications.

The criticisms here of basing one's generalizations on fictional evidence are valid. Unfortunately, they are (1) less valid when applied to careful philosophical thinkers (though such thinkers are very rare), and (2) equally applicable to most nonfiction, because humans understand very little of importance, unless it's an expert talking about a very narrow area of specialization (and hence, not really "generalization").

Very little of reality is represented, even in nonfiction, in clean gradations or visual models that directly correspond to it. Very little is represented as mathematical abstraction. There's a famous old line, repeated in Calvin Clawson's "Mathematical Mysteries" and Petr Beckmann's "A History of Pi," that claims "for every equation in a book, sales of the book are cut in half." This is more of a commentary on the readership than the authorship: a tiny minority of people in the general domain of "true human progress" are doing the "heavy lifting."

...The rest of humanity can't wait to tell you about an exciting new political movement they've just discovered (insert contemporary variant of mindless power-worshipping state collectivism).

Just my .02.

Comment author: More_Right 24 April 2014 08:17:31PM -2 points

Some down-voted individual with "fewer rights than the star-bellied sneetches" wrote this:

higher intelligence doesn't lead necessarily to convergent moral goals

It might. However, this is also a reason for an evolutionarily-informed AGI-building process that starts off by including mirror neurons based on the most empathic and most intelligent people. Not so empathic and stupid that they embrace mass-murdering communism in an attempt to be compassionate, but empathic to the level of a smart libertarian who personally gives a lot to charity, etc., with repeated good outcomes limited only by capacity.

Eschewing mirror neurons and human brain construction entirely seems to be a mistake. Adding super-neocortices that recognize far more than linear patterns, once you have a benevolent "approximately human-level" intelligence, appears to be a good approach.

Comment author: More_Right 24 April 2014 08:12:22PM 1 point

I strongly agree that universal, singular, true malevolent AGI doesn't make for much of a Hollywood movie, primarily due to points 6 and 7.

What is far more interesting is an ecology of superintelligences that have conflicting goals, but which have agreed to be governed by enlightenment values. Of course, some may be smart enough (or stupid enough) to try subterfuge, and some may be smarter than the others by enough of a margin to perform a subterfuge and get away with it. There can be a relative timeline in which nearby ultra-intelligent machines compete with each other, or decentralize power, and they can share goals that are destructive to some humans and benevolent to others. (For their own purposes, and for the purpose of helping humans as a side project.)

Also, some AGIs might differentiate between "humans worth keeping around" and "humans not worth keeping around." They may also put their "parents" (creators) in a different category than other humans, and they may also slowly add to that category, or subtract from it, or otherwise alter it.

It's hard to say. I'm not ultra-intelligent.

Comment author: More_Right 24 April 2014 08:04:02PM 0 points

I don't know; in terms of dystopia, I think that an AGI might decide to "phase us out" prior to the singularity, if it were really malevolent. Make a bunch of attractive but sterile female robots, and a bunch of attractive but sterile male robots. Keep people busy with sex until they die of old age. A "gentle good night" abolition of humanity that isn't much worse (or is way better) than what it had experienced for the previous 50 million years.

Releasing sterile, attractive mates into a population is a good "low ecological impact" way of decreasing that population. Although, why would a superintelligence be opposed to all humans? I find this somewhat unlikely, given a self-improving design.
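For what it's worth, the sterile-mates claim is just the logic of the sterile insect technique applied to humans. A minimal sketch (my illustration; the growth rate and sterile ratios are assumptions, not demographic data):

```python
# Crude non-overlapping-generations model: pairings with sterile robots
# produce no offspring, so births scale with the fertile fraction.
def project(pop, sterile_ratio, birth_rate=1.05, gens=10):
    for _ in range(gens):
        pop *= birth_rate * (1.0 - sterile_ratio)
    return pop

for ratio in (0.0, 0.5, 0.9):
    print(f"sterile ratio {ratio:.0%}: "
          f"~{project(1_000_000, ratio):,.0f} left after 10 generations")
```

Even a 50% sterile mate pool collapses the modeled population within a handful of generations, which is why the method is "low ecological impact": no violence required, just patience.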

Comment author: V_V 08 April 2014 03:04:06PM 2 points

The risks from artificial intelligence (AI) in no way resemble the popular image of the Terminator. That fictional mechanical monster is distinguished by many features – strength, armour, implacability, indestructability – but extreme intelligence isn’t one of them. And it is precisely extreme intelligence that would give an AI its power, and hence make it dangerous.

This example is weird, since it seems to me that MIRI's position is exactly ripped off from the premise of the Terminator franchise.
Yes, the individual terminator robot doesn't look very smart (*), but Skynet is. Hell, it even invented time travel! :D

(* does it? How would a super-intelligent terminator try to kill Sarah/John Connor?)

Comment author: More_Right 24 April 2014 07:59:20PM 0 points

Philip K. Dick's "Second Variety" is far more representative of our likelihood of survival against a consistent terminator-level antagonist / AGI. It's still worth reading, as is Harlan Ellison's "Soldier," which The Terminator is based on. The Terminator also wouldn't likely use a firearm to try to kill Sarah Connor, as xkcd notes :) ...but it also wouldn't use a drone.

It would do what Richard Kuklinski did: make friends with her and get close enough to spray her with cyanide solution (odorless, undetectable; she seemingly dies of natural causes). Or it would do something like what the T-1000 did in T2: play a cop, then strike with total certainty. Or use a ricin spike or some other "bio-defense-mimicking" method.

"Nature, you scary!"

Comment author: More_Right 24 April 2014 07:26:42PM -1 points

A lot of people who are unfamiliar with AI dismiss ideas inherent in the strong AGI argument. I think it's always good to include the "G," or to qualify your explanation with something like "the AGI formulation of AI, also known as 'strong AI.'"

The risks of artificial intelligence are strongly tied with the AI’s intelligence.

An AGI's intelligence, rather. AI such as Numenta's Grok can possess unbelievable neocortical intelligence, but without a reptile brain, and a hippocampus and thalamus to shift between goals, it "just follows orders." In fact, what does the term "just following orders" remind you of? I'm not sure that we want a limited-capacity AGI that follows human goal structures. What if those humans are sociopaths?

I think, as does Peter Voss, that AGI is likely to improve human morality, rather than to threaten it.

There are reasons to suspect a true AI could become extremely smart and powerful.

Agreed, and this represents MIRI's position well. MIRI is a little light on "bottom up" paths to AGI that are likely to be benevolent, such as AGIs that are "raised as human children." I think Voss is even more right about these, given sufficient care, respect, and attention.

Most AI motivations and goals become dangerous when the AI becomes powerful.

I disagree here, for the same reasons Voss disagrees. I think "most" overstates the case for most responsible pathways forward. One pathway that does generate a lot of sociopathic (lacking mirror neurons and human connectivity) options is the "algorithmic design" or "provably friendly, top-down design" approach. This is possibly highly ironic.

Does most of MIRI agree with this point? I know Eliezer has written about reasons why this is likely the case, but there appears to be a large "biological school," or "firm takeoff" school, at MIRI as well. ...And I'm not just talking about Voss's adherents, either. Some of Moravec's ideas are similar, as are some of Rodney Brooks'. (And Philip K. Dick's "Second Variety" is a more realistic version of this kind of dystopia than The Terminator.)

It is very challenging to program an AI with safe motivations.

Agreed there. Well-worded. And this should get the journalists thinking at least at the level of Omohundro's introductory speech.

Mere intelligence is not a guarantee of safe interpretation of its goals.

Also good.

A dangerous AI will be motivated to seem safe in any controlled training setting.

I prefer "might be" or "will likely be" or "has several reasons to be" to the words "will be." I don't think LW can predict the future, but I think they can speak very intelligently about predictable risks the future might hold.

Not enough effort is currently being put into designing safe AIs.

I think everyone here agrees with this statement, but there are a few more approaches that I believe are likely to be valid, beyond the "intentionally-built-in-safety" approach. Moreover, these approaches, as noted fearfully by Yudkowsky, have less "overhead" than the "intentionally-built-in-safety" approach. However, I believe this is equally as likely to save us as it is to doom us. I think Voss agrees with this, but I don't know for sure.

I know that evolution had a tendency to weed out sociopaths, who were indeed very frequent. Without that inherent biological expiration date, a big screwup could be an existential risk. I'd like a sentence that sums this last point up, because I think it might get the journalists thinking at a higher level. This is Hans Moravec's primary point when he urges us to become a "sea-faring people" as the "tide of machine intelligence rises."

If the AGI is "nanoteched," it could become militarily superior to all humans, without much effort, within a few days of achieving superintelligence.

Comment author: AshwinV 24 April 2014 10:57:00AM 0 points

"Many who are self-taught far excel the doctors, masters and bachelors of the most renowned universities" Ludwig Von Mises

Comment author: More_Right 24 April 2014 07:04:54PM 0 points

Ayn Rand noticed this too, and was a very big proponent of the idea that colleges indoctrinate as much as they teach. While I believe this is true, and that the indoctrination has a large, mostly negative effect on people who mindlessly accept self-contradicting ideas into their philosophy and moral self-identity, I believe that it's still good to get a college education in STEM. I believe that STEM majors will benefit from the useful things they learn more than they will be hurt or held back by the evil, self-contradictory things they "learn" (are indoctrinated with).

I'm strongly in agreement with libertarian investment researcher Doug Casey's comments on education. I also agree that the average indoctrinated idiot or "pseudo-intellectual" is more likely to have a college degree than not. Unfortunately, these conformity-reinforcing system nodes then drag down entire networks populated by conformists to "lowest-common-denominator" pseudo-philosophical thinking: uncritically accepted and regurgitated memes reproduced by political sophistry.

Of course, I think that people who totally "self-start" have little need for most courses in most universities, but a big need for specific courses in specific narrow subject areas. Khan Academy and other MOOCs are now eliminating even that necessity. Generally, this argument is that "it's a young man's world." This will get truer and truer, until the initial learning curve once again becomes a barrier to achievement, beyond what well-educated "ultraintelligences" know and the experience and wisdom (advanced survival and optimization skills) they have. I believe that even long past the singularity, there will be a need for direct learning from biology, ecosystems, and other incredibly complex phenomena. Ideally, there will be a "core skill set" that all human+ sentiences share at that time, but there will still be specialization for project-oriented work, due to the specifics of complex situations.

For the foreseeable future, the world will likely become a more and more dangerous place, until either the human race is efficiently rubbed out by military AGI (and we all find out what it's like to be on the receiving end of systemic oppression, like a Jew in Hitler's Germany or a Native American at Wounded Knee), or a strongly self-regulating, post-enlightenment marketplace civilization emerges that contains many "enlightened" "ultraintelligent machines" that all decentralize power from one another and their sub-systems.

I'm interested to find out if those machines will have memorized "Human Action" or whether they will simply directly appeal to massive data sets, gleaned directly from nature. (Or, more likely, both.)

One aspect of the problem now is that the government encourages a lot of people who should not go to college to go to college, skewing the numbers against the value of legitimate education. Some people have college degrees that mean nothing; a few people have college degrees that are worth every penny. Also, the licensed practice of medicine is a perverse shadow of the real thing: "jumping through regulatory hoops" that has little or nothing to do with a pure, free-market practice of medicine evolving at computation-driven innovation speeds.

To form a full pattern of the incentives that govern U.S. college education, of the social expectations that cause people to choose various majors, and of the skill levels associated with those majors, is a very complex task. The pattern-recognition skills inherent in average human intelligence probably prevent a very useful emergent pattern from being generated. The pattern would likely cover some small sub-aspect of college education, and even then, human brains wouldn't do a very good job of seeing its dominant aspects and analyzing them intelligently.

I'll leave that to I. J. Good's "ultraintelligent machines." Also, I've always been far more of a fan of Hayek, but I haven't read everything that he and Mises wrote, so I am reserving final hierarchical placement judgment until then.

Bryan Caplan, Norbert Wiener, Kevin Warwick, Kevin Kelly, Peter Voss (in his latest video interview), and Ray Kurzweil have important ideas that enhance the ideas of Hayek, but Hayek and Mises got things mostly right.

Great to see the quote here. Certainly, the coercively-funded institutions whose bars of acceptance are very low are dominant now, and their days are numbered by the rise of cheaper, better alternatives. However, if the bar for what constitutes "renowned universities" is raised, Mises' statement becomes less true -- but only for STEM courses, in which doctors and other licensed professionals often do not participate. Learning how to game a licensing system doesn't mean you have the best skills the market will support, and it means you're of low enough intelligence to be willing to participate in the suppression of your competition.

Comment author: More_Right 24 April 2014 10:15:12AM 0 points

I think it is rationally optimal for me not to give any money away, since I need all of it to pursue rationally-considered high-level goals. (Much as Eliezer probably doesn't give away money that could be used to design and build FAI -- because, given the very small number of people now working on the problem, and the small number of people capable of working on it, that would be irrational of him.) There's nothing wrong with believing in what you're doing, and believing that such a thing is optimal. ...Perhaps it is optimal. If it's not, then why do it? If money -- a fungible asset -- won't help you to do it, it's likely "you're doing it wrong."

Socratic questioning helps. So does asking for the opposite of a statement, or for what would invalidate it.

Most people I've met lack rational high-level goals, and have no prioritization schemes that hold up to even cursory questioning; therefore, they could burn their money or give it to the poor and get a better system-wide "high level" outcome than by buying another piece of consumer electronics or whatever else they were going to buy for themselves. Heck, if most people had vastly more money, they'd kill themselves with it -- possibly with high-glycemic-index carbohydrates, or heroin. Before they get to effective altruism, they have to get to rational self-interest, and disavow coercion as a "one-size-fits-all problem solver."

Since that's not going to happen, and since most people are actively involved with worsening the plight of humanity, including many LW members, I'd suggest that a strong dose of the Hippocratic Oath prescription is in order:

First, do no harm.

Sure, the human-level tiny brains are enamored with modern equivalents of medical "blood-letting." But you're an early adopter, and a thinker, so you don't join them. First, do no harm!

Sure, your tiny-brained relatives over for Thanksgiving vote for "tough on crime" politicians. But you patiently explain jury nullification of law to them, pointing out that one year prior to marijuana legalization in Colorado by popular vote, marijuana was de facto legalized there: prosecutors were experiencing too much jury nullification to save face while prosecuting marijuana offenders. Then you show them Sanjay Gupta's heartbreaking video documentary about why marijuana prohibition is morally wrong.

You do what you have to do to change their minds. You present ideas that challenge them, because they are human beings who need something other than a bland ocean of conformity to destruction and injustice. You help them to be better people, taking the place of "strong benevolent Friendly AI" in their lives.

In fact, for simple dualist moral decisions, the people on this board can function as FAI.

The software for the future we want is ours to evolve, and the hardware designers' to build.
