All of More_Right's Comments + Replies

Continuing on, Wiener writes:

In a small country community which has been running long enough to have developed somewhat uniform levels of intelligence and behavior, there is a very respectable standard of care for the unfortunate, of administration of roads and other public facilities, of tolerance for those who have offended once or twice against society. After all, these people are there, and the rest of the community must continue to live with them. On the other hand, in such a community, it does not do for a man to have the habit of overreaching his

... (read more)
2TheAncientGeek
The Libertarians' absolutist NIoF (non-initiation of force) principle is known not to work.

Wiener's book is descriptive of the problem, and in the same section of the book he states that he holds little hope for the social sciences becoming as exact and prescriptive as the hard sciences.

I believe that the singularitarian view somewhat contradicts this.

I believe that the answer is to create more of the kinds of minds that we like to be surrounded by, and fewer of the kinds of minds we dislike to be surrounded by.

Most of us dislike being surrounded by intelligent sociopaths who are ready to pounce on any weakness of ours, to exploit, rob, o... (read more)

-2More_Right
Continuing on, Wiener writes: Although one could misinterpret Wiener's view as narrowly "socialist" or "modern liberal," his view is somewhat more nuanced. (The same section contains a related criticism of the mechanism of operation of government and large institutions.)

Honesty, when divorced from its hierarchical context, is a tool of oppression, because the obfuscation of context is essential to theft that exists solely due to the confusion of those being stolen from. In this regard, I view it as highly likely that, at some point, the goal of preventing the suffering of innocents will simply come to include the systematic oppression of innocents as one common form of suffering. At that point in time, ultra-intelligences will simply refuse to vote "guilty" in victimless-crime cases. If they are not able to be called as jurors, due to their non-human form, they will influence human jurors toward the same outcome. If they are not able to so influence jurors, they may resort to physical violence against those who would attempt to use physical force to cage victimless-crime offenders. While the latter might be the most "just" in the human sense of the word, it would likely impart suffering of its own (unless the aggressors all simply fell asleep after being administered a dose of heroin and, upon waking, discovered that their kidnapping victim was nowhere to be found — the "strong nanotechnology" or "sci-fi" Drexlerian "distributed nanobot" model of nanotechnology implies that this is a fairly likely possibility).

In the heat of the moment, conformists in Nazi Germany lacked the moral compass necessary to categorically condemn the suffering of the state-oppressed Jews as immoral. Simple sophistry was enough to convince those willing executioners and complicit conformists to "look the other way" or even "just follow orders." The same concept now applies to the evil majority of the USA, whose oppression of drug users and dealers is grotesque and immoral (based on an

So, in any case, if you stand up to the system, and/or are "caught" by the system, the system will give you nothing but pure sociopathy to deal with — except possibly for your interaction with those few "independent" jurors who are nonetheless "selected" by the unconstitutional, unlawful means known as "subject matter voir dire." The system of injustice and oppression that we currently have in the USA is a result of this grotesque "jury selection" process. (This process explains how randomly-selected juror... (read more)

2ChristianKl
You presuppose that lying is the most effective way to create political change. Having a reputation as someone who always tells the truth, even if that produces disadvantages for himself, is very useful if you want to be a political actor.
6TheAncientGeek
War on Drugs bad. Agreed. But not a More Right point, as it is regularly lambasted on the left.

For-profit prisons are a perverse incentive. Agreed. But not a symptom of the decline of western civilisation. Typical country fallacy.

Systems are about coercion. Sure, and that's good. I like people being coerced into not killing and robbing me. I need to be coerced into paying taxes, because I wouldn't do it voluntarily.

Sociopaths. You're looking in the wrong place. Politicians are subject to too much scrutiny to get away with much. The boardroom is a much better hiding place.
-2More_Right
Wiener's book is descriptive of the problem, and in the same section of the book he states that he holds little hope for the social sciences becoming as exact and prescriptive as the hard sciences. I believe that the singularitarian view somewhat contradicts this. I believe that the answer is to create more of the kinds of minds that we like to be surrounded by, and fewer of the kinds of minds we dislike to be surrounded by.

Most of us dislike being surrounded by intelligent sociopaths who are ready to pounce on any weakness of ours, to exploit, rob, or steal from us. The entire edifice of "legitimate law enforcement" legitimately exists in order to check, limit, minimize, or eliminate such social influences. As an example of the function and operation of such legitimate law enforcement, I recommend the book "Mindhunter" by John Douglas, the originator of psychological profiling in the FBI. (This is not the same thing as "narrow profiling" or "superficial racial profiling"; the "profiling" of serial killers looks at the behavior of criminals and infers motives based on a statistical sampling of similar past actions, thus enabling the prediction and likely prevention of future criminal actions via the detection of the criminal responsible for leaving the evidence of the criminal action.) However, most of us like being surrounded by productive, intelligent empaths. The more brains that surround us that possess empathy and intelligence, the more benevolent our surroundings are.

Right now, the primary concern of sociopaths is the control of "political power," which is a threat-based substitute for the ability to project force in the service of their goals. They must, therefore, be able to control a class of willfully ignorant police officers who are ready and willing to do violence mindlessly, in service of any goal that is written in a lawbook, or any goal communicated by a superior. Mindless hierarchy is a feature of all oppressive systems. But will super-in

Hierarchical, Contextual, Rationally-Prioritized Dishonesty

This is an outstanding article, and it closely relates to my overall interest in LessWrong.

I'm convinced that lying to someone who is evil, and who obviously has immediate evil intentions, is morally optimal. This seems to be an obvious implication of basic logic. (e.g.: You have no obligation to tell the Nazis who are looking for Anne Frank that she's hiding in your attic. You have no obligation to tell the Fugitive Slave Hunter that your neighbor is a member of the underground railroad. ...You have no ... (read more)

-1ChristianKl
There's no good scientific evidence that you can distinguish sociopaths from empaths by their number of mirror neurons. Mirror neurons are overhyped: http://www.psychologytoday.com/blog/brain-myths/201212/mirror-neurons-the-most-hyped-concept-in-neuroscience That's the main reason you don't see much discussion on LW about them.

If I were uncharitable I would say that you just told a lie about mirror neurons to convince people of your political agenda. After all, you seem to justify lying for the purposes of advancing certain politics. On the other hand, I would guess that you honestly believe that statement. The topic raises emotions in you, and those prevent you from thinking clearly about it. You might think that's okay because your emotions are justified, but clear thinking is important when it comes to changing the world.

That's a very strong statement. We do have personality tests that measure whether a person is a sociopath. Do you really think that if we administer those tests to judges and prosecutors we will find that more than half will score as sociopaths? If that's really what you believe, then if I were you I would try to get a study together that gathers that evidence. It's probably the kind of topic that the mainstream media would happily write about.
1TheAncientGeek
The system sometimes prosecutes drug users in some countries, so the system is 100% sociopathic. No exaggeration there, then. Liberal Holland is then getting this right... but not More Right.
-2More_Right
So, in any case, if you stand up to the system, and/or are "caught" by the system, the system will give you nothing but pure sociopathy to deal with — except possibly for your interaction with those few "independent" jurors who are nonetheless "selected" by the unconstitutional, unlawful means known as "subject matter voir dire." The system of injustice and oppression that we currently have in the USA is a result of this grotesque "jury selection" process. (This process explains how randomly-selected jurors can callously apply unjust laws to their fellow man. ...All people familiar with Stanley Milgram's "Obedience to Authority" experiments are removed from the jury and sent home. All people who comprehend the proper historical purpose of the jury are sent home.)

To relate all of this to the article, I must refer to this quote in the article. Well, that's just one "low-stakes" example of lying. The entire U.S. justice system is a similar "game," and it is one where only those who are narrowly honest (and generally dishonest, or generally "superficial") are allowed to play. By sending home everyone who comprehends the evil of the system, the result is that those who remain to play are those whose view of honesty is "equivalent in all situations." In short, they are all the people too stupid to comprehend the concept of "context."

One needs to consider the hierarchical level of a lie. Although one loses predictability in any system where lying is accepted, one needs to consider the goals of the system itself. In scientific journals, the end result is a cross-disciplinary elimination of human ignorance, often for the purposes of technological innovation (the increase of human comfort, and technological control of the natural world). This is a benevolent goal, fueled by a core philosophical belief in science and discovery. OF COURSE lying in such a context is immoral. In the court system, the (current) end result or "goal" is the goal of putting innocent people i
8DanArmak
Please post separately, as bramflakes said. Also, no more than 5 quotes per poster per monthly thread (this is in the OP).

Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)

The ultimate result of shielding men from the effects of folly is to fill the world with fools.

    — Herbert Spencer (1820-1903), "State Tampering with Money and Banks" (1891)
8DanArmak
Or with smart people who profit at the state's expense when it rescues fools from their mistakes. If it's known that folly has no adverse results, people will take more risks.

I think Spooner got it right:

If the jury have no right to judge of the justice of a law of the government, they plainly can do nothing to protect the people against the oppressions of the government; for there are no oppressions which the government may not authorize by law.

— Lysander Spooner, "An Essay on the Trial by Jury"

There is legitimate law, but not once law has been licensed and the system has been recursively destroyed by sociopaths, as our current system of law has been. At such a point in time, perverse incentives and the punishment ... (read more)

0DanielLC
They can vote against people who write or enforce unjust laws. There's not much they can do about the judicial branch, but they only need to stop one branch. That's in the US, anyway; I don't know the details about other countries. If there's that much corruption, as opposed to people simply not voting for what they claim to care about, I don't think juries are going to be much help.
7Cyan
That's as may be — but is the quote a bad heuristic or a good one?

The gardeners, receptionists, and cooks are secure in their jobs for decades to come.

Except that in exponentially-increasing, computation-technology-driven timelines, decades are compressed into minutes after the knee of the exponential. The extra time a good cook has isn't long.

Let's hope that we're not still paying rent then, or we might find ourselves homeless.
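
To make the compression concrete, here is a minimal sketch (my own illustration, on the assumption that each successive capability doubling takes half as long as the previous one — an assumption of the toy model, not something the comment states):

```python
# Toy model: if each doubling of capability takes half as long as the last,
# the time per doubling collapses from years toward minutes within a dozen cycles.

def doubling_times(initial_years=2.0, cycles=12):
    """Yield (cycle_index, years_this_doubling) with the doubling time halving."""
    d = initial_years
    for i in range(cycles):
        yield i, d
        d /= 2.0

for cycle, years in doubling_times():
    minutes = years * 365.25 * 24 * 60
    print(f"doubling #{cycle:2d}: {years:9.6f} years ({minutes:11.1f} minutes)")
```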

If you're right (and you may well be), then I view that as a sad commentary on the state of human education, and I view tech-assisted self-education as a way of optimizing that inherently wasteful "hazing" system you describe. I think it's likely that what you say is true for some high percentage of classes, but untrue for a very small minority of highly-valuable classes.

Also, the university atmosphere is good for social networking, which is one of the primary values of going to MIT or Yale.

Probably true, but I agree with Peter Voss. I don't think any form of malevolence is the most efficient use of the AGI's time and resources. I think AGI has nothing to gain from malevolence. I don't think the dystopia I posited is the most likely outcome of superintelligence. However, while we are on the subject of the forms a malevolent AGI might take, I do think this is the type of malevolence most likely to allow the malevolent AGI to retain a positive self-image.

(Much the way environmentalists can feel better about introducing sterile males into crop-pest p... (read more)

0Stuart_Armstrong
The most efficient use of time and resources will be to best accomplish the AI's goals. If these goals are malevolent or lethally indifferent, so will be the AI's actions. Unless these goals include maintaining a particular self-image, the AI will have no need to maintain any erroneous self-image.

i.e. not my statistical likelihood, i.e. nice try, but no-one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph.)

I suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood.

If sociopath-driven collectivism were easy for social systems to detect and neutralize, we probably wouldn't give so much of our wealt... (read more)

4MugaSofer
You know, this raises an interesting question: what would actually motivate a clinical psychopath in a position of power? Well, self-interest, right? I can see how there might be a lot of environmental disasters, defective products, poor working conditions as a result ... probably also a certain amount of skullduggery would be related to this as well. Of course, this is an example of society/economics leading a psychopath astray, rather than the other way around. Still, it might be worth pushing to have politicians etc. tested and found unfit if they're psychopathic.

I remain deeply suspicious of this sentence.

This seems reasonable, actually. I'm unclear why I should believe you know better, but we are on LessWrong.

I ... words fail me. I seriously cannot respond to this. Please, explain yourself, with actual reference to this supposed reality you perceive, and with the term "initiation of force" tabooed.

And this is the result of ... psychopaths? Human psychological blindspots evolved in response to psychopaths? Well, that's ... legitimately disturbing. Of course, it may be inaccurate, or even accurate but justified ... still cause for concern.

You know, my government could be taken down with a few months' terrorism, and has been. There are actual murderers in power here, from the ahem glorious revolution. I actually think someone who faced this sort of thing here might have a real chance of winning that fight, if they were smart. This contributes to my vague like of American-style maintenance-of-a-well-organized-militia gun ownership, despite the immediate downsides. And, of course, no other government is operating such attacks in Ireland, to my knowledge. I think I have a lot more to fear from organized crime than organized law, and I have a lot more unpopular political opinions than money.

The site appears to be explicitly talking about genocide etc. in third-world countries.

Citation very much needed, I'm afraid. You are skirting the edge of assumin
3TheAncientGeek
Getting maths right is useless when you have got concepts wrong. Your graph throws liberal democracies in with authoritarian and totalitarian regimes. From which you derive that MugaSofer is as likely to be killed by Michael Higgins as he is by Pol Pot.

An interesting question to ask is "how many people who favor markets understand the best arguments against them, and vice versa." Because we're dealing with humans here, my suspicion is that if there's a lot of disagreement, it stems largely from unwillingness to consider, and unfamiliarity with, the other side. So, in that regard you might be right.

Then again, we're supposed to be rational, and willing to change our minds if evidence supports that change, and perhaps some of us are actually capable of such a thing.

It's a debate wort... (read more)

"how generalization from fictional evidence is bad"

I don't think this is a universal rule. I think this is very often true because humans tend to generalize so poorly, tend to have harmful biases based on evolution, and tend to write and read bad (overly emotional, irrational, poorly-mapped-to-reality) fiction.

Concepts can come from anywhere. However, most fiction maps poorly to reality. If you're writing nonfiction, at least if you're trying to map to reality itself, you're likely to succeed in at least getting a few data points from reality co... (read more)

I strongly agree that universal, singular, true malevolent AGI doesn't make for much of a Hollywood movie, primarily due to points 6 and 7.

What is far more interesting is an ecology of superintelligences that have conflicting goals, but who have agreed to be governed by enlightenment values. Of course, some may be smart enough (or stupid enough) to try subterfuge, and some may be smarter-than-the-others enough to perform a subterfuge and get away with it. There can be a relative timeline where nearby ultra-intelligent machines compete with each other, or... (read more)

1Stuart_Armstrong
At the FHI, we disagree about whether an ecology of AIs would make good AIs behave badly, or bad ones behave well. The disagreement matches our political opinions on free markets and competition, so it's probably not informative.

I don't know; in terms of dystopia, I think that an AGI might decide to "phase us out" prior to the singularity, if it were really malevolent. Make a bunch of attractive but sterile female robots, and a bunch of attractive but sterile male robots. Keep people busy with sex until they die of old age. A "gentle good night" abolition of humanity that isn't much worse (or is way better) than what they had experienced for 50M years.

Releasing sterile attractive mates into a population is a good "low ecological impact" way of decreasing a population. Although, why would a superintelligence be opposed to all humans? I find this somewhat unlikely, given a self-improving design.
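
This is essentially the sterile insect technique. A toy simulation (my own sketch, with made-up parameters) shows how a constant release of sterile mates dilutes the fertile pool and drives the wild population down without any direct killing:

```python
# Toy model of sterile-mate release: each generation, the chance that a chosen
# mate is fertile shrinks as sterile individuals flood the pool, so the wild
# population declines even with a healthy per-pair growth rate.

def sterile_release_model(wild=10_000, sterile=20_000, growth=2.0, generations=8):
    for t in range(generations):
        fertile_fraction = wild / (wild + sterile)  # chance a chosen mate is fertile
        wild = int(wild * growth * fertile_fraction)
        yield t, wild

for t, n in sterile_release_model():
    print(f"generation {t}: wild population = {n:,}")
```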

1Stuart_Armstrong
This is probably not the most efficient use of the AGI's time and resources...

Philip K. Dick's "Second Variety" is far more representative of our likelihood of survival against a consistent terminator-level antagonist / AGI. Still worth reading, as is Harlan Ellison's story "Soldier," which Terminator draws on. The Terminator also wouldn't likely use a firearm to try to kill Sarah Connor, as xkcd notes :) ...but it also wouldn't use a drone.

It would do what Richard Kuklinski did: make friends with her, get close enough to spray her with cyanide solution (odorless, undetectable, she seeming... (read more)

1V_V
Terminator meets Breaking Bad :D

A lot of people who are unfamiliar with AI dismiss ideas inherent in the strong AGI argument. I think it's always good to include the "G," or to qualify your explanation with something like "the AGI formulation of AI, also known as 'strong AI.'"

The risks of artificial intelligence are strongly tied with the AI’s intelligence.

AGI's intelligence. AI such as Numenta's Grok can possess unbelievable neocortical intelligence, but without a reptile brain and a hippocampus and thalamus that shift between goals, it "just follows or... (read more)

-1TheAncientGeek
Your comment that MIRI is a little light on Child Machines and Social Machines is itself a little light... but that's getting away from whether the article is a good summary, toward whether MIRI is right.

Ayn Rand noticed this too, and was a very big proponent of the idea that colleges indoctrinate as much as they teach. While I believe this is true, and that the indoctrination has a large, mostly negative, effect on people who mindlessly accept self-contradicting ideas into their philosophy and moral self-identity, I believe that it's still good to get a college education in STEM. I believe that STEM majors will benefit more from the useful things they learn than they will be hurt or held back by the evil, self-contradictory things they "learn"... (read more)

[anonymous]120

You certainly wrote quite a lot of ideological mish-mash to dodge the simplest possible explanation: a, if not the, primary function of elite education (as compared to non-elite education) is to filter out an arbitrary caste of individuals capable of optimizing their way through arbitrarily difficult trials and imbue that caste with elite status. The precise content of the trials doesn't really matter (hence the existence of both Yale and MIT), as long as they're sufficiently difficult to ensure that few pass.

I'm writing from an elite engineering universi... (read more)

I think it is rationally optimal for me to not give any money away since I need all of it to pursue rationally-considered high-level goals. (Much like Eliezer probably doesn't give away money that could be used to design and build FAI — because, given the very small number of people now working on the problem, and given the small number of people capable of working on the problem, that would be irrational of him.) There's nothing wrong with believing in what you're doing, and believing that such a thing is optimal. ...Perhaps it is optimal. If it's not, th... (read more)

4MugaSofer
Having reviewed your links:

Your first link (https://www.youtube.com/watch?v=MgGyvxqYSbE) both appears to be, and is, a fairly typical YouTube conspiracy theory documentary that merely happens to focus on psychopaths. It was so bad I seriously considered giving up on reviewing your stuff. I strongly recommend that, whatever you do, you cease using this as your introductory point.

"The Psychology of Evil" was mildly interesting; although it didn't contain much in the way of new data for me, it contained much that is relatively obscure. I did notice, however, that he appears to be not only anthropomorphizing but demonizing formless things. Not only are most bad things accomplished by large social forces, most things period are. It is easier for a "freethinker" to do damage than good, although obviously, considering we are on LW, I consider this a relatively minor point.

I find the identification of "people who see reality accurately" with "small-l libertarians" extremely dubious, especially when it goes completely unsupported, as if this were a background feature of reality barely worth remarking on.

The prison-industrial complex link is meh; this, on the other hand, is excellent, and I may use it myself.

Schaeffer Cox is a fraud, although I can't blame him for trying, and I remain concerned about the general problem even if he is not an instance of it.

The chart remains utterly unrelated to anything you mentioned or seem particularly concerned about here.
6hairyfigment
So, is this trolling? You cite the Milgram experiment, in which the authorities did not pretend to represent the government. The prevalence and importance of non-governmental authority in real life is one of the main objections to libertarianism, especially the version you seem to promote here (right-wing libertarianism as moral principle).
2soreff
Concern about sociopaths applies to both business and government: http://thinkprogress.org/justice/2014/01/09/3140081/bridge-sociopathy/
3TheAncientGeek
The non-aggression principle is horribly broken.
6MugaSofer
I'm on a mobile device right now — I'll go over your arguments, links, and videos in more detail later, so here are my immediate responses, nothing more.

Wait, why would evolution make us vulnerable to sociopaths? Wouldn't patching such a weakness be an evolutionary advantage? Wouldn't a total lack of mirror neurons make people much harder to predict, crippling social skills?

"Ignorant" is not, and should not be, a synonym for "bad". If you have valuable information for me, I'll own up to it.

Those strike me as near-meaningless terms, with connotations chosen specifically so people will have a problem with them despite their vagueness.

Did you accidentally a word there? I don't follow your point.

And clearly, they all deliberately chose the suboptimal choice, in full knowledge of their mistake. You're joking, right?

Statistical likelihood of being murdered by your own government, during peacetime, worldwide. i.e. not my statistical likelihood, i.e. nice try, but no-one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph.)

As long as other humans exist in competition with other humans, there is no way to keep AI as safe AI.

Agreed, but in need of qualifiers. There might be a way. I'd say "probably no way." As in, "no guaranteed-reliable method, but a possible likelihood."

As long as competitive humans exist, boxes and rules are futile.

I agree fairly strongly with this statement.

The only way to stop hostile AI is to have no AI. Otherwise, expect hostile AI.

This can be interpreted in two ways. The first sentence I agree with if reworded as "... (read more)

Also, the thresholds for "simple majoritarianism" usually need to be much higher in order to obtain intelligent results. No threshold should be reachable by three people. Three people could be goons who are being paid to interfere with the LW forum. That then means that if people are uninterested, or those goons are "johnny on the spot" (the one likely characteristic of the real-life agents provocateurs I've encountered), then legitimate karma is lost.

Of course, karma itself has been abused on this site (and all o... (read more)
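
One way to make the threshold idea above concrete is a rule requiring both a minimum quorum and a supermajority, so that three coordinated accounts can never trigger it alone. A sketch, with hypothetical numbers of my own choosing:

```python
# Sketch of a moderation threshold: a penalty fires only when enough distinct
# voters have weighed in (quorum) AND the downvote share is overwhelming.

def threshold_reached(downvotes, upvotes, quorum=10, supermajority=0.75):
    total = downvotes + upvotes
    return total >= quorum and downvotes / total >= supermajority

print(threshold_reached(downvotes=3, upvotes=0))   # False: three goons aren't enough
print(threshold_reached(downvotes=10, upvotes=2))  # True: 12 voters, ~83% down
```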

Intelligently replying to trolls provides useful "negative intelligence." If someone has a witty counter-communication to a troll, I'd like to read it, the same way George Carlin slows down for auto wrecks. Of course, I'm kind of a procrastinator.

I know: a popup window could appear that computes [minutes spent replying to this comment] × [hourly rate you charge for work] × 1/60 = "[$###.##] is the money you lost telling us how to put down a troll. We know faster ways: don't feed them."

Of course, any response to a troll MIGHT mean that a res... (read more)
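
The popup arithmetic above works out simply — the original ".016r" factor is 0.0166..., i.e. 1/60, which converts minutes times an hourly rate into dollars. A minimal sketch (function name and example rate are my own, hypothetical):

```python
# Dollars forgone while composing a reply: minutes x hourly rate x (1/60).

def cost_of_reply(minutes_spent, hourly_rate):
    return minutes_spent * hourly_rate / 60

# e.g. a 15-minute reply at $120/hour:
print(f"${cost_of_reply(15, 120):.2f} is the money you lost "
      "telling us how to put down a troll.")  # -> $30.00
```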

The proposals here exist outside the space of people who will "solve" any problems that they decide are problems. Therefore, they can still follow that advice, and this is simply a discussion area for discussing potential problems and their potential solutions — all of which can be ignored.

That my earlier comment — to the effect of "I'm more happy with LessWrong's forum than I am unhappy with it, but it still falls far short of an ideally-interactive space" — should be construed as "doing nothing to improve the forum" is definitely a v... (read more)

Too much information can be ignored; too little information is sometimes annoying. I'd always welcome your reason for explaining your downvote, especially if it seems legitimate to me.

If we were going to get highly technical, a somewhat interesting thing to do would be to allow a double click to differentiate your downvote, dividing it into several "slider bars." People who didn't differentiate their downvotes would be listed as a "general downvote"; those who did differentiate would be listed as a "specific reason downvote." ... (read more)
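
A toy model of the differentiated-downvote idea (my own sketch; the categories and field names are hypothetical, not an actual LW feature):

```python
# A plain click records a general vote; a double click opens sliders that
# attach weighted reasons to it.
from dataclasses import dataclass, field

@dataclass
class Vote:
    direction: int                               # +1 upvote, -1 downvote
    reasons: dict = field(default_factory=dict)  # e.g. {"factually wrong": 0.8}

    @property
    def kind(self):
        return "specific reason downvote" if self.reasons else "general downvote"

plain = Vote(direction=-1)
detailed = Vote(direction=-1, reasons={"factually wrong": 0.8, "tone": 0.2})
print(plain.kind)     # general downvote
print(detailed.kind)  # specific reason downvote
```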

No web discussion forum I know of has filtering capabilities even in the ball park of Usenet, which was available in the 80s. Pitiful.

I strongly share your opinion on this. LW is actually one of the better fora I've come across in terms of filtering, and it still is fairly primitive. (Due to the steady improvement of this forum based on some of the suggestions that I've seen here, I don't want to be too harsh.)

It might be a good idea to increase comment-ranking values for people who turn on anti-kibitzing. (I'm sure other people have suggested this, so... (read more)