Wiener's book describes the problem, and in the same section he states that he holds little hope for the social sciences becoming as exact and prescriptive as the hard sciences.
I believe that the singularitarian position somewhat contradicts this view.
I believe that the answer is to create more of the kinds of minds we like being surrounded by, and fewer of the kinds of minds we dislike being surrounded by.
Most of us dislike being surrounded by intelligent sociopaths who are ready to pounce on any weakness of ours, to exploit, rob, o...
So, in any case, if you stand up to the system and/or are "caught" by it, the system will give you nothing but pure sociopathy to deal with, except possibly for your interaction with those few "independent" jurors who are nonetheless "selected" by the unconstitutional, unlawful means known as "subject matter voir dire." The system of injustice and oppression that we currently have in the USA is a result of this grotesque "jury selection" process. (This process explains how randomly-selected juror...
Hierarchical, Contextual, Rationally-Prioritized Dishonesty
This is an outstanding article, and it closely relates to my overall interest in LessWrong.
I'm convinced that lying to someone who is evil and obviously has immediate evil intentions is morally optimal. This seems to be an obvious implication of basic logic. (i.e., You have no obligation to tell the Nazis who are looking for Anne Frank that she's hiding in your attic. You have no obligation to tell the Fugitive Slave Hunter that your neighbor is a member of the Underground Railroad. ...You have no ...
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
The ultimate result of shielding men from the results of folly is to fill the world with fools.
— Herbert Spencer (1820-1903), "State Tampering with Money and Banks" (1891)
I think Spooner got it right:
If the jury have no right to judge of the justice of a law of the government, they plainly can do nothing to protect the people against the oppressions of the government; for there are no oppressions which the government may not authorize by law.
-Lysander Spooner from "An Essay on the Trial by Jury"
There is legitimate law, but not once law is licensed and the system has been recursively destroyed by sociopaths, as our current system of law has been. At such a point in time, perverse incentives and the punishment ...
The gardeners, receptionists, and cooks are secure in their jobs for decades to come.
Except that on timelines driven by exponentially increasing computation technology, decades are compressed into minutes after the knee of the exponential. The extra time a good cook has isn't long.
Let's hope that we're not still paying rent then, or we might find ourselves homeless.
If you're right (and you may well be), then I view that as a sad commentary on the state of human education, and I view tech-assisted self-education as a way of optimizing that inherently wasteful "hazing" system you describe. I think it's likely that what you say is true for some high percentage of classes, but untrue for a very small minority of highly-valuable classes.
Also, the university atmosphere is good for social networking, which is one of the primary values of going to MIT or Yale.
Probably true, but I agree with Peter Voss. I don't think any malevolence is the most efficient use of the AGI's time and resources. I think AGI has nothing to gain from malevolence. I don't think the dystopia I posited is the most likely outcome of superintelligence. However, while we are on the subject of the forms a malevolent AGI might take, I do think this is the type of malevolence most likely to allow the malevolent AGI to retain a positive self-image.
(Much the way environmentalists can feel better about introducing sterile males into crop-pest p...
i.e. not my statistical likelihood, i.e. nice try, but no one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph).
I suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood.
If sociopath-driven collectivism were easy for social systems to detect and neutralize, we probably wouldn't give so much of our wealt...
An interesting question to ask is "how many people who favor markets understand the best arguments against them, and vice versa?" Because we're dealing with humans here, my suspicion is that if there's a lot of disagreement, it stems largely from unwillingness to consider, and unfamiliarity with, the other side. So, in that regard you might be right.
Then again, we're supposed to be rational, and willing to change our minds if evidence supports that change, and perhaps some of us are actually capable of such a thing.
It's a debate wort...
"how generalization from fictional evidence is bad"
I don't think this is a universal rule. I think this is very often true because humans tend to generalize so poorly, tend to have harmful biases based on evolution, and tend to write and read bad (overly emotional, irrational, poorly-mapped-to-reality) fiction.
Concepts can come from anywhere. However, most fiction maps poorly to reality. If you're writing nonfiction, at least if you're trying to map to reality itself, you're likely to succeed in at least getting a few data points from reality co...
I strongly agree that universal, singular, true malevolent AGI doesn't make for much of a Hollywood movie, primarily due to points 6 and 7.
What is far more interesting is an ecology of superintelligences that have conflicting goals but have agreed to be governed by enlightenment values. Of course, some may be smart enough (or stupid enough) to try subterfuge, and some may be enough smarter than the others to pull off a subterfuge and get away with it. There can be a relative timeline where nearby ultra-intelligent machines compete with each other, or...
I don't know; in terms of dystopia, I think that an AGI might decide to "phase us out" prior to the singularity, if it were really malevolent. Make a bunch of attractive but sterile female robots, and a bunch of attractive but sterile male robots. Keep people busy with sex until they die of old age. A "gentle good night" abolition of humanity that isn't much worse (or maybe way better) than what they had experienced for 50M years.
Releasing sterile attractive mates into a population is a good "low ecological impact" way of decreasing it. Although, why would a superintelligence be opposed to _all_ humans? I find this somewhat unlikely, given a self-improving design.
Philip K. Dick's "Second Variety" is far more representative of our likelihood of survival against a consistent Terminator-level antagonist / AGI. It's still worth reading, as is Harlan Ellison's "Soldier," which The Terminator is based on. The Terminator also wouldn't likely use a firearm to try to kill Sarah Connor, as xkcd notes :) ...but it also wouldn't use a drone.
It would do what Richard Kuklinski did: make friends with her, get close enough to spray her with cyanide solution (odorless, undetectable, she seeming...
A lot of people who are unfamiliar with AI dismiss ideas inherent in the strong AGI argument. I think it's always good to include the "G," or to qualify your explanation with something like "the AGI formulation of AI, also known as 'strong AI.'"
The risks of artificial intelligence are strongly tied with the AI’s intelligence.
The AGI's intelligence, that is. AI such as Numenta's Grok can possess unbelievable neocortical intelligence, but without a reptile brain, and a hippocampus and thalamus that shift between goals, it "just follows or...
Ayn Rand noticed this too, and was a very big proponent of the idea that colleges indoctrinate as much as they teach. While I believe this is true, and that the indoctrination has a large, mostly negative, effect on people who mindlessly accept self-contradicting ideas into their philosophy and moral self-identity, I believe that it's still good to get a college education in STEM. I believe that STEM majors will benefit from the useful things they learn more than they will be hurt or held back by the evil, self-contradictory things they "learn"...
You certainly wrote quite a lot of ideological mish-mash to dodge the simplest possible explanation: a, if not the, primary function of elite education (as compared to non-elite education) is to filter out an arbitrary caste of individuals capable of optimizing their way through arbitrarily difficult trials and imbue that caste with elite status. The precise content of the trials doesn't really matter (hence the existence of both Yale and MIT), as long as they're sufficiently difficult to ensure that few pass.
I'm writing from an elite engineering universi...
I think it is rationally optimal for me to not give any money away, since I need all of it to pursue rationally-considered high-level goals. (Much like Eliezer probably doesn't give away money that could be used to design and build FAI, because given the very small number of people now working on the problem, and the small number of people capable of working on it, that would be irrational of him.) There's nothing wrong with believing in what you're doing, and believing that such a thing is optimal. ...Perhaps it is optimal. If it's not, th...
As long as other humans exist in competition with other humans, there is _no_ way to keep AI as safe AI.
Agreed, but in need of qualifiers. There might be a way. I'd say "probably no way." As in, "no guaranteed-reliable method, but a possible likelihood."
As long as competitive humans exist, boxes and rules are futile.
I agree fairly strongly with this statement.
The only way to stop hostile AI is to have no AI. Otherwise, expect hostile AI.
This can be interpreted in two ways. The first sentence I agree with if reworded as "...
Also, the thresholds for "simple majoritarianism" usually need to be much higher in order to obtain intelligent results. No threshold should be reachable by just three people. Three people could be goons who are being paid to interfere with the LW forum. That means that if people are uninterested, or those goons are "johnny-on-the-spot" (the one likely characteristic of the real-life agents provocateurs I've encountered), then legitimate karma is lost.
Of course, karma itself has been abused on this site (and all o...
Intelligently replying to trolls provides useful "negative intelligence." If someone has a witty counter-communication to a troll, I'd like to read it, the same way George Carlin slows down for auto wrecks. Of course, I'm kind of a procrastinator.
I know: a popup window could appear that computes [minutes spent replying to this comment] × [hourly rate you charge for work] × (1/60) = "[$###.##] is the money you lost telling us how to put down a troll. We know faster ways: don't feed them."
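For concreteness, here's a minimal sketch of that popup's arithmetic; the function name, the wording, and the example numbers are purely illustrative, not an actual LW feature:

```typescript
// Minimal sketch of the "money lost replying to a troll" popup math.
// minutesSpent and hourlyRate would come from the user / forum UI; dividing
// by 60 converts minutes to hours (the 1/60 factor above).
function moneyLostReplying(minutesSpent: number, hourlyRate: number): string {
  const dollarsLost = minutesSpent * hourlyRate * (1 / 60);
  return `$${dollarsLost.toFixed(2)} is the money you lost telling us how to put down a troll. We know faster ways: don't feed them.`;
}

// Example: a 15-minute reply at $80/hour comes out to $20.00.
console.log(moneyLostReplying(15, 80));
```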
Of course, any response to a troll MIGHT mean that a res...
The proposals here exist outside the space of people who will "solve" any problems that they decide are problems. Therefore, those people can still follow that advice, and this is simply a discussion area for potential problems and their potential solutions, all of which can be ignored.
That my earlier comment to the effect of "I'm happier with LessWrong's forum than I am unhappy with it, but it still falls far short of an ideally-interactive space" should be construed as "doing nothing to improve the forum" is definitely a v...
Too much information can be ignored; too little information is sometimes annoying. I'd always welcome your reason for a downvote, especially if it seems legitimate to me.
If we were going to get highly technical, a somewhat interesting thing to do would be to allow a double click to differentiate your downvote, and divide it into several "slider bars." People who didn't differentiate their downvotes would be listed as a "general downvote." Those who did differentiate would be listed as a "specific reason downvote." ...
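To make that concrete, here's a minimal sketch of how such a differentiated vote might be recorded; the category names and field names are assumptions for illustration only, not anything LW actually implements:

```typescript
// Sketch of a vote record that distinguishes a plain ("general") downvote
// from a "specific reason" downvote broken into slider-bar categories.
type DownvoteReason = "inaccurate" | "off-topic" | "uncivil" | "low-effort";

interface DownvoteRecord {
  commentId: string;
  voterId: string;
  // Absent means the voter didn't double-click to differentiate,
  // so the vote is listed as a "general downvote."
  reasons?: Partial<Record<DownvoteReason, number>>; // slider values in [0, 1]
}

// A differentiated ("specific reason") downvote:
const specific: DownvoteRecord = {
  commentId: "c123",
  voterId: "u456",
  reasons: { inaccurate: 0.8, "low-effort": 0.3 },
};

// An undifferentiated ("general") downvote:
const general: DownvoteRecord = { commentId: "c123", voterId: "u789" };
```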
No web discussion forum I know of has filtering capabilities even in the ball park of Usenet, which was available in the 80s. Pitiful.
I strongly share your opinion on this. LW is actually one of the better fora I've come across in terms of filtering, and it still is fairly primitive. (Due to the steady improvement of this forum based on some of the suggestions that I've seen here, I don't want to be too harsh.)
It might be a good idea to increase comment-ranking values for people who turn on anti-kibitzing. (I'm sure other people have suggested this, so...
Continuing on, Wiener writes:
...