Neh. Eliezer, I'm kind of disappointed by how you write the tragic ending ("saving" humans) as if it's the happy one, and the happy ending (civilization melting pot) as if it's the tragic one. I'm not sure what to make of that.
Do you really, actually believe that, in this fictional scenario, the human race is better off sacrificing a part of itself in order to avoid blending with the super-happies?
It just blows my mind that you can write an intriguing story like this, and yet draw that kind of conclusion.
Agreed. I was very surprised that Mr. Yudkowsky went with the very ending I, myself, thought would be the "traditional" and irrational ending - where suffering and death are allowed to go on, and even caused, because... um... because humans are special, and pain is good because it's part of our identity!
Yes, and the appendix is useful because it's part of our body.
Excellent. I was reluctant to start reading at first, but when I did, I found it entertaining. This should be a TV series. :)
Eliezer: This post is an example of how all your goals and everything you're doing is affected by your existing preferences and biases.
For some reason, you see Peer's existence as described by Greg Egan as horrible. You propose an insight-driven alternative, but this seems no more convincing to me than Peer's leg carving. I think Peer's existence is totally acceptable, and might even be delightful. If Peer wires himself to get ultimate satisfaction from leg carving, then by definition, he is getting ultimate satisfaction from leg carving. There's nothing w...
Eliezer: all these posts seem to take an awful lot of your time as well as your readers', and they seem to be providing diminishing utility. It seems to me that talking at great length about what the AI might look like, instead of working on the AI, just postpones the eventual arrival of the AI. I think you already understand what design criteria are important, and a part of your audience understands as well. It is not at all apparent that spending your time to change the minds of others (about friendliness etc) is a good investment or that it has any impa...
I stumbled over the same quote. What "gift"? From whom? What "responsibility"? And just how is being "lucky" at odds with being "superior"?
To see the nonsense, let me paraphrase:
"Because giftedness is not to be talked about, no one tells human children explicitly, forcefully and repeatedly that their intellectual talent is a gift. That they are not superior animals, but lucky ones. That the gift brings with it obligations to other animals on Earth to be worthy of it."
The few people who honestly believe that are called a lunatic fringe. And yet, it is the same statement as Murray's, merely in a wider context.
What Kevin Dick said.
The benefit to each player from mutual cooperation in a majority of the rounds is much more than the benefit from mutual defection in all rounds. Therefore it makes sense for both players to invest at the beginning, and cooperate, in order to establish each other's trustworthiness.
Tit-for-tat seems like it might be a good strategy in the very early rounds, but as the game goes on, the best reaction to defection might become two defections in response, and in the last rounds, when the other party defects, the best response might be all defections until the end.
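To make the comparison concrete, here is a minimal sketch of an iterated game. The payoff matrix (3 points each for mutual cooperation, 1 each for mutual defection, 5 and 0 for a unilateral defection) is the conventional choice for this dilemma, not anything from the post:

```python
# Iterated prisoner's dilemma sketch with the conventional payoff matrix.
PAYOFF = {('C', 'C'): (3, 3), ('D', 'D'): (1, 1),
          ('C', 'D'): (0, 5), ('D', 'C'): (5, 0)}

def tit_for_tat(own_history, other_history):
    # Cooperate on the first round, then mirror the opponent's last move.
    return other_history[-1] if other_history else 'C'

def always_defect(own_history, other_history):
    return 'D'

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players lock into mutual cooperation: 3 points per round each.
print(play(tit_for_tat, tit_for_tat, 100))    # (300, 300)
# Against a pure defector, tit-for-tat loses only the first round,
# then settles into mutual defection: far below the cooperative payoff.
print(play(tit_for_tat, always_defect, 100))  # (99, 104)
```

The gap between 300 and 104 over 100 rounds is the sense in which investing in cooperation early pays off.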
An excellent way to pose the problem.
Obviously, if you know that the other party cares nothing about your outcome, then you know that they're more likely to defect.
And if you know that the other party knows that you care nothing about their outcome, then it's even more likely that they'll defect.
Since the way you posed the problem precludes an iteration of this dilemma, it follows that we must defect.
Eliezer: what I proposed is not a superintelligence, it's a tool. Intelligence is composed of multiple factors, and what I'm proposing is stripping away the active, dynamic, live factor - the factor that has any motivations at all - and leaving just the computational part; that is, leaving the part which can navigate vast networks of data and help the user make sense of them and come to conclusions that he would not be able to on his own. Effectively, what I'm proposing is an intelligence tool that can be used as a supplement by the brains of its users.
How...
Looks like the soldier quote is gonna be big in comments. I think it's out of place too, and as opposed to most other quotes that Eliezer comes up with, it doesn't make a lot of sense. In the same way as: "It is the scalpel, not the surgeon, or the nurse, that fixed your wounds!"
Soldiers are tools wielded by the structure in power, and it is the structure in power that determines whether the soldiers are going to protect your rights or take them away.
Perhaps, "The One" might argue, it is a different kind of person who becomes a soldier...
Kaj makes the efficiency argument in favor of full-fledged AI, but what good is efficiency when you have fully surrendered your power?
What good is being the president of a corporation any more, when you've just pressed a button that makes a full-fledged AI run it?
Forget any leadership role in a situation where an AI comes to life. Except in the case that it is completely uninterested in us and manages to depart into outer space without totally destroying us in the process.
Why build an AI at all?
That is, why build a self-optimizing process?
Why not build a process that accumulates data and helps us find relationships and answers that we would not have found ourselves? And if we want to use that same process to improve it, why not let us do that ourselves?
Why be locked out of the optimization loop, and then inevitably become subjects of a God, when we can make ourselves a critical component in that loop, and thus 'be' gods?
I find it perplexing why anyone would ever want to build an automatic self-optimizing AI and switch it to...
My earlier comment is not to imply that I think "maximization of human happiness" is the most preferred goal.
An obvious one, yes. But faulty; "human" is a severely underspecified term.
In fact, I think that putting in place a One True Global Goal would require ultimate knowledge about the nature of being, to which we do not have access currently.
Possibly, the best we can do is come up with a plausible global goal that suits us for the medium run, while we try to find out more.
That is, after all, what we have always done as human beings.
Eliezer: You have perhaps already considered this, but I think it would be helpful to learn some lessons from E-Prime when discussing this topic. E-Prime is a subset of English that bans most varieties of the verb "to be".
I find sentences like "murder is wrong" particularly underspecified and confusing. Just what, exactly, is meant by "is", and "wrong"? It seems like agreeing on a definition for "murder" is the easy part.
It seems the ultimate confusion here is that we are talking about instrumental values (...
frelkins: Should I apologize, then, for not yet having developed sufficient wit to provide pleasure with style to those readers who are not pleased by the thought?
Cynicism is warranted to the extent that it leads to a realistic assessment and a predictive model of the world.
Cynicism is exaggerated when it produces an unrealistic, usually too pessimistic, model of the world.
But to the extent that cynicism is a negative evaluation of "what is", I am not being a cynic in this topic.
I am not saying, bitterly, how sad it is that most people are really...
Phillip Huggan - let me just say that I think you are an arrogant creature that does much less good to the world than he thinks. The morality you so highly praise only appears to provide you with a reason to smugly think of yourself as "higher developed" than others. Its benefit to you, and its selfish motivation, is plainly clear.
Phillip Huggan: "Denis, are you claiming there is no way to commit acts that make others happy?"
Why the obsession with making other people happy?
Phillip Huggan: "Or are you claiming such an act is always out of self-interest?"
Such acts are. Stuff just is. Real reasons are often unknowable; and if known, would be trivial, technical, mundane.
In general, I wouldn't say self-interest. It is not in your self interest to cut off your penis and eat it, for example. But some people desire it and act on it.
Desire. Not necessarily logical. Does n...
Thanks for the link to The People's Romance!
Disagreeing with Mr. Huggan, I'd say Obert is the one without a clue.
Obert seems to be trying to find some external justification for his wants, as if it's not sufficient that they are his wants; or as if his wants depend on there being an external justification, and his mental world would collapse if he were to acknowledge that there isn't an external justification.
I would compare morality to patriotism in the sense of the Onion article that Robin Hanson recently linked to. Much like patriotism, morality is something adopted by people who like to believe ...
Unknown: "For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?"
What sort of stupid question is this? :-) But of course! If I gave you a billion dollars, would it make any difference to your behavior? :-)
mtraven: "Psychopathy is not "not believing in morality": it entails certain kinds of behaviors, which naive analyses attribute to "lack of morality", but which I would argue are a result of aberrant preferences that manifest as aberrant behavior and can be explained without recourse to the concept of morality."
Exactly. Logically, I can agree entirely with Marquis de Sade, and yet when reading Juliette, my stomach turns at about page 300, and I just can't read any more about the raping and the burning and the torture.
It...
Not having read the other comments, I'd say Eliezer is being tedious.
I'd do whatever the hell I want, which is what I am already doing.
Interesting stuff about the preservation of phase space volume, though. I appreciate it, I previously knew nothing about that.
Reading today's fare is a bit like eating unflavored oatmeal. :-)
It seems to me that the person who can read this and understand it, already knows it.
But the person who does not know it, cannot understand it and will be frustrated by reading it.
I'm not sure what your intention is with the whole series of posts, but if you'd like to enlighten the muggles, the trick is to explain it in a concise, striking, unusual, easily understood, entertaining manner.
Of course, that takes genius. :-)
But otherwise you are writing primarily for people who already know it.
In yet other words: some of your posts, I will forward to my wife. Others, I won't. This one is one of the latter.
I should however note that one of the last mathy posts (Mutual Information) struck a chord with me and caused an "Aha!" moment for which I am grateful.
Specifically, it was this:
I digress here to remark that the symmetry of the expression for the mutual information shows that Y must tell us as much about Z, on average, as Z tells us about Y. I leave it as an exercise to the reader to reconcile this with anything they were taught in logic class about how, if all ravens are black, being allowed to reason Raven(x)->Black(x) doesn't mean you're al...
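The symmetry that struck me is easy to check numerically. A minimal sketch, with a made-up joint distribution (nothing here comes from the post itself):

```python
from math import log2

def mutual_information(joint):
    # joint maps (y, z) -> probability.
    # I(Y;Z) = sum over (y,z) of p(y,z) * log2( p(y,z) / (p(y) * p(z)) )
    p_y, p_z = {}, {}
    for (y, z), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
        p_z[z] = p_z.get(z, 0.0) + p
    return sum(p * log2(p / (p_y[y] * p_z[z]))
               for (y, z), p in joint.items() if p > 0)

# An arbitrary correlated joint distribution over (Y, Z).
joint = {('raven', 'black'): 0.4, ('raven', 'white'): 0.1,
         ('dove',  'black'): 0.1, ('dove',  'white'): 0.4}

# Swapping the roles of Y and Z leaves the value unchanged:
swapped = {(z, y): p for (y, z), p in joint.items()}
print(abs(mutual_information(joint) - mutual_information(swapped)) < 1e-12)  # True
```

The symmetry is visible in the formula itself: the expression treats y and z identically, so Y tells us exactly as much about Z, on average, as Z tells us about Y.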
I think you should go with the advice and post something fun. Especially so if you have "much important material" to cover in following months. No need for a big hurry to lose readers. ;)
Eliezer - the way question #1 is phrased, it is basically a choice between the following:
Be perceived as a hero, with certainty.
Be perceived as a hero with 90% probability, and continue not to be noticed with 10% probability.
This choice will be easy for most people. The expected 50 extra deaths are a reasonable sacrifice for the certainty of being perceived as a hero.
The way question #2 is phrased, it is similarly a choice between the following:
Be perceived as a villain, with certainty.
Not be noticed with 90% probability, and be perceived as a v
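For question #1, the arithmetic behind the "expected 50 extra deaths" works out under this reconstruction of the numbers (500 lives at stake, 1A saves 400 with certainty, 1B saves all 500 with 90% probability and none otherwise - my assumption, chosen to match the figure above, not quoted from the post):

```python
# Hypothetical numbers, chosen only to be consistent with the
# "expected 50 extra deaths" figure: 500 lives at stake.
lives = 500

deaths_1a = lives - 400                 # 1A: 100 deaths, with certainty
deaths_1b = 0.9 * 0 + 0.1 * lives      # 1B: 50 expected deaths

print(deaths_1a - deaths_1b)  # 50.0 expected extra deaths from choosing 1A
```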
Not sure if anyone pointed this out, but in a situation where you don't trust the organizer, the proper execution of 1A is a lot easier to verify than the proper execution of 1B, 2A and 2B.
1A minimizes your risk of being fooled by some hidden cleverness or violation of the contract. In 1B, 2A and 2B, if you lose, you have to verify that the random number generator is truly random. This can be extremely costly.
In option 1A, verification consists of checking your bank account and seeing that you gained $24,000. Straightforward and simple. Hardly any risk of being deceived.
For all your talk about The One, I'm going to start to call you Morpheus.
Eliezer - who is this "the one" you keep talking about? Do you mean Neo? ;)
Joseph - well, people like you aren't the ones who need to be accompanied to the stadium by the police.
I agree with Eliezer that it seems to be the in-group/out-group dynamic that drives the popularity of sports. The popularity in turn drives the ads, the ads provide a revenue opportunity, and the revenue opportunity drives the high salaries of popular players.
The dynamic seems ridiculous to those of us who find the in-group/out-group dynamic silly. Then again, those of us who find that silly, and so do not contribute to the salaries of football players, still support the high salaries for superstars in other roles. Jerry Seinfeld and Ray Romano probably mad...
Eli: great posts, but you are continuously abusing "the one", "the one", "the one". That's not how the word "one" is used in the way you are trying to use it. Proper usage is "one", without "the".
Furthermore, when the pronoun needs to be repeated, the nicer and more traditional usage is "one ... one's ... to one", and not "one ... their ... to them".
See Usage Note here.
In the Verizon case, George can apply the modesty argument and still come up with the conclusion that he is almost certainly right.
He needs to take into account two things: (1) what other people besides Verizon think about the distinction between .002 dollars and .002 cents, and (2) what is the likelihood that Verizon would admit the mistake even if they know there is one.
Admitting the mistake and refunding one customer might as well have the consequence of having to refund tens of thousands of customers and losing millions of dollars. Even if that's the u...
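As a side note on the arithmetic of the dispute itself: the two readings of the quoted rate differ by a plain factor of 100 (the usage figure below is illustrative, not Verizon's actual number):

```python
# The disputed distinction: a rate quoted in cents but billed in dollars.
usage_kb = 35_000  # illustrative usage figure

rate_dollars_per_kb = 0.002               # what was billed: 0.002 dollars/KB
rate_cents_as_dollars = 0.002 / 100       # what was quoted: 0.002 cents/KB

billed = usage_kb * rate_dollars_per_kb   # about $70
quoted = usage_kb * rate_cents_as_dollars # about $0.70

print(round(billed / quoted))  # 100
```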
Despite your post being entirely correct, if for a moment we ignore the welfare of humanity and consider the welfare of the United States alone, there is a good chance that this irrational overreaction will be remembered, and that it will serve as deterrence to any aspiring attackers for a hundred years to come.
Sometimes irrational wrath pays, especially if you can inflict pain much more effectively than you need to endure it.
The cost to humanity is probably dominated by some 1,000,000 deaths in Iraq, but the cost to the U.S. at least in terms of deaths is comparatively smaller. The Iraq deaths are an externality.
"Despite your post being entirely correct, if for a moment we ignore the welfare of humanity and consider the welfare of the United States alone, there is a good chance that this irrational overreaction will be remembered, and that it will serve as deterrence to any aspiring attackers for a hundred years to come."
On the contrary, this now teaches someone that if they want to do damage to the United States they can easily get it to engage in an autoimmune disorder along with a few overseas adventures.
Moreover, this isn't the only example. Look at how one of...
As a non-US citizen, I can state that the irrational over-reaction was exactly the response that the terrorists were aiming for. Lots of Fear, Uncertainty and Doubt - lots of panic and mindless reaction... it has also greatly debilitated the effectiveness (and no doubt the profitability) of the entire world's air-transport system, without actually enhancing security thereby.
There is no deterrent here.
IMO this would not in any way discourage future attackers - it would encourage them.
Unquestionably, things get done much faster by groups of people who are very much alike. Differences of opinion only tend to brake things.
The question is not whether you need people who are different in order to brake the group. The question is whether you're in the right group to begin with. As per Kuhn, things will get done faster and better if members of the group share a lot of commonalities.
If you're in the right group, excluding dissenters will allow you to progress faster. But if you're in the wrong group, then you're going to be making progress towards the wrong things.
Arguing about politics is helping people. If it makes sense that "a bad argument gets a counterargument, not a bullet," then it makes sense that frictions among people's political beliefs should be cooled by allowing everyone to state their case. Not necessarily on this site, but as a general matter, I don't think that talking about politics is either a mind-killer or time-wasting. For me personally it's a motivator both to understand more about the facts, so that I can present arguments; to understand more about other people, so I know why they ...
dearieme: "Given that WWII showed that race could be dynamite, it's surely astonishing that so many rich countries have permitted mass immigration by people who are not only of different race, but often of different religion. Even more astonishing that they've allowed some groups to keep immigrating even after the early arrivers from those groups have proved to be failures, economically or socially. Did anyone predict that 60 years ago?"
I thought that the excessive tolerance and the aversion to distinguishing groups of people based on factual diffe...
Louis: "The more recent example is the TV series BattleStar Galactica. Of course it's unrealistic and biased, but it changed my views on the issues of AGI's rights. Can a robot be destroyed without a proper trial? Is it OK to torture it? to rape it? What about marrying one? or having children with it (or should I type 'her')?"
See this: http://denisbider.blogspot.com/2007/11/weak-versus-strong-law-of-strongest_15.html
You are confused because you misinterpret humanity's traditional behavior towards other apparently sentient entities in the first pl...
Well. I finally got around to reading The Unwilling Warlord, and I must say that, despite the world of Ethshar being mildly interesting, the book is a disappointment. It builds up nicely in the first 2/3 of the book, but in the last 1/3, when you expect it to unfold and flourish in some interesting, surprising, revealing manner, Watt-Evans instead decides to pursue the lamest, most boring plot possible, all the while insulting the reader's intelligence.
For the last 1/3 of the book, Watt-Evans attempts to make the eventual reasons for Vond's undoing a "...