Suffice it to say that I think the above is a positive move ^.^
"I hope you others feel that the character was primarily a victim way back when, instead of a dirtbag."
Of course not. The victim was the girl he murdered.
That's the point of the chapter title - he had something to atone for. It's what tvtropes.org calls a Heel Face Turn.
And at the same time, they were both victims, as are we all, of human nature. Never let it be said that if you are a victim, you are only a victim.
A Type II supernova emits most of its energy in the form of neutrinos; these interact with the extremely dense inner layers that didn't quite manage to accrete onto the neutron star, depositing energy that creates a shockwave that blows off the rest of the material. I've seen it claimed that the neutrino flux would be lethal out to a few AU, though I suspect you wouldn't get the chance to actually die of radiation poisoning.
A planet the size and distance of Earth would intercept enough photons and plasma to exceed its gravitational binding energy, though I'...
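For anyone who wants to check the arithmetic, here's a rough back-of-envelope sketch. The figures are round textbook values of my own choosing (ejecta kinetic energy of order 1e44 J, neutrino output of order 1e46 J); exact numbers vary from supernova to supernova:

```python
# Back-of-envelope check: does an Earth-like planet at 1 AU intercept enough
# supernova ejecta energy to exceed its own gravitational binding energy?
R_EARTH = 6.371e6        # planetary radius, m
M_EARTH = 5.972e24       # planetary mass, kg
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11            # orbital distance, m

E_KINETIC = 1e44         # typical Type II ejecta kinetic energy, J (assumed)
E_NEUTRINO = 1e46        # energy carried by neutrinos, J (mostly passes through matter)

# Gravitational binding energy of a uniform-density sphere: 3GM^2 / 5R
e_bind = 3 * G * M_EARTH**2 / (5 * R_EARTH)

# Fraction of an isotropic blast intercepted: pi R^2 / (4 pi d^2)
fraction = R_EARTH**2 / (4 * AU**2)
e_intercepted = fraction * E_KINETIC

print(f"binding energy  ~ {e_bind:.2e} J")        # ~2.2e32 J
print(f"intercepted     ~ {e_intercepted:.2e} J") # ~4.5e34 J, roughly 200x larger
```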
The point is that the Normal Ending is the most probable one.
Historically, humans have not typically surrendered to genocidal conquerors without an attempt to fight back, even when resistance is hopeless, let alone when (as here) there is hope. No, I think this is the true ending.
Nitpick: eight hours to evacuate a planet? I think not, no matter how many ships you can call. Of course the point is to illustrate a "shut up and multiply" dilemma; I'm inclined to think both horns of the dilemma are sharper if you change it to eight days.
But overall...
You guys are very trusting of super-advanced species who already showed a strong willingness to manipulate humanity with superstimulus and pornographic advertising.
I'm not planning to trust anyone. My suggestion was based on the assumption that it is possible to watch what the Superhappies actually do and detonate the star if they start heading for the wrong portal. If that is not the case (which depends on the mechanics of the Alderson drive) then either detonate the local star immediately, or the star one hop back.
Hmm. The three networks are otherwise disconnected from each other? And the Babyeaters are the first target?
Wait a week for a Superhappy fleet to make the jump into Babyeater space, then set off the bomb.
(Otherwise, yes, I would set off the bomb immediately.)
Either way, though, there would seem to be a prisoner's dilemma of sorts here. Suppose we could do unto the Babyeaters without them being able to do unto us - altering them, even against their will, for the sake of our values. Wouldn't that set up a form of prisoner's dilemma with respect to other species whose values differ from ours, and who are powerful enough to do the same to us? Wouldn't the same metarationality results hold? I'm not entirely sure about this, but..
I'm incli...
Specifically, the point of utility theory is the attempt to predict the actions of complex agents by dividing them into two layers:
1. Values - what the agent is trying to achieve.
2. Machinery - the optimization process by which it tries to achieve those values.
The idea being that if you can't know the details of the machinery, successful prediction might be possible by plugging the values into your own equivalent machinery.
Does this work in real life? In practice it works well for simple agents, or complex agents in simple/narrow contexts. It works well for Deep Blue, or for Kasparov on the chessboard. It doesn't w...
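As a toy illustration of what I mean (my own example, nothing more - the function names and payoff numbers are made up): you don't model the agent's machinery at all; you take its values and run them through a search procedure of your own.

```python
# Toy illustration of prediction via utility theory: we don't know the agent's
# internal machinery, so we substitute our own search procedure and plug in the
# agent's (assumed known) values.
from typing import Callable, Dict, List

def predict_action(options: List[str], utility: Callable[[str], float]) -> str:
    """Predict the agent's choice as whatever maximizes its utility,
    according to OUR model of the available options."""
    return max(options, key=utility)

# Hypothetical chess-like agent whose values we summarize as
# "prefer material, then mobility, avoid losing tempo".
values: Dict[str, float] = {"win material": 3.0, "gain mobility": 1.0, "lose tempo": -0.5}
moves = ["win material", "gain mobility", "lose tempo"]

print(predict_action(moves, lambda m: values[m]))  # -> "win material"
```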
But if not - if this world indeed ranks lower in my preference ordering, just because I have better scenarios to compare it to - then what happens if I write the Successful Utopia story?
Try it and see! It would be interesting and constructive, and if people still disagree with your assessment, well then there will be something meaningful to argue about.
An amusing if implausible story, Eliezer, but I have to ask, since you claimed to be writing some of these posts with the admirable goal of giving people hope in a transhumanist future:
Do you not understand that the message actually conveyed by these posts, if one were to take them seriously, is "transhumanism offers nothing of value; shun it and embrace ignorance and death, and hope that God exists, for He is our only hope"?
I didn't get that impression, after reading this within the context of the rest of the sequence. Rather, it seems like a warning about the importance of foresight when planning a transhuman future. The "clever fool" in the story (presumably a parody of the author himself) released a self-improving AI into the world without knowing exactly what it was going to do or planning for every contingency.
Basically, the moral is: don't call the AI "friendly" until you've thought of every single last thing.
If existential angst comes from having at least one deep problem in your life that you aren't thinking about explicitly, so that the pain which comes from it seems like a natural permanent feature - then the very first question I'd ask, to identify a possible source of that problem, would be, "Do you expect your life to improve in the near or mid-term future?"
Saved in quotes file.
The way stories work is not as simple as Orson Scott Card's view. I can't do justice to it in a blog comment, but read 'The Seven Basic Plots' by Christopher Booker for the first accurate, comprehensive theory of the subject.
"I'd like to see a study confirming that. The Internet is more addictive than television and I highly suspect it drains more life-force."
If you think that, why haven't you canceled your Internet access yet? :P I think anyone who finds it drains more than it gives back is using it wrong. (Admittedly spending eight hours a day playing World of Warcraft does count as using it wrong.)
"But the media relentlessly bombards you with stories about the interesting people who are much richer than you or much more attractive, as if they actually constituted a large fraction of the world."
This seems to be at least part of the explanation why television is the most important lifestyle factor. Studies of factors influencing both happiness and evolutionary fitness have found television is the one thing that really stands out above the noise -- the less of it you watch, the better off you are in every way.
The Internet is a much better way...
"The increase in accidents for 2002 sure looks like a blip to me"
Looks like a sustained, significant increase to me. Let's add up the numbers. From the linked page, total fatalities 1997 to 2000 were 167,176. Total fatalities 2002 to 2005 were 172,168. The difference (by the end of 2005, already nearly 3 years ago) is about 5,000, more than the total deaths in the 9/11 attacks.
Eliezer,
I was thinking in terms of Dyson spheres -- fusion reactor complete with fuel supply and confinement system already provided, just build collectors. But if you propose dismantling stars and building electromagnetically confined fusion reactors instead, it doesn't matter; if you want stellar power output, you need square AUs of heat radiators, which will collectively be just as luminous in infrared as the original star was in visible.
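A quick Stefan-Boltzmann estimate shows why. This is my own sketch, assuming radiators running near room temperature; hotter radiators need less area (as T^-4) but then glow correspondingly brighter per square metre:

```python
# How much radiator area do you need to dump a star's worth of power?
# Stefan-Boltzmann: P = sigma * A * T^4  =>  A = P / (sigma * T^4)
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26       # solar luminosity, W
AU = 1.496e11          # metres per AU

T_RADIATOR = 300.0     # assumed radiator temperature, K (roughly room temperature)

area_m2 = L_SUN / (SIGMA * T_RADIATOR**4)
area_au2 = area_m2 / AU**2

print(f"area ~ {area_m2:.2e} m^2  (~{area_au2:.0f} square AU)")
# ~8e23 m^2, i.e. a few tens of square AU of radiating surface
```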
Eliezer,
It turns out that there are ways to smear a laser beam across the frequency spectrum while maintaining high intensity and collimation, though I am curious as to how you propose to "pull a Maxwell's Demon" in the face of beam intensity such that all condensed matter instantly vaporizes. (No, mirrors don't work. Neither do lenses.)
As for scattering your parts unpredictably so that most of the attack misses -- then so does most of the sunlight you were supposedly using for your energy supply.
Finally, "trust but verify" is not a new...
Carl,
If "singleton" is to be defined that broadly, then we are already in a singleton, and I don't think anyone will object to keeping that feature of today's world.
Note that altruistic punishment of the type I describe may actually be beneficial, when done as part of a social consensus (the punishers get to seize at least some of the miscreant's resources).
Also note that there may be no such thing as evolved hardscrabble replicators; the number of generations to full colonization of our future light cone may be too small for much evolution to take place. (The log to base 2 of the number of stars in our Hubble volume is quite small, after all.)
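Rough numbers, assuming an order-of-magnitude guess of 10^23 stars (the exact count doesn't matter much): a population that doubles every generation runs out of stars in well under a hundred doublings.

```python
# How many doublings does it take for replicators to fill the Hubble volume?
# If the population doubles each generation, the answer is log2(number of stars).
import math

STARS_IN_HUBBLE_VOLUME = 1e23   # order-of-magnitude estimate (assumption)
generations = math.log2(STARS_IN_HUBBLE_VOLUME)
print(f"~{generations:.0f} doublings")   # ~76 generations; not much room for evolution
```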
I have tended to focus on meta level issues in this sort of context, because I know from experience how untrustworthy our object level thoughts are.
For example, there's a really obvious non-singleton solution to the "serial killer somehow creates his own fully populated solar system torture chamber" problem: a hundred concerned neighbors point Nicoll-Dyson lasers at him and make him an offer he can't refuse. It's a simple enough solution for a reasonably bright five-year-old to figure out in 10 seconds; the fact that I didn't figure it out for mo...
Tim -- I looked at your essay just now, and yes, your Visualization of the Cosmic All seems to agree with mine. (I think Robin's model also has some merit, except that I am not quite so optimistic about the timescales, and I am very much less optimistic about our ability to predict the distant future.)
Now, I should clarify that I don't really expect Moore's Law to continue forever. Obviously the more you extrapolate it, the shakier the prediction becomes. But there is no point at which some other prediction method becomes more reliable. There is no time in the future about which we can say "we will deviate from the graph in this way", because we have no way to see more clearly than the graph.
I don't see any systematic way to resolve this disagreement either, and I think that's because there isn't any. This shouldn't come as a surprise -- if I ...
"To stick my neck out further: I am liable to trust the Weak Inside View over a "surface" extrapolation, if the Weak Inside View drills down to a deeper causal level and the balance of support is sufficiently lopsided."
But there's the question of whether the balance of support is sufficiently lopsided, and if so, on which side. Your example illustrates this nicely:
"I will go ahead and say, "I don't care if you say that Moore's Law has held for the last hundred years. Human thought was a primary causal force in producing Moo...
"Not sure I see your point. All the high speed connections were built long before bittorrent came along, and they were being used for idiotic point-to-point centralised transfers."
No, they weren't. The days of Napster and BitTorrent were, by no coincidence, also the days when Internet speed was in the process of ramping up enough to make them useful.
But of course, the reason we all heard of Napster wasn't that it was the first peer-to-peer data sharing system. On the contrary, we heard of it because it came so late that by the time it arrived, th...
"If you'd asked me in 1995 how many people it would take for the world to develop a fast, distributed system for moving films and TV episodes to people's homes on an 'when you want it, how you want it' basis, internationally, without ads, I'd have said hundreds of thousands."
And you'd have been right. (Ever try running BitTorrent on a 9600 bps modem? Me neither. There's a reason for that.)
"Russell, I think the point is we can't expect Friendliness theory to take less than 30 years."
If so, then fair enough -- I certainly don't claim it will take less.
"So I'm just mentioning this little historical note about the timescale of mathematical progress, to emphasize that all the people who say "AI is 30 years away so we don't need to worry about Friendliness theory yet" have moldy jello in their skulls."
It took 17 years to go from perceptrons to back propagation...
... therefore I have moldy Jell-O in my skull for saying we won't go from manually debugging buffer overruns to superintelligent AI within 30 years...
Eliezer, your logic circuits need debugging ;-)
(Unless the comment was directed...
Robin -- because it needs to be more specific. "Always be more afraid of bad things happening" would reduce effectiveness in other areas. Even "always be more afraid of people catching you and doing bad things to you" would be a handicap if you need to fight an enemy tribe. The requirement is, specifically, "don't violate your own tribe's ethical standards".
odf23ds: "Ack. Could you please invent some terminology so you don't have to keep repeating this unwieldy phrase?"
Well, there are worse things than an unwieldy phrase! Consider how many philosophers have spent entire books trying to communicate their thoughts, and still failed. Looked at that way, Jef's phrase has a very good ratio of length to precision.
Excellent post!
As for explanation, the way I would put it is that ethics consists of hard-won wisdom from many lifetimes, which is how it is able to provide me with a safety rail against the pitfalls I have yet to encounter in my single lifetime.
anki -- "probability estimate" normally means explicit numbers, at least in the cases I've seen the term used, but if you prefer, consider my statement qualified as "... in the form of numerical probability".
anki --
Throughout the experiment, I regarded "should the AI be let out of the box?" as a question to be seriously asked; but at no point was I on the verge of doing it.
I'm not a fan of making up probability estimates in the absence of statistical data, but my belief that no possible entity could persuade me to do arbitrary things via IRC is conditional on said entity having only physically ordinary sources of information about me. If you're postulating a scenario where the AI has an upload copy of me and something like Jupiter brain hardware to run a zillion experiments on said copy, I don't know what the outcome would be.
"How do we know that Russell Wallace is not a persona created by Eliezer Yudkowski?"
Ron -- I didn't let the AI out of the box :-)
Silas -- I can't discuss specifics, but I can say there were no cheap tricks involved; Eliezer and I followed the spirit as well as the letter of the experimental protocol.
"I have a feeling that if the loser of the AI Box experiment were forced to pay thousands of dollars, you would find yourself losing more often."
David -- if the money had been more important to me than playing out the experiment properly and finding out what would really have happened, I wouldn't have signed up in the first place. As it turned out, I didn't have spare mental capacity during the experiment for thinking about the money anyway; I was sufficiently immersed that if there'd been an earthquake, I'd probably have paused to integrate it into the scene before leaving the keyboard :-)
"But most of all - why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good?"
You are asking why anyone would choose life rather than what is good. Inclusive genetic fitness is just the long-term form of life, as personal survival is the short-term form.
The answer is, of course, that one should not. By definition, one should always choose what is good. However, while there are times when it is right to give up one's life for a greater good, they are the exception. Most of the time, life is a subgoal of what is good, so there is no conflict.
I was curious about the remark that simulation results differed from theoretical ones, so I tried some test runs. I think the difference is due to sexual reproduction.
Eliezer's code uses random mating. I modified it to use asexual reproduction or assortative mating to see what difference that made.
Asexual reproduction:
mutation rate 0.1 gave 6 bits preserved
0.05 preserved 12-13 bits
0.025 preserved 27 bits
increasing population size from 100 to 1000 bumped this to 28
decreasing the beneficial mutation rate brought it down to 27 again
so the actual preserved i...
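For anyone who wants to try this sort of run themselves, here is a minimal sketch of an asexual mutation-selection simulation. To be clear, this is not Eliezer's script: genome size, selection scheme, parameter names and the reported measure are my own choices, so it won't reproduce the exact figures above, though the trend (lower mutation rate, more optimization preserved) is the same.

```python
# Minimal asexual mutation-selection run: genomes are bit strings, fitness is
# the number of 1 bits, truncation selection keeps the fitter half as parents.
import random

GENOME_BITS = 100
POP_SIZE = 100
MUTATION_RATE = 0.05      # per-bit flip probability, per generation
GENERATIONS = 500

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Each bit flips independently with probability MUTATION_RATE.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

# Start with a fully optimized population and let mutation pressure erode it.
population = [[1] * GENOME_BITS for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]            # truncation selection, asexual
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

# Mean fitness at mutation-selection balance, as a crude proxy for how much
# of the original optimization the gene pool retains.
mean_fitness = sum(fitness(g) for g in population) / POP_SIZE
print(f"mutation rate {MUTATION_RATE}: mean fitness ~{mean_fitness:.0f} / {GENOME_BITS} bits")
```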
Well, I like the 2006 version better. For all that it's more polemical in style -- and if I recall correctly, I was one of the people against whom the polemic was directed -- it's got more punch. After all, this is the kind of topic where there's no point in even pretending to be emotionless. The 2006 version alloys logic and emotion more seamlessly.