Suffice it to say that I think the above is a positive move ^.^
I hope the rest of you feel that the character was primarily a victim way back when, rather than a dirtbag.
Of course not. The victim was the girl he murdered.
That's the point of the chapter title - he had something to atone for. It's what tvtropes.org calls a Heel Face Turn.
A Type II supernova emits most of its energy in the form of neutrinos; these interact with the extremely dense inner layers that didn't quite manage to accrete onto the neutron star, depositing energy that drives a shockwave, which blows off the rest of the material. I've seen it claimed that the neutrino flux would be lethal out to a few AU, though I suspect you wouldn't get the chance to actually die of radiation poisoning.
A planet the size and distance of Earth would intercept enough photons and plasma to exceed its gravitational binding energy, though I'm skeptical about whether it would actually vaporize; my guess, for what it's worth, is that most of the energy would be radiated away again. Wouldn't make any difference to anyone on the planet at the time, of course.
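For the curious, here's a rough back-of-envelope check of that claim in Python. The figures are standard order-of-magnitude assumptions (about 10^44 J in photons plus ejecta for a core-collapse supernova, the uniform-density-sphere binding energy formula, an orbit of 1 AU), not anything from the story:

```python
# Back-of-envelope: does an Earth-like planet at 1 AU intercept
# more supernova energy than its own gravitational binding energy?

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.97e24      # kg
R_earth = 6.37e6       # m
AU = 1.496e11          # m

# Binding energy of a uniform-density sphere: U = 3*G*M^2 / (5*R).
# (Earth's true value is somewhat higher since it is centrally
# condensed, but this is close enough for an order of magnitude.)
U_bind = 3 * G * M_earth**2 / (5 * R_earth)

# Canonical core-collapse figure: ~1e44 J ("one foe") in photons
# and ejecta kinetic energy; neutrinos carry ~100x more but mostly
# pass through the planet.
E_em_kinetic = 1e44    # J

# Fraction of the sky the planet covers as seen from the star:
# cross-section pi*R^2 over sphere area 4*pi*d^2.
frac = (R_earth / (2 * AU))**2
E_intercepted = frac * E_em_kinetic

print(f"Binding energy:     {U_bind:.2e} J")
print(f"Intercepted energy: {E_intercepted:.2e} J")
print(f"Ratio:              {E_intercepted / U_bind:.0f}x")
# Roughly 2e32 J vs 5e34 J: a couple of hundred times the binding
# energy, consistent with the claim above, though as noted much of
# it would presumably be re-radiated rather than unbinding the rock.
```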
Well-chosen chapter title, and a good wrap-up!
The point is that the Normal Ending is the most probable one.
Historically, humans have not typically surrendered to genocidal conquerors without an attempt to fight back, even when resistance is hopeless, let alone when (as here) there is hope. No, I think this is the true ending.
Nitpick: eight hours to evacuate a planet? I think not, no matter how many ships you can call. Of course the point is to illustrate a "shut up and multiply" dilemma; I'm inclined to think both horns of the dilemma are sharper if you change it to eight days.
But overall a good ending to a good story, and a rare case where a plot is wrapped up by the characters showing the spark of intelligence. Nicely done!
You guys are very trusting of super-advanced species who already showed a strong willingness to manipulate humanity with superstimulus and pornographic advertising.
I'm not planning to trust anyone. My suggestion was based on the assumption that it is possible to watch what the Superhappies actually do and detonate the star if they start heading for the wrong portal. If that is not the case (which depends on the mechanics of the Alderson drive), then either detonate the local star immediately, or the star one hop back.
Hmm. The three networks are otherwise disconnected from each other? And the Babyeaters are the first target?
Wait a week for a Superhappy fleet to make the jump into Babyeater space, then set off the bomb.
(Otherwise, yes, I would set off the bomb immediately.)
Either way, though, there would seem to be a Prisoner's Dilemma of sorts here. I'm not sure about this, but suppose we could do unto the Babyeaters without them being able to do unto us; that is, suppose we could alter them (even against their will) for the sake of our values. Wouldn't that be a form of Prisoner's Dilemma with respect to other species that don't share our values and are more powerful than us, and could do the same to us? Wouldn't the same metarationality results hold? I'm not entirely sure about this, but...
I'm inclined to think so, which is one reason I wasn't in favor of going to war on the Babyeaters: if the next species that doesn't share our values is stronger than us, how would I have them deal with us? What sort of universe do we want to live in? (The payoff structure is sketched below.)
(Another reason being that I'm highly skeptical of victory in anything other than a bloody war of total extermination. Consider analogous situations in real life where atrocities are being committed in other countries, e.g. female circumcision in Africa; we typically don't go to war over them, and for good reason.)
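For concreteness, here's a minimal Python sketch of that payoff structure. The payoff numbers are invented for illustration; "defect" stands for forcibly rewriting the other species' values, "cooperate" for refraining:

```python
# Illustrative Prisoner's Dilemma over value-modification.
# Payoffs are made-up utilities in the standard PD ordering:
# temptation > reward > punishment > sucker.

PAYOFFS = {
    # (our move, their move): (our payoff, their payoff)
    ("cooperate", "cooperate"): (3, 3),   # both keep their values
    ("cooperate", "defect"):    (0, 5),   # we get rewritten
    ("defect",    "cooperate"): (5, 0),   # we rewrite them
    ("defect",    "defect"):    (1, 1),   # mutual rewriting / war
}

# Naive best response: defection dominates whatever they do.
for theirs in ("cooperate", "defect"):
    best = max(("cooperate", "defect"),
               key=lambda ours: PAYOFFS[(ours, theirs)][0])
    print(f"If they {theirs}: our best move is {best}")

# The metarational point: if both species decide by the same kind
# of reasoning, the real choice is between (C, C) and (D, D), and
# (3, 3) beats (1, 1). Choosing to rewrite the weaker species is,
# in effect, endorsing a policy under which a stronger species
# rewrites us.
```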
Good story! It's not often you see aliens who aren't just humans in silly makeup. I particularly liked the exchange between the Confessor and the Kiritsugu.
Specifically, the point of utility theory is the attempt to predict the actions of complex agents by dividing them into two layers:

1. Values (the utility function).
2. Machinery for turning values into actions (prediction and planning).
The idea being that if you can't know the details of the machinery, successful prediction might be possible by plugging the values into your own equivalent machinery.
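As a concrete illustration, here's a minimal Python sketch of that two-layer scheme; the outcomes, probabilities, and utility numbers are all invented for the example:

```python
# Two-layer prediction: the agent's VALUES are a utility function
# over outcomes; the MACHINERY is generic expected-utility
# maximization. To predict the agent, plug its values into your
# own copy of the machinery.

# Layer 1: values. Toy utility function for a chess player.
utility = {"win": 1.0, "draw": 0.3, "loss": 0.0}

# Our model of the world: each candidate action induces a
# distribution over outcomes (probabilities made up here).
outcome_probs = {
    "sharp_attack": {"win": 0.5, "draw": 0.1, "loss": 0.4},
    "quiet_move":   {"win": 0.2, "draw": 0.7, "loss": 0.1},
}

def expected_utility(action: str) -> float:
    """Layer 2: generic machinery, shared by modeler and agent."""
    return sum(p * utility[outcome]
               for outcome, p in outcome_probs[action].items())

# Prediction: the agent takes the expected-utility-maximizing action.
predicted = max(outcome_probs, key=expected_utility)
print(predicted,
      {a: round(expected_utility(a), 2) for a in outcome_probs})
```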
Does this work in real life? In practice it works well for simple agents, or for complex agents in simple/narrow contexts. It works well for Deep Blue, or for Kasparov on the chessboard. It doesn't work for Kasparov in life. If you try to predict Kasparov's actions away from the chessboard using utility theory, it ends up as epicycles: every time you see him take a new action, you can write a corresponding clause into your model of his utility function, but the model has no particular predictive power.
In hindsight we shouldn't really have expected otherwise; simple models in general have predictive power only in simple/narrow contexts.
But if not - if this world indeed ranks lower in my preference ordering, just because I have better scenarios to compare it to - then what happens if I write the Successful Utopia story?
Try it and see! It would be interesting and constructive, and if people still disagree with your assessment, well, then there will be something meaningful to argue about.
Well, I like the 2006 version better. For all that it's more polemical in style - and if I recall correctly, I was one of the people against whom the polemic was directed - it's got more punch. After all, this is the kind of topic where there's no point in even pretending to be emotionless. The 2006 version alloys logic and emotion more seamlessly.