I recently read this essay and had a panic attack. I assume that this is not the mainstream of transhumanist thought, so if a rebuttal exists it would save me a lot of time and grief.
I don't believe that it's mainstream transhumanist thought, in part because most people who'd call themselves transhumanists have not been exposed to the relevant arguments.
Does that help? No?
The problem with this vision of the future is that it's nearly basilisk-like in its horror. As you said, you had a panic attack; others will reject it out of pure denial that things can be this bad, or perform motivated cognition to find reasons why it won't actually happen. What I've never seen is a good rebuttal.
If it's any consolation, I don't think the possibility really makes things that much worse. It constrains FAI design a little more, perhaps, but the no-FAI futures already looked pretty bleak. A good FAI will avoid this scenario right along with all the ones we haven't thought of yet.
I distinctly remember, at some point in my teens, realizing that other people sometimes thought like me and I could model their reactions as something more than inscrutable environmental hazards. So there's that.
I'm going to recommend the Muv Luv series of visual novels. This is a military sci-fi story where the lead of a slice-of-life harem/romcom finds himself in a war-torn alternate timeline, and has to learn to pilot giant robots in order to fight alongside battle-hardened versions of the girls from his own world. There are 3 parts: Muv Luv (romcom), Muv Luv Unlimited (darker military focus), and Muv Luv Alternative (very dark military/war). Muv Luv Alternative is the #1 ranked VN on vndb.org, with an average score of 9.28. Apparently it's also the highest ranked on the equivalent Japanese site.
The series has many virtues, but also some major caveats.
I'll start with the good stuff:
- Amazing battle sequences - the animation is very advanced for a VN, and the best battles are among the most tense and exciting I've experienced in any medium.
- Interesting exploration of duty and identity in the politics arc.
- Fun character interactions and humor.
- Many very emotionally compelling moments - this story made me cry more than any other I've experienced (for both happy and sad reasons).
- Pretty smart writing in general - characters don't obviously hold the idiot ball unless they're supposed to be idiots or are acting emotionally at the time.
- Great world-building with well thought-through military/technical elements.
- Great character development for the main character.
- Amazing, film-quality music (more so in Alternative).
Downsides:
- Extremely long overall - roughly 80 hours at minimum for the whole thing.
- The first part ("Muv Luv") is a very silly and fluffy romcom; while it's often amusing, it's overlong and some parts really drag (ugh, the lacrosse). Compared to Alternative's 9.28 average score and #1 ranking, it only got 7.14 and #476, yet it shouldn't really be skipped because it sets up important characters and plot.
- Only available in English via an old pirated version that requires some technical fiddling to get it to work (comes with a readme to explain this).
- Has gratuitous sexual/fanservice elements.
- Some of the tear-jerker moments may come across as heavy-handed or manipulative if you're not in the right mood for them.
Neutral elements:
- Not heavily decision driven - the main routes in Muv Luv are extremely obvious, and Alternative only has one route.
- The prose (and/or translation thereof) is quite simple and straightforward compared to, e.g., Fate/stay night.
I wasn't sure whether to post about this because of the downsides, but I read it a couple of months ago and it's still on my mind enough to feel worth recommending. I'd strongly recommend it if you have the time to spare and like both military sci-fi and emotion-heavy stories.
Content warning: Explicit sex and violence.
On the flip side there's Luv and Hate, an (incomplete, but still good) rewrite of the Muv-Luv Alternative story with a guest protagonist from... Supreme Commander. Including the ACU.
It's well-written, mainly character-focused with a few amusing combat interludes, and oh so gratifying after attempting to read the grimdark original.
It's also a quest. If this doesn't mean anything to you folks... don't worry about it, you can treat it as an ordinary story if you wish.
I recently encountered this very disturbing blog post arguing that there's an "energy trap" in switching to energy sources like wind, solar, and nuclear: even if they have a high enough energy return on energy investment (EROEI), that return is spread out over many years, so the up-front energy investment doesn't pay back quickly enough for an economy trying to move off fossil fuels. I'm not completely sure I buy it: it seems to assume a fairly narrow range of EROEIs, and even small improvements on the efficiency end might avoid the problem. It's also possible that other improvements in energy use (e.g. more efficient cars and better battery technology) could help evade this sort of thing. But I'm not sure enough to evaluate the argument strongly one way or another. Thoughts?
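To make the dynamic concrete for myself, here's a toy version of the argument as I understand it (Python; all numbers are made up for illustration, and this isn't the post's actual model):

```python
# Toy model of the "energy trap": building renewables costs energy now and pays
# back slowly. EROEI, LIFETIME, BUILD_RATE and FOSSIL are assumed numbers.

EROEI = 10        # assumed: energy returned per unit of energy invested
LIFETIME = 40     # assumed: years over which that return is delivered
BUILD_RATE = 5    # assumed: energy invested in new capacity each year
FOSSIL = 100      # assumed: flat fossil energy supply per year

annual_return_per_unit = EROEI / LIFETIME   # each unit invested yields 0.25/year

renewable_output = 0.0
for year in range(1, 16):
    net = FOSSIL + renewable_output - BUILD_RATE   # energy left for everything else
    print(f"year {year:2d}: net energy available = {net:6.2f}")
    renewable_output += BUILD_RATE * annual_return_per_unit  # this year's build produces from next year
```

With these made-up numbers, net energy sits below the 100-unit baseline for the first four years (the energy payback time) and only then climbs above it. Doubling BUILD_RATE makes the dip deeper, which is the trap; it's also why a higher EROEI or a faster-paying technology would weaken the argument.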
We won't run out of coal anytime soon. Coal has other issues, but I think its abundance invalidates his conclusion: coal power plants are pretty cheap, and are already being built.
I'm also more optimistic about politicians. Ten years may be beyond their reelection horizon, but it's not beyond their "This place is going to hell"-horizon.
Short answer:
Donate to MIRI, or split between MIRI and GiveWell charities if you want some fuzzies for short-term helping.
Long answer:
I'm a negative utilitarian (NU) and have been thinking since 2007 about the sign of MIRI for NUs. (Here's some relevant discussion.) I give ~70% chance that MIRI's impact is net good by NU lights and ~30% that it's net bad, but given MIRI's high impact, the expected value of MIRI is still very positive.
As for your question: I'd put the probability of uncontrolled AI creating hells higher than 1 in 10,000 and the probability that MIRI as a whole prevents that from happening higher than 1 in 10,000,000. Say such hells used 10^-15 of the AI's total computing resources. Assuming computing power to create ~10^30 humans for ~10^10 years, the hells would contain ~10^25 human-years, so MIRI would prevent in expectation ~10^-4 × 10^-7 × 10^25 ≈ 10^14 hell-years. Assuming MIRI's total budget ever is $1 billion (too high), that's ~10^5 hell-years prevented per dollar. Now apply rigorous discounts to account for priors against astronomical impacts and various other far-future-dampening effects. MIRI still seems very promising at the end of the calculation.
Okay. I'm sure you've seen this question before, but I'm going to ask it anyway.
Given a choice between
- A world with seven billion mildly happy people, or
- A world with seven billion minus one really happy people, and one person who just got a papercut
Are you really going to choose the former? What's your reasoning?
What evidence do we have about whether cryonics will work for those who die of Alzheimer's?
If you have Alzheimer's, and you want to use cryonics, you should do your very best to get frozen well before you die of the disease.
This is problematic in all jurisdictions I can think of. Even where euthanasia is legal, I don't know of any cryonics organisations taking advantage, and there might be problems for them if they do. I'd very much like to be proven wrong in this.
His publishers say he died of natural causes surrounded by his family with his cat on his lap.
It's a suspiciously pleasant way to go, but I see no reason to look more closely at this. Let's just be happy he got the end he wanted.
[LINK] Terry Pratchett is dead
I'm sure I'm not the only one who greatly admired him. The theme of his stories was progress; they were set in a fantasy world, it's true, but one that was frequently a direct analogy to our own past, and where the golden age was always right now. The recent books made this ever more obvious.
We have lost a great man today, but it's the way he died that makes me uncomfortable. Terry Pratchett had early-onset Alzheimer's, and while I doubt it would have mattered, he couldn't have chosen cryonics even if he wanted to. He campaigned for voluntary euthanasia in cases like his. I will refrain from speculating on whether his unexpected death was wholly natural; whether it was or wasn't, I can't see this having a better outcome. In short...
There is, for each of us, a one-ninth chance of developing Alzheimer's if we live long enough. Many of us may have relatives that are already showing signs, and under the current regime these relatives cannot be cryonically stored even if they wish to try; by the time they die, there will be little purpose in doing so. For cryonics to help with neurodegenerative disorders, it needs to be applied before they become fatal.
Is there anything we can do to change that? Are there countries in which that generalisation is false?
TV and Movies (Animation) Thread
Saenai Heroine no Sodatekata.
It's an anime about... making a game... that appears fully congruent with the contents of the anime...
In short, it seems to be a metacircular anime. It's worth watching because of the way it plays with tropes, and the origin of those tropes; it's marginally annoying in that many of the tropes it plays with are of the harem genre. There may be something more going on in the background, but I haven't watched enough to tell. It may be especially interesting to people who have long experience with Japanese animation.
The first episode is fully representative, so I'd recommend having a look if the above appeals.
In addition to what James said, I'm reminded of the mechanism for changing screen resolution in Windows XP: it automatically resets to the original resolution after X seconds unless you confirm the change, in case you can't see the screen. This is so people can't break their computers in one moment of weakness.
A similar thing could be done with self-modification. Self-destruction would still be possible, of course, just as it is now (I could go jump off a bridge). But just as suicide is something humans build up to, failsafes could be put in place so that self-modification was equally deliberate. A rough sketch of that revert-unless-confirmed pattern is below.
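For concreteness, here's a minimal sketch of the pattern in Python (the callables and the 15-second default are hypothetical stand-ins, not how Windows actually implements it):

```python
import time

def change_with_failsafe(apply_change, revert_change, is_confirmed, timeout=15.0):
    """Apply a risky change, then revert it unless confirmed within `timeout` seconds.

    `apply_change`, `revert_change`, and `is_confirmed` are hypothetical callables
    standing in for whatever the real system does.
    """
    apply_change()                      # make the change immediately
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_confirmed():              # e.g. the user clicked "Keep these settings?"
            return True                 # confirmed in time: the change is kept
        time.sleep(0.1)                 # poll until the deadline
    revert_change()                     # no confirmation in time: undo automatically
    return False
```

The point is just that keeping the dangerous new state requires an explicit confirmation, while the default path leads back to the old state without anyone having to act in a moment of weakness.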
But you are absolutely allowed to break your computer in "one moment of weakness"; it isn't even hard. The reason for that dialog is because the computer honestly, genuinely can't predict if the new screen mode will work.