I second Tim's post. Dawkins was really onto something with the idea of memes. As we learn more about the intricacies of how dopamine works, we'll come closer to developing a more robust way of talking about memes.
I'm with Manon; I'd take the deal. I'm assuming that there would be some bailout money to help with the freedom-from-pain effort. I wouldn't think that this would make Akon (my only MST3K'ing will be to publicly wince at that name) a traitor to the human race.
Eliezer,
Wouldn't the answer to this and other dystopias-posing-as-utopias be the expansion of conscious awareness a la Accelerando? Couldn't Steve be augmented enough to enjoy both his life with Helen and his newfound verthandi? It seems like multiple streams of consciousness, one enjoying the catlair, another the maiden in distress, and yet another the failed utopia that is suburbia with Helen, would allow Mr. Glass a pleasant enough mix. Some would be complete artificial-life fictions, but so what?
Aaron
Towards the end of the essay, Orwell writes:
"The real objective of Socialism is human brotherhood. This is widely felt to be the case, though it is not usually said, or not said loudly enough. Men use up their lives in heart-breaking political struggles, or get themselves killed in civil wars, or tortured in the secret prisons of the Gestapo, not in order to establish some central-heated, air-conditioned, strip-lighted Paradise, but because they want a world in which human beings love one another instead of swindling and murdering one another. And they want that world as a first step. Where they go from there is not so certain, and the attempt to foresee it in detail merely confuses the issue."
Is there a similar transhumanist objective? Is trying to see everything in detail causing confusion?
Eliezer,
I have to question your literary interpretation of the Culture. Is Banks' intention really to show an idealized society? I think the problem of the Minds that you describe is used by Banks to show the existential futility of the Culture's activities. The Culture sans Minds would be fairly run-of-the-mill sci-fi. With all of its needs met (even thinking), every action the Culture takes, particularly the meddlesome ones, is thrown into question. That's the difference between Narnia and the Culture; Aslan has a wonderful plan for the children's lives, whereas the Culture really has nothing to do but avoid boredom. The Romantic Ideals (High Challenge, Complex Novelty) you espouse are ultimately what is being attacked by what I see as Banks' Existential ones. I think you can take the transhumanism out of the argument and just debate the ideas, since we aren't yet at the point of being infinitely intelligent, immortal, etc.
Aaron
I'm just trying to get the problem you're presenting. Is it that in the event of a foom, a self-improving AI always presents a threat of having its values drift far enough away from humanity's that it will endanger the human race? And your goal is to create the set of values that allow for both self-improvement and friendliness? And to do this, you must not only create the AI architecture but influence the greater system of AI creation as well? I'm not involved in AI research in any capacity, I just want to see if I understand the fundamentals of what you're discussing.
AW,
It seems like it gets into a similar arena as assisted suicide when done prior to death. I'm just trying to think of like-minded groups that could form a useful coalition. Stem cell researchers (and those who support them) also seem like good allies.
To hopefully tie back to the thread, voting is but one facet of influence. What you do the other 364 days (363 including primaries) has a huge impact (hence the usefulness of knowing your representatives).
What would effective cryo policy look like? Or conversely, what in current policy is inhibiting the proper development of cryonics?
Carrier's system still seems to create a circular situation where the smaller parts we reduce larger things into continue, in a sense, to be mental constructions. Electrons behave in ways Einstein called "spooky"; it takes very sophisticated systems to describe them, and even then the descriptions are probabilistic. The important thing is that we're still observing something, whereas the supernatural is basically a collection of spectacular reports that cannot be verified. How much greater would it be to have a third eye that could read people's thoughts, in addition to the amazing eye you described? Would you really be bored by it if you knew there was some bizarre quantum explanation? Heck, New Age types make appeals to all sorts of seemingly reductionist explanations. The problem isn't that the supernatural concept can't be broken down beyond one's mental process; it's that there is no phenomenon beyond the person's mental process in the first place. Carrier's distinction only seems to ensure that you're debating pseudo-scientists instead of supernaturalists.
I recently had an argument at work over Barack Obama's supposedly unverifiable Hawaiian birth certificate. I accessed all of the various websites showing the certificate's authenticity. But everything I said or pulled up on the internet was met with disbelief; there was always some other person my co-worker could point to who said the opposite. This was a debate about the reality of a perfectly mundane, real object. When people stop looking at the objective, outside world, there's unfortunately no philosophical argument that will pull them back in.
Psychology (evolutionary or otherwise) seems to be merging with economics already, what with Kahneman getting the Nobel and all. The problem is how far the intricacies of psychology can be simplified while still modeling behavior accurately. Starting off with a simple model of self-interest seems like it was a pretty good start, all things considered.