I've mentioned before that one of the things that bothers me about virtually all communities is that they consistently wish to say, "Our community is good and yours is bad," and this leads to interpreting as opposed things that should be interpreted as being along the same lines. It seems to me that the irritation expressed in some of the comments here at the idea of associating religion and transhumanism is an expression of this tendency, and I therefore find myself irritated by that irritation.

The post links to an essay by a former Christian who says that she found herself attracted to transhumanism for reasons similar to those for which she was originally attracted to Christianity. Saying that there could not possibly be any similarities of motives or conclusions or consequences or character, because my community must be as distant as possible from yours, does not respect that person's experience, which is just as real even if it is not universal.

Yeah, as if different groups of people couldn't independently have an honest desire to improve their living conditions, even if they have wildly different models of the world, and consequently different strategies for achieving that outcome.

Religious people are irrational, but not evil.

It always annoys me when people try to evaluate ideas from their social context rather than their content. It may or may not be true that transhumanism is a "secular outgrowth of Christian eschatology" or "essentially an argument for intelligent design", but whether it is or not, you should still be able to evaluate it as a prediction about the future based on our knowledge of today. It's not as if AIs, which should work according to the laws of physics, are suddenly going to crumble to dust if they're made by people of the wrong religion.

I think there's a rule-of-thumby reading of this that makes a little bit more sense. It's still prejudiced, though.

A lot of religions have a narrative that ends in true believers being saved from death and pain, after which people aren't going to struggle over petty issues like scarcity of goods. I run into transhumanists every so often who have bolted these ideas onto their narratives. According to some of these people, the robots are going to try hard to end suffering and poverty, and they're going to make sure most of the humans live forever. In practice, that goal is dubious from a thermodynamics perspective, and even if it weren't, some of our smarter robots are currently doing high-frequency trading and winning ad revenue for Google employees. That alone has probably increased net human suffering -- and they're not even superintelligent.

I imagine some transhumanism fans must have good reasons to put these things in the narrative, but I think it's well worth pointing out that these are ideas humans love aesthetically. If it's true, great for us, but it's a very pretty version of the truth. Even if I'm wrong, I'm skeptical of people who try to make definite assertions about what superintelligences will do, because if we knew what superintelligences would do, then we wouldn't need superintelligences. It would really surprise me if it looked just like one of our salvation narratives.

(obligatory nitpick disclaimer: a superintelligence can be surprising in some domains and predictable in others, but I don't think this defeats my point, because for the conditions of these people's narrative to be met, we need the superintelligence to do things we wouldn't have thought of in most of the domains relevant to creating a utopia)

This argument notably holds true of FAI / control theory efforts. Proponents of FAI assert that heaven-on-Earth utopian futures are not inevitable outcomes, but rather low probability possibilities they must work towards. It still seems overtly religious and weird to those of us who are not convinced that utopian outcomes are even possible / logically consistent.

If you're not convinced that utopian outcomes are even possible, isn't that completely compatible with the claim that utopian futures are not inevitable and low-probability?

Let's try this on religion:

If you are not convinced that heaven is even possible, isn't that completely compatible with Jesus saying that "narrow is the way, which leadeth unto life, and few there be that find it"?

Ummm... not quite.

"low probability possibilities they must work towards"

It's weird to devote your life to something that is impossible / logically inconsistent.

This reminds me of how some people notice that superintelligent AI is just another version of the Golem... but the same people fail to notice that the ordinary computers around us are already just another version of the Golem.

Which further reminds me of Chesterton writing:

Students of popular science [...] are always insisting that Christianity and Buddhism are very much alike [...] The reasons were of two kinds: resemblances that meant nothing because they were common to all humanity, and resemblances which were not resemblances at all. The author solemnly explained that the two creeds were alike in things in which all creeds are alike, or else he described them as alike in some point in which they are quite obviously different. [...] it was gravely urged that [Christ and Buddha], by a singular coincidence, both had to do with the washing of feet. You might as well say that it was a remarkable coincidence that they both had feet to wash.

Is there actually a version of the Golem tale where AI-risk is a theme? I had a look once and I couldn't actually find a version where the Golem fastidiously follows its instructions beyond their intended meaning. Perhaps people are just confusing it with The Sorcerer's Apprentice?

Quite possibly; in which case I would also belong to the set of confused people.

Until I actually looked into this, so was I. In my case I think it's Terry Pratchett's fault: in Feet of Clay he describes golems as being prone to continue with tasks forever unless told to stop.

From the MIRI paper "Intelligence Explosion and Machine Ethics":

Let us call this precise, instruction-following genie a Golem Genie. (A golem is a creature from Jewish folklore that would in some stories do exactly as told [Idel 1990], often with unintended consequences, for example polishing a dish until it is as thin as paper [Pratchett 1996].)

(The "Idel" reference goes to Idel, Moshe. 1990. Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. SUNY Series in Judaica. Albany: State University of New York Press.)

It always annoys me when people try to evaluate ideas from their social context rather than their content.

But are you really evaluating the content of transhumanism from outside your social context? Most transhumanists are humanists, and thus can trace their philosophical lineage back through the Enlightenment, the Protestant Reformation, Catholic monks translating Aramaic texts into Latin, Zoroastrians and Muslims translating Greek texts into Aramaic, and Hellenistic post-Socratic philosophers writing their ideas down in reaction to pre-Socratic ideas (and this is just where the paper trail ends). All of that context has helped shape modern humanism, and through that context humanists have notions of what they consider epistemologically sound and what values they support. These influence how humanists evaluate the content of transhumanism.

At best we might say that because transhumanism was developed by humanists, the humanist interpretation of transhumanism is privileged because it gives perspective on the origins of the ideas, yet that doesn't mean we can't find other contexts in which to make sense of transhumanism. To deny them, or even just be annoyed by them, is to exert pressure against the very process that generated transhumanism in the first place: successive reinterpretation and expansion of ideas that have their origins in pre-Socratic Hellenism.

There is no way to consider transhumanism, or any idea, outside of a context; to do so is to blind oneself to the lens through which one sees the world.

and this is just where the paper trail ends

If I remember this correctly, writing itself -- without which there could be no paper trail -- was invented by the Phoenicians.

Phoenicians also invented money. Peter Thiel has a lot of money, and he supports transhumanism. He also supports Donald Trump.

...just adding more context...

My sentence

It always annoys me when people try to evaluate ideas from their social context rather than their content.

contains a grammatical ambiguity: the first "their" could refer to the people or the ideas. I meant it to refer to the ideas. I'm not asking people to stop using their own social norms when they judge ideas. I am saying that the society from which an idea originated is irrelevant to judging the truth of that idea. (At least once you've fully understood what the idea is. Before that, you might need to understand its context in order to resolve ambiguities in the description of the idea.)

So I'm not claiming that I'm not biased by my cultural heritage (although of course I aspire to be unbiased); I'm just saying that transhumanism shouldn't be attacked or defended based on its heritage.

Will the machine deity require you to accept Christ as your savior before letting you become a transhuman? No? Then why the hell is that written in the bronze age book that you claim knowingly predicted this outcome?

The classic idea of heaven looks like a post-scarcity, post-death society because that's what we've always imagined would be nice. It's not divine prophecy, just something common to humanity, and we've done a lot of ignoring religious "answers" to get there. I resent that religious people would try to co-opt all this work and at this late date contemplate the idea of a digital entity with a "soul."

I resent that religious people would try to co-opt all this work and at this late date contemplate the idea of a digital entity with a "soul."

At the risk of being rude, this sounds more like your problems than theirs. I'm not sure religious transhumanists are even that late to the party: we happen to be part of a community that got there very early and has been slowly prepping the party so it'll be ready when folks arrive. Maybe religious folks want to dance to different music than we do and you might find that annoying, but is that better than no one showing up to the party at all? And if we don't like it we can always go hang out in a room upstairs for a while without leaving, because the music will eventually change. It always does.

My perspective is that religious folk have not been prepping the party. Scientists have been trying to get some instruments together to make some music, but the religious people keep grabbing guitars, smashing them, and calling it music. Then, when the music finally starts up despite all the smashed instruments, religious folks say "oh hey, that's what we were trying to do, you're welcome everybody."

As soon as something conveniently fits the religious narrative (appropriately tortured beyond its original construction), it gets incorporated. I find that frustrating, as it should instead shatter the narrative and reveal it for the useless pile of dogma that it is.

Most scientists are not extropian in any sense - so if they have been "prepping the party" it was not deliberate. Are you considering scientists and religious folk as disjoint sets?

This is a good point. It's hardly surprising that the utopia we fantasise about is the same as the one we try to create.

Then why the hell is that written in the bronze age book that you claim knowingly predicted this outcome?

The New Testament is not really a Bronze Age book. Wikipedia states that the Bronze Age ended in the Near East around 1200 BC.

For comparison, here are Robin Hanson's thoughts on some Mormon transhumanists: http://www.overcomingbias.com/2017/04/mormon-transhumanists.html