What is the outcome that you want to socially engineer into existence? What is it that you want the world to realize?
Global Positive Singularity. As opposed to annihilation, or the many other likely scenarios.
You remind me of myself maybe 15 years ago. Excited about the idea of escaping the human condition through advanced technology, but with the idea of avoiding bad (often apocalyptically bad) outcomes also in the mix; wanting the whole world to get excited about this prospect; writing essays and SF short-short stories about digital civilizations which climb to transcendence within a few human days or hours (I have examined your blog); a little vague about exactly what a "positive Singularity" might be, except that it's a future where the good things happen and the bad things don't.
So let me see if I have anything coherent to say about such an outlook, from the perspective of 15 years on. I am certainly jaded when it comes to breathless accounts of the incomprehensible transcendence that will occur: the equivalent of all Earth's history happening in a few seconds, societies of inhuman meta-minds discovering the last secret of how the cosmos works and that's just the beginning, passages about how a googol intelligent beings will live inside a Planck length and so forth.
If you haven't seen them, you should pay a visit to Dale Carrico's writings on "superlative futurology". Whatever the future may bring, it's a fact that this excited anticipation of everything good multiplied by a trillion (or terrified anticipation of badness on a similar scale, if we decide to entertain the negative possibilities) is built entirely from imagination. It is not surprising that after more than a decade, I have become skeptical about the value of such emotional states, and also about their realism; or at least, a little bored with them. I find myself trying to place them in historical perspective. 2000 years ago there were gnostics raving about transcendental, sublime hierarchies of gods, and how mind, time, and matter were woven together in strange ways. History and science tell us that all that was mostly just a strange conceptual storm happening in the skulls of a few people who died like anyone else and who made little discernible impact on the course of events - that being reserved more for the worldly actors like the emperors and generals. Yet one has to suppose that gnosticism was not an accident, that it was a symptom of what was happening to culture and to human consciousness at that time.
It seems very possible that a great deal of the ecstasy (leavened with dread) that one finds in singularity and transhumanist writing is similarly just an epiphenomenal symptom of the real processes of the age. Lots of people say that, of course; it's the capitalist ego running amok, denying ecological limits, a new gnostic body-denial that fetishizes calculating machines, blah blah blah. Such criticisms themselves tend to repress or deny the radicalism of what is happening technologically.
So, OK, there shall be robots, cyborgs, brain implants, artificial intelligence, artificial life, a new landscape of life and mind which gets called postbiological or posthuman but much of which is just hybridization of natural and artificial. All that is a huge development. But is it rational to anticipate: immortality; existence becoming transcendentally better or worse than it is; millions of subjective years of posthuman civilizations squeezed into a few seconds; and various other quantitative amplifications of life as we know it, by large powers of ten?
I think at best it is rational to give these ideas a chance. These technologies are new, this hasn't happened before, we don't know how far it goes; so we might want to remain open to the possibility that almost infinite space and time lie on the other side of this transition. But really, open to the possibility is about all we can say. This hasn't happened before, and we don't know what new barriers and pitfalls lie ahead; and it somehow seems unhealthy to be deriving this ecstatic hope from a few exponential numbers.
Something that the critics of extreme transhumanism often fail to note is the highly utopian altruism that exists within the subculture. To be sure, there are many individualist transhumanists who are cynics and survivalists; but there are also many who aspire to something resembling sainthood, and whose notion of what is possible for the current inhabitants of Earth exhibits an interpersonal utopianism hitherto found only in the most benevolent and optimistic religious and secular eschatologies (those which possess no trace of the desire to punish or to achieve transformation through violence). It's the dream of world peace, raised to the nth power, and achieved because there's no death, scarcity, involuntary work, ageing, or other such pains and frustrations to drive people mad. I wanted to emphasize this aspect because the critics of singularity thought generally love to explain it by imputing disreputable motives: it's all adolescent power fantasy and death denial and so forth. There should be a little more respect for this aspect, and if they really think it's impossible, they should show a little more regret about that. (Incidentally, Carrico, whom I mentioned above, addresses this aspect too, saying it's a type of political infantilism, imagining that conflict and loss can be eliminated from the world.)
The idea of "waking up the world" to the imminence of the Singularity, to its glories and terrors, can have an element of this profoundly unworldly optimism about human nature - along with the more easily recognized aspect of self-glorification: I, and maybe my colleagues and guru figures, am the messenger of something that will gain the attention of the world. I think it can be expected that the world will continue to "wake up" to the dawning possibilities of biological rejuvenation, artificial intelligence, brain emulation, and so on, and that it will do this not just in a sober way, but also with bursts of zany enthusiasm and shuddering terror; and it even makes sense to want to foster the sober advance of understanding, if only we can figure out what's real and what's illusion about these anticipations.
But enthusiasm for spreading the singularity gospel, the desire to set the world aflame with the "knowledge" of immortality through mind uploading (just one example)... that, almost certainly, achieves nothing deeply useful. And the expectation that in a few years everyone will agree with the Singularity outlook (I've seen this idea expressed most recently by the economist James Miller) I think is just unrealistic, and usually the product of some young person who realizes that maybe they can save themselves and their friends from death and drudgery if all this comes to pass, so how can anyone not be interested in it?! It's a logical deduction: you understand the possibilities of the Singularity, you don't understand how anyone could want to reject or dismiss them, and you observe that most people are not singularity futurists; therefore, you deduce that the idea is about to sweep the world like wildfire, and you just happen to be one of the lucky first to be exposed to it. That thought process reflects naivety and unfamiliarity with ordinary psychology. It may partly be due to a person of above-average intelligence not understanding how different their own subjectivity is from that of a normal person; it may also be due to not yet appreciating how incredibly cruel life can be, and how utterly helpless people are against this. The passivity of the human race, its resignation and wishful thinking, its resistance to "good news", is not an accident. And there is ample precedent for would-be vanguards of the future finding themselves powerless and ignored, while history unfolds in a much duller way than they could have imagined.
So much for the general cautionary lecture. I have two other more specific things to say.
First, it is very possible that the quasi-scientific model of mind which underlies so many of these brave new ideas about copies and mind uploads is simply wrong, a sort of passing historical crudity that will be replaced by something very new. The 19th century offers many examples in physics and biology of paradigms which informed a whole generation of thought and futurology, and which are now dead and forgotten. Computing hardware is a fact, but consciousness in a program is not yet a fact and may never be a fact. I've posted a lot about this here.
Second, since you're here, you really should think about whether something like the SIAI notion of a friendly singularity really is the only natural way to achieve a "global positive singularity". The idea of the first superintelligent process following a particular utility function, explicitly selected to be the basis of a humane posthuman order, seems to me a far more logical approach to achieving the best possible outcome than just promoting the idea of immortality through mind uploading, or reverse-engineering the brain. I think it's a genuine conceptual advance on the older idea of hoping to ride the technological wave to a happy ending, just by energetic engagement with new developments and a will to do whatever is necessary. We still don't know if the premises of such futurisms are valid, but if they are accepted as such, then the SIAI strategy is a very reasonable one.
> writing essays and SF short short stories about digital civilizations which climb to transcendence within a few human days or hours (I have examined your blog); a little vague about exactly what a "positive Singularity" might be, except a future where the good things happen and the bad things don't.
The most recent post on my blog is indeed a very short story, but it is the only such post. Most of the blog is concerned with particular technical ideas and near-term predictions about the impact of technology on specific fields: namely the video ...