As part of public relations, I think it's important to keep tabs on how the Singularity and related topics (GAI, FAI, life extension, etc.) are presented in the culture at large. I've posted links to such things in the past, but I think there should be a central clearinghouse, and a discussion-level post seems like the right place.

So: in the comments, post examples of references to Singularity-related topics that you've found, ideally with a link and a few sentences' description of what the connection is and how it's presented (whether seriously or as an object of ridicule, for instance). 

 

There should probably be a similar post for rationality references, but let's see how this one goes first.


Too many SMBC comics to get all of them in one post, but here are four recent ones:

#1 The Fermi Paradox is resolved with reference to wireheading.

#2 About mind uploading.

#3 A different kind of singularity, and (naive) Fun Theory.

#4 Making fun of aging singularitarians.

  • #2041 has exponentially increasing human lifespan as a background assumption.

  • #2070 argues that we'll never completely leave behind the regrettable parts of our past.

  • #2072, #2116's coda and #2305 are on how societies fail to transition to post-scarcity.

  • #2123 relates to timeless reasoning.

  • #2124 looks like a malicious simulator intervening in reality to cause a negative singularity while making it appear to be humans' fault. LW thread here.

  • #2125 is on "Why truth?" (throwing it in since there's no rationality-in-the-zeitgeist thread).

  • #2128 references the age/grief relationship.

  • #2138 portrays the increasing severity of technological risks as technology advances.

  • #2139 could be read as portraying people's discomfort with rationally analyzing their relationships without applause lights.

  • #2143's last panel is a counterpoint to this OB post.

  • #2144 is on the future as a time of increasing normalcy.

  • #2175 uncritically presents standard free-will confusion.

  • #2184 tries to counter the gerontocracy argument against immortalism.

  • Not sure I get #2186, but it has its own thread.

  • No comment on #2191 or the last panel of #2196.

  • #2203 is yet another(?) simulation scenario.

  • #2204 relates to Moravec's paradox and the third observation here.

  • #2211 uncritically presents standard free-will and quantum-brain confusion.

  • #2236 has eternally static human lifespan as a background assumption; so does #532 to a lesser extent.

  • #2286 portrays an almost-Friendly optimization process, and has a thread here.

  • #2289 is on the inexhaustibility of fun and/or its converse.

  • #2290 mocks the absurd idea that medical expert systems could be effective.

  • #2298 mocks the failures and/or anti-epistemologies of mainstream academic philosophy.

  • No comment on the argument for mortalism in #2299.

  • #2300 warns against a particular type of pseudorationality.

  • No comment on #2312.

  • #2398 is on naively extrapolating exponential trends.

  • #2401 presents a paradox of causal decision theory.

  • #2418 involves prediction markets.

Older comics:

I recently started a theoretically humorous webcomic about the Singularity entitled Singuhilarity. It's poorly drawn and a little rough in places, but it touches on a number of LessWrong-type subjects as well as some pretty standard science fiction tropes.

Here's the first comic and here's the latest.

I just read the archive and was amused several times. You should continue this project.

I don't think I can ask for anything better than being amused several times. Thanks. I will indeed continue the project.

Followup to previous comment: I feel like this link from Reddit may apply.


Read first comic, said to self "This is terrible" halfway through, didn't read further. There may be room for improvement.

May be room for improvement? Well that's an understatement. ;)

The Big Bang Theory, Episode 402.

Sheldon (the most socially atypical character on a show full of them) plans a program of life extension so that he will last until the Singularity, which he projects to occur around 2060 and to chiefly involve the uploading of human consciousness into machines (his roommate Leonard describes the latter as becoming a "freakish self-aware robot"). By the end of the episode Sheldon seems to have given up on the plan as too inconvenient/inadvertently dangerous.

Discussed in more detail by Greg Fish here.

T-Rex's birthday is tomorrow which happens to be MY birthday as well! Unlike T-Rex, however, I am not all emo about aging AND also unlike T-Rex, I have discovered a way to live forever: I will give you a hint, it involves liquid nitrogen and the boundless expanse of interstellar space and also entropy reversing somehow

From the Dinosaur Comics news post for 2010 October 19.

Has anyone claimed saving the world yet?

-- Ryan North, Twitter

I don't feel like starting/finding a conversation elsewhere about the comic, but for the record, I'm still unconvinced by the arguments I've heard against quantum (or modal-realist, or eternal-recurrence) immortality. (I haven't read the paper linked here, though.) I realize few of the "me"s that would result from that kind of transition would have much in common with me-today, but I think I can live with that. It's harder to live with the fact that a lot of me will be as badly off as factory-farmed animals or worse, but there's not much I can do about that beyond trying to reduce the measure of conditions like that in general, which I have limited ability and will to do.

I also hold out hope for some kind of repeated quantum suicide for "free" energy after we run out, or (slightly more dubiously) a Permutation City scenario.

I'm not particularly optimistic about unknown physics, or (edit 10/22) convincing the simulators to let us out, and (edit 11/25) the Omega Point is of course bunk.

Today's xkcd: Future Timeline.

Edits:

  • #888: uncritically-presented bad Fun Theory.
  • #893: uncritically-presented straw-Vulcan argument for space travel.
  • #894 references the increasingly visible progress being made in narrow AI.

Questionable Content mentioned the singularity in passing circa last night. The phrase "according to the Internet" made me think that there was some particular exaggerated article or press release making the rounds that it was referring to, but I couldn't find it and I was encouraged to refrain from asking directly.

And now it's mentioned Friendly AI directly. Has Jeph Jacques been reading Eliezer?

Tangential, but this old XKCD is essentially a demonstration of the importance of Friendly AI.

I think that the poster in question was assuming that you were unfamiliar with the Singularity in general, rather than enquiring as to the nature of the Singularity that occurred in-comic in particular.

Or, possibly, that you were silly enough to confuse the QC world with our own; they've had Strong AI since the start of the comic, after all, along with a superhero who delivers pizzas, and one of the cast grew up on a space station. Needless to say, it only appears similar to ours since we're just seeing the lives of a small circle of hipsters who run a coffee shop and an office-bitch-turned-librarian. I'd imagine that, say, their US Military probably looks quite different to ours.

they've had Strong AI since the start of the comic

Incidentally, I only just noticed that the latest comic's title is They've Had AI Since 1996. IIRC there was a calendar shown at one point implying(?) that it was 2004, but that's probably contradicted elsewhere, even accounting for transplanted pop culture.

Elsewhere in the comic: How did sentient machine intelligence come about?


I'm not one of the posters in that thread.

Ah. You sort of implied that you were. No worries, then.

Cracked.com's Jason Iannone, on the game-changing nature of AGI versus its portrayal in fiction:

Cortana, the Chief's AI sidekick [in Halo] [...] can control entire cities by remotely breaking into their battle nets and taking over their weaponry, without anyone ever setting foot in the area. Which is another way of saying that she really doesn't need the Chief at all.

[...] Master Chief, for all his badassitude, is really just a grunt whose life is being put unnecessarily at risk, although a video game featuring him doing nothing but sitting around eating tacos in the cafeteria while Cortana does all the work would probably not have sold as well.

From 6 Video Game Heroes Made Useless By Supporting Characters. High Challenge is relevant to the closing point.

Why the Singularity isn't going to happen on io9.com. Does that count as "in the culture at large"? Anyway, it's a really, really weak article (apparently typical of io9) based on the idea that "singularity-level" technologies in the past (a misunderstanding in itself), such as the industrial revolution, didn't lead to the paradise on earth that some people said they would. The gap between "it won't be that great" and "it won't even happen" is never bridged. Could be summed up as saying "it's geek religion, man".

Reddit comment thread in progress:

depending on when exactly we achieve this, this could be the best time to be born ever, because it will be the absolute earliest anybody will have achieved immortality. Someone born within 20 years of this moment could one day be the oldest human, sentient, or even living being in the Universe.

The comments are currently split between arguing and agreeing with this. So far, no mention of cryonics. One post presents a technical argument that "our current knowledge/technology is centuries away" from mind uploading/whole-brain emulation.

Machine of Death, a recently released short-fiction anthology that hit #1 on Amazon, has a cover quote from Cory Doctorow containing the sentiment "Makes me wish I could die, too!".

In one of the stories, "Flaming Marshmallow", "zvyyraavhz fcnpr ragebcl" vf vagrecergrq nf zrnavat gur crefba jba'g qvr gvy gur arkg zvyyraavhz, juvpu vf frra nf tbbq arjf engure guna gur greevoyr arjf vg zvtug or sbe fbzrbar rkcrpgvat cebcre vzzbegnyvgl gb unccra ol gura.

So far I haven't seen any of the stories deal with how the machine affects the cryonics movement (start testing rats and then putting them in suspension, and try to either fool the machine or get a prediction that doesn't match the cause of deanimation?) or physics (strong evidence against many-worlds?), or how it deals with mind copying (does it arbitrarily distinguish between the "original" and copies, thereby giving people a loophole to escape their deaths, or do all the copies end up dying in similar ways?), merging or reconstruction, or whether there's a large-scale study of the effects of praying and making sacrifices to the obviously intelligent Predictor. On the other hand I've only just started the book, and there's already been talk of another volume, so I hold out some hope.

The anthology's concept seems to rule out an imminent positive singularity, since you'd expect to have over a century's advance knowledge when predictions started reading "end of the universe"; but I wonder if an FAI could make a deal with the Predictor by precommitting to kill everyone in their appointed ways just before the universe ended. If the prediction required a person to be dead by a particular date, the AI would want to figure out how much of a person could be preserved while still having the Predictor consider them dead. Or maybe the Prediction Enforcer kills them in their appointed way heedless of the FAI, and the only difference the AI makes is in making it impossible for the Enforcer to disguise its actions as the arbitrary happenstance of life. (Back in 2006 I figured that was the only real possibility, since information going backward in time allows paradoxes; but now I see both as plausible.)

OT: I'm also curious what happens when you mix multiple people's blood. Or if you developed the technology to transplant brains between bodies - does the prediction follow the body or the brain? What if you could clone a brainless body and test its blood before and after installing a brain? More generally, by what method does the machine decide to which person a particular blood sample "belongs", and what are the edge cases and ambiguities of that method?

The editors are currently accepting submissions for Volume 2 (deadline July 15), making me wish I had a story to submit exploring some of those questions.

The AI-related sci-fi series Caprica was recently cancelled, following in the footsteps of The Sarah Connor Chronicles and Odyssey 5. The last five episodes are planned to air sometime in 2011.

I haven't watched the show, but comments I've seen on it include:

Genii Lodus, Stardestroyer.net, after a fairly harsh recap of the pilot:

The only interesting story this series might have had to tell - how Cylons come about has pretty much been done in the pilot- a 16 year old girl developed AI in her spare time outside of school and she was put into a magical plot device processor and plugged into a robot. (snip) I suppose if they'd had an actual progression towards developing true AI then it might well have looked too much like the Sarah Connor Chronicles but given both settings feature the significant plot of man builds AI which goes bad and then nukes man to fuck there would always be comparisons.

JoCoLa, Reddit:

it's taking its time with pretty complex subject matter- every movie or show about A.I. takes place after it was created, this is the only show I know of that addresses how it comes into being.

In another SDnet thread:

...it had the wrong premise. It was "corporate, terrorist and gangster culture with a VR twist" instead of "this is how the cylons came to be."

I've heard similar descriptions elsewhere: that the AI-related parts are there, but diluted by frequently non-sci-fi plotlines.

Earlier this year, the Reddit thread "Hey Reddit, how do you think the human race will come to an end?" received a fairly elaborate Singularitarian reply. Although it might be unfair to generalize given the context, the author seems to take a basically fatalist stance, assuming that whoever creates AIs will program them to share our values.

The post currently has about 1165 karma (not necessarily representing votes of agreement), plus at least a few hundred replies taking probably every major position on the subject. This reply links to Robin Hanson's article If Uploads Come First.

I was disappointed that this subthread ended with the rhetorical question "What current AI projects aren't working with brains?" unanswered.

(Edit 11/25: corrected the assumption that the number of upvotes displayed by the script I had installed was accurate.)

The half-hour TV series "Sci Fi Science: Physics of the Impossible" recently aired an episode on AI risk, "A.I. Uprising". This is the airing schedule. It doesn't seem to be available online yet.

Another show, "10 Ways", aired an episode "10 Ways the World Might End". I haven't watched it either, but apparently "Invasion of Grey Goo" and "Robots Inherit the Earth" are two of the ten.

Via Cyan: John Scalzi blogged this semi-humorous story of a dubiously-friendly intelligence explosion, going into more detail than most about the path from intelligence to real-world power.

A 2006 Overcompensating post talks about the absurdity of how widely ignored existential risks are.


There's an unpleasant comedy sketch show take on cryogenic preservation here: http://www.youtube.com/watch?v=g7Lzr3cwaPs

Plot: a cryogenics company goes bust, and the bodies it stores are bought cheaply and revived to make low-budget TV with no concern about treating the revived people badly.

Relevance: what if cryogenic preservation works, but the plan of waking up in a better future doesn't?


Yesterday's SMBC presents SIAI-style uFAI fears as essentially a self-fulfilling prophecy.


SIAI-style uFAI fears

That's quite a stretch.

Admittedly the comic seems to assume malevolence rather than the more likely indifference...but it's still a comic about a self-improving superhuman intelligence that destroys humanity.