I've been planning to try to watch it early, or at least on launch day, and then write up some blog post about it like "Transcendence vs. Superintelligence Theory" or something, to compare it with e.g. the view in Bostrom's forthcoming book (which I've read).
A proposal for a step-by-step MIRI PR strategy:
(1) Decide on a person who will speak to the public about the film on MIRI's behalf. Ideally someone who's comfortable in front of a TV camera.
(2) Email the producers of Transcendence. Tell them you are MIRI, a nonprofit that works on the issue of unfriendly AI risk, and that you didn't like that Erik Sofge dismissed AI risk in his Popular Science article.
You want to speak to the press about the film, but you want to know what the film is actually about, so it would be nice if the producers of Transcendence would show you the film before the release date. Offer to fly to whatever location they might want to show the film at. Mention that you will attempt to bring along a journalist for an exclusive story about MIRI's reaction to the film.
This proposal should be a no-brainer for someone producing a film who wants more PR for it.
(3) Once you have that agreement, pitch Wired an exclusive opportunity to come along to the screening and cover the reaction of MIRI's spokesperson.
Again, I think it should be a no-brainer for Wired to send a journalist for such a purpose if you pitch it right. In ca...
Something to be aware of is that, as with the novel Zendegi (which had the "benign superintelligence bootstrap project" and "overpowering falsehood dot com"), there are likely to be some specific allusions to transhumanist communities, although the visual media supports more allusive mechanisms based on sounds and appearances and emotional/social gestalts. The allusions in Zendegi were quite ungenerous. I'm not sure what kind of critical or positive environment would be good in terms of their expected positive or negative world outcomes, but I imagine that being able to respond to them could be important for the organization and for people.
Off the top of my head I can see stuff just in the trailer and brief summary.
The protagonist's name is "Will Caster" which resonates a bit with the way futurists semi-often give themselves names that can function as priming/identity hacks like Max More, Will Newsome, FM-2030, etc.
There will almost certainly be scenes that try to replicate the vibe of a Singularity Summit. The trailer has some "guy with big wavy hair standing on a stage speaking to a packed audience" visuals but I don't know how much ot...
I agree this movie will have an impact. My guess is it will be polarizing, which is not the worst thing - right now the area of AI risk suffers more from lack of attention than from specific opposing opinions. Having these themes enter the public memesphere, even through entertainment, seems useful.
As far as MIRI commenting on it, I think it's too early and would seem to an intelligent observer as jumping on the bandwagon attention whoring - the movie is not even out yet. I imagine after the release there will be a flurry of PopSci-type articles, at which point weighing in might be appropriate and well received.
[MIRI] would seem to an intelligent observer as jumping on the bandwagon attention whoring
You say that like it's a bad thing.
I imagine after the release there will be a flurry of PopSci-type articles, at which point weighing in might be appropriate and well received.
That has to be prepared for (in advance -- that is what preparation is). If a journalist asks MIRI for comment, they need to have a comment ready.
Status wise it's a bad thing.
In what alternate reality? Every prominent politician, and every substantial business or other organisation, has people whose whole job is what you scorn as "attention whoring". It's more usually called something like "publicity", "press department", or "outreach", and I hope MIRI spends a significant number of man-hours on it. Telling people about yourself is a fundamental prerequisite for people knowing about you and whatever cause or business purpose you are trying to pursue. (There are ways of doing this badly, but the surest way of doing it badly is to be resentful at having to do it at all.)
So, MIRI needs to have more than just a comment ready. They need to be able to supply anyone who asks with a whole position paper relating to the film, and where relevant, work references to it into their publicity material, at such time as the actual content of the movie becomes clear. (And there are ways of doing this badly, but etc.)
The journalist might never come knocking, but when opportunity knocks, it is too late to prepare for it. Not doing this for fear of "attention whoring" and people thinking them...
I agree it would be early to comment now, but much gets written during the marketing run-up to release. By that time, not only after release, an easily findable comment can make a lot of impact. Compare how Wikileaks' comments on "The Fifth Estate", published before the release, influenced what got written about that movie.
And whoever writes something pre-release will be findable when people look for somebody to weigh in post-release.
The film's trailer strikes me as being aware of the transhumanist community in a surprising way, as it includes two themes that are otherwise not connected in the public consciousness: uploads and superintelligence. I wouldn't be surprised if a screenwriter found inspiration from the characters of Sandberg, Bostrom, or of course Kurzweil. Members of the Less Wrong community itself have long struck me as ripe for fictionalization... Imagine if a Hollywood writer actually visited.
I suspect THE POWER OF LUVVVVV will come at some point as something only humans can do, as opposed to heartless machines.
Uploads first? It just seems silly to me.
The movie features a luddite group assassinating machine learning researchers - not a great meme to spread around IMHO :-(
Slightly interestingly, their actions backfire, and they accelerate what they seek to prevent.
Overall, I think I would have preferred Robopocalypse.
The premise doesn't sound particularly original:
http://tvtropes.org/pmwiki/pmwiki.php/Main/AIIsACrapshoot
http://tvtropes.org/pmwiki/pmwiki.php/Main/BrainUploading
http://tvtropes.org/pmwiki/pmwiki.php/Main/ImmortalityImmorality
"Because if there isn't, they'll dismiss the danger of AI like Erik Sofge already did in an early piece about the movie for Popular Science, and nudge their readers to do so too. And that'd be a shame, wouldn't it?"
I would much rather see someone dismiss the dangers of AI, than misrepresent them, by having a movie in which Johnny Depp plays "a seemingly megalomaniacal AI researcher". This gives the impression that a "mad scientist" type who creates an "evil" AI that takes over the world is what we should worry about...
I agree. Here's a quick brainstormed statement, just to get the ball rolling:
"This film portrays an implausible runaway unfriendly AI scenario, trivializing what is actually a serious issue. For depictions of much more plausible runaway unfriendly AI scenarios, visit [website], where the science behind these depictions is also presented."
Perhaps 'trivializing' is not the best word, as it might make the casual reader (who has absolutely no idea of the real dangers of AI risk) think we're taking ourselves too seriously. Consider this revised statement:
"The film is an entertaining look at a runaway AI scenario. While the film's story is probably implausible, it is plausible that a runaway unfriendly AI scenario could occur in real life. An in-depth discussion of this issue is given on [website]."
With an A-list cast and big budget, I contend this movie is the front-runner to be 2014's most significant influence on discussions of superintelligence outside specialist circles.
Probably. It's not entirely true that the public's awareness and concern for asteroid risk was created by Armageddon, but I bet it's 4/10 true.
Bonus link: Could Bruce Willis Save the World?
Mr. Sofge's review says this,
"Being smart doesn’t guarantee malice, or a callous urge to enslave or destroy less-capable beings. Those are human traits, assigned to the idea of intelligent machines with the kind of narcissism only humans can muster."
which sounds like an accusation of typical mind fallacy when we warn that AI may turn unfriendly.
But then he says this,
"The machines become smarter, but not superior. They’re the ultimate intellectuals—far too busy with discourse and theory to even consider something as superfluous as enslaving or supplanting their creators."
which sounds like he doesn't really get what rationality is about.
Previous discussion: http://lesswrong.com/lw/h1b/link_transcendence_2014_a_movie_about/
I started by thinking that one should wait till the film comes out. Now I think that's a bad idea. By the time the film comes out, relevant journalists will read the articles that are already published. If those contain quotations from a MIRI person, then that person is going to get contacted for further interviews.
It might also be worth thinking about whether one can place a quote from a MIRI person in the Wikipedia article for the film.
There's a big Hollywood movie coming out with an apocalyptic Singularity-like story, called Transcendence. (IMDB, Wiki, official site) With an A-list cast and big budget, I contend this movie is the front-runner to be 2014's most significant influence on discussions of superintelligence outside specialist circles. Anyone hoping to influence those discussions should start preparing some talking points.
I don't see anybody here agree with me on this. The movie has been briefly discussed on LW when it was first announced in March 2013, but since then, only the trailer (out since December) has been mentioned. MIRI hasn't published a word about it. This amazes me. We have three months till millions of people who never considered superintelligence are going to start thinking about it - is nobody bothering to craft a response to the movie yet? Shouldn't there be something that lazy journalists, given the job to write about this movie, can find?
Because if there isn't, they'll dismiss the danger of AI like Erik Sofge already did in an early piece about the movie for Popular Science, and nudge their readers to do so too. And that'd be a shame, wouldn't it?