
chaosmage comments on A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk - Less Wrong Discussion

34 Post author: chaosmage 07 January 2014 05:48PM


Comments (84)


Comment author: chaosmage 08 January 2014 02:13:14PM *  0 points [-]

A Wiki page for an A-list science fiction movie can get 10,000 views per day before release; traffic peaks immediately after release and then slowly tapers off (Example) until it flatlines at around 1,000/day (Example). For comparison, the MIRI page gets about 50/day and Technological singularity gets about 2,000/day.

So yeah, that'd be an excellent place to link to lukeprog's comment from.

I would expect the Wikipedia page to be tightly monitored by the film's marketers, so any critical comment would have to fully meet Wikipedia's relevance criteria in order to survive a series of edits, and a bunch of us should keep putting it back in if it gets removed.

Comment author: V_V 08 January 2014 04:02:23PM 6 points [-]

Please don't use Wikipedia for advertisement/propaganda.

Comment author: ChristianKl 08 January 2014 04:36:15PM 1 point [-]

There's a fine line between propaganda and adding meaningful content that points the people who read the article to the right resources.

Comment author: David_Gerard 08 January 2014 04:53:05PM 6 points [-]

Wikipedia:Conflict of interest

Please don't do this.

Comment author: ChristianKl 08 January 2014 05:18:39PM *  -2 points [-]

Could you make the case on the basis of utilitarian morals?

By the way, I substantially disagree with the Wikipedia policy as it stands. It prevents me from removing mistakes in cases where I have better information than some news reporter who wrote something that's simply wrong. I think Citizendium's policy on the matter was better.

Comment author: David_Gerard 08 January 2014 05:28:38PM 5 points [-]

Could you make the case on the basis of utilitarian morals?

All spammers can justify spamming to themselves.

I think citizendium policy on the matter was better.

Funnily enough, one of these works and one is dead.

Comment author: ChristianKl 08 January 2014 05:49:00PM 0 points [-]

Funnily enough, one of these works and one is dead.

If you claim that Wikipedia works in the sense that it effectively prevents interested parties from editing articles, I think you are wrong.

I think Wikipedia invites interested parties to edit it by providing no open means for them to get errors corrected.

Comment author: [deleted] 08 January 2014 06:32:32PM 3 points [-]

If you claim that Wikipedia works in the sense that it effectively prevents interested parties from editing articles, I think you are wrong.

I think he means that Wikipedia unlike Citizendium has managed to create a usable encyclopaedia.

Comment author: ChristianKl 10 January 2014 12:24:45PM 0 points [-]

I think he means that Wikipedia unlike Citizendium has managed to create a usable encyclopaedia.

By making it easy for people to spam it. There are various reasons why Citizendium failed; I'm not claiming that it was perfect overall.

Comment author: ChristianKl 08 January 2014 05:49:44PM *  -2 points [-]

All spammers can justify spamming to themselves.

That's no utilitarian argument. I don't see why it should convince me at all.

Take it as a trolley problem. There are important issues where people die, and there are issues where one just acts out tribal loyalty. In this case I see no good reason for tribal loyalty, given what's at stake.

Comment author: Lumifer 08 January 2014 06:02:23PM 3 points [-]

There are important issues where people die

Like attempting to do a PR campaign for a non-profit via Wikipedia by piggybacking onto a Hollywood big-budget movie..?

Comment author: ChristianKl 09 January 2014 12:29:49AM *  2 points [-]

Like attempting to do a PR campaign for a non-profit via Wikipedia by piggybacking onto a Hollywood big-budget movie..?

I do consider the effect of shifting public perception of an existential risk issue by a tiny bit to be worth lives. UFAI is on the road to killing people. I think you are failing to multiply if you believe that isn't worth lives.

Comment author: Lumifer 09 January 2014 12:52:26AM 2 points [-]

I do consider the effect of shifting public perception on an existential risk issue by a tiny bit to be worth lives.

So you are ready to kill people in order to shift the public perception of an existential risk issue by a tiny bit?

Comment author: gjm 09 January 2014 12:47:52PM 0 points [-]

It looks as if you're assuming that the overall PR effect of having MIRI or MIRI supporters add links from the Wikipedia article about Transcendence to comments from MIRI would be positive, or at least that it's more likely to be positive than negative.

I don't think that is a safe assumption.

As David says, one quite likely outcome is that a bunch of people start to see MIRI as spammers and their overall influence is less rather than more.

Comment author: David_Gerard 08 January 2014 09:08:51PM *  2 points [-]

People are not going to die if you refrain from deliberately spamming Wikipedia. There should be a Godwin-like law about this sort of comparison. (That's quite apart from your failure to calculate the damage to MIRI's reputation if they become known as spammers.)

Instead, see if you can get organic coverage going. Can MIRI get press coverage about the issue, if they feel it's to their benefit to do so? (This should probably be something directed from MIRI itself.) Get journalists seriously talking about the Friendly AI issue? Should be able to be swung.

Comment author: ChristianKl 09 January 2014 12:26:05AM 2 points [-]

Having the wrong experts on AI risk cited in the article at a critical juncture, where the public develops its understanding of the issue, can result in people getting killed.

If it shifts the probability of a UFAI disaster even by 0.001%, that equals over a thousand lives saved. That's probably a bigger effect than the five people you save by pushing the fat man.

The moral cost you pay by pushing the fat man is higher than the moral cost of violating Wikipedia norms, and the benefit of getting the narrative in the article about AI risk right is probably much more valuable than the handful of people you save in the trolley example.
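The expected-value multiplication behind this claim can be sketched explicitly. The population figure and the 0.001% probability shift are illustrative assumptions (the latter taken from the comment itself), not established numbers:

```python
# Expected lives saved from a small shift in existential-risk probability.
# Illustrative assumptions: everyone alive is at stake, and the probability
# of disaster shifts by 0.001% (i.e. 1 in 100,000).
world_population = 7_000_000_000

expected_lives_saved = world_population // 100_000  # 0.001% of those at stake
print(expected_lives_saved)  # 70000 -- well over the "thousand lives" claimed
```

The disputed step in the thread, of course, is the 0.001% input (including its sign), not the multiplication itself.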

Comment author: private_messaging 10 January 2014 10:24:29PM 3 points [-]

If it shifts the probability of a UFAI disaster even by 0.001%, that equals over a thousand lives saved. That's probably a bigger effect than the five people you save by pushing the fat man.

That kind of makes me wonder what you would do in the situation depicted in the movie (and even if you wouldn't do anything, the more radical elements here, who no longer discuss their ideas online, would).

There's even a chance that the terrorists in the movie are led by an uneducated fear-mongering crackpot who primes them with invalid expected-utility calculations and trolley problems.

Having the wrong experts on AI risk cited in the article at a critical juncture, where the public develops its understanding of the issue, can result in people getting killed.

The world's better at determining who the right experts are when conflict-of-interest rules are obeyed.