
wedrifid comments on Politics Discussion Thread February 2013 - Less Wrong Discussion

1 Post author: OrphanWilde 06 February 2013 09:33PM


Comments (146)


Comment author: wedrifid 08 February 2013 04:38:51AM 3 points

The whole world isn't going to listen to the Singularity Institute just because they've got a Friendly AI

'Just' because they've got an FAI? Once you have an FAI (and nobody else has a not-friendly-to-you-AI) you've more or less won already.

We've got to deal with politics eventually.

Apart from being able to protect against any political threat (making persuasion optional, not necessary), an FAI could, for example, upgrade Eliezer to have competent political skills.

The politics that MIRI folks would be concerned about are the politics before they win, not after they win.

Comment author: Rukifellth 09 February 2013 02:29:08AM 0 points

Work done by Lesswrongians could decrease the workload of such an FAI while providing immediate results. If it takes twenty years for an FAI to be developed, that's twenty years in which civilization could move in either direction on the good/bad scale. That could make the difference of an entire year in how long it takes the FAI to implement whatever changes would make society better.

Comment author: [deleted] 09 February 2013 02:48:28AM 1 point

This could be the difference of an entire year that it takes an FAI to implement whatever changes to make society better.

You are not taking AI seriously. Is this intentional?

A superintelligence could likely take over the world in a matter of days, no matter what people thought. (They would think it was great, because the AI could manipulate them better than the best current marketing tactics, even if it couldn't just rewrite their brains with nano.)

It may not do this, for the sake of our comfort, but if anything were urgent, it would be done.

Comment author: Jack 11 February 2013 06:44:35PM 1 point

A superintelligence could likely take over the world in a matter of days, no matter what people thought. (They would think it was great, because the AI could manipulate them better than the best current marketing tactics, even if it couldn't just rewrite their brains with nano.)

While I wouldn't dismiss this possibility at all, you seem a little overconfident. The best current marketing tactics can shift market share by a percentage point or two, or maybe make a half-percentage-point difference in a political campaign. Obviously better-than-the-best is better. But given ethical limitations on persuasion tactics and general human suspicion of new things, "days" seems pretty optimistic (and twenty years pessimistic). There is no good reason to think the persuasive power of marketing is at all linear in the intelligence of its creator. We ought to have very large error bars on this kind of thing, and while the focus on these fast-takeover scenarios makes sense for emphasizing risk, that focus will make them appear more likely to us than they actually are.