Larks comments on Politics Discussion Thread February 2013 - Less Wrong Discussion

1 Post author: OrphanWilde 06 February 2013 09:33PM

Comment author: Larks 07 February 2013 09:56:49PM 4 points

The whole world isn't going to listen to the Singularity Institute just because they've got a Friendly AI

If an AGI wants you to listen, you won't have any choice. If it doesn't want you to listen, you won't have the option. The set of "problems for us after we get FAI" is the null set.

Comment author: wedrifid 08 February 2013 04:33:38AM 0 points

The set of "problems for us after we get FAI" is the null set.

Kind of, almost. It could be that we (implicitly) choose to have problems for ourselves.

Comment author: [deleted] 08 February 2013 04:45:30AM 1 point

It could be that we (implicitly) choose to have problems for ourselves.

In case it's not clear: this means the FAI causing problems for us on our behalf, not us literally making a choice we are aware of.

Comment author: wedrifid 08 February 2013 04:52:20AM 0 points

In case it's not clear: this means the FAI causing problems for us on our behalf, not us literally making a choice we are aware of.

(Or 'choosing not to intervene to solve all problems'. The difference matters to some, even if it is somewhat arbitrary.)

Comment author: Rukifellth 07 February 2013 11:49:10PM 0 points

Are you saying that an AGI would distribute relevant information to the public, compelling them to make sound political choices?

Comment author: Desrtopa 08 February 2013 02:41:32AM 0 points

That doesn't sound very likely to me for either a friendly or an unfriendly AI. Letting people feel disenfranchised might be bad Fun Theory, but it would take a lot more than the distribution of relevant information to get ordinary, biased humans to stop fucking up our own society.

As a general rule, I'd say that if a plan sounds unlikely to effectively fix our problems, an FAI is probably not going to do that.

Comment author: ikrase 08 February 2013 11:17:08AM 1 point

I thought he was saying that once you have a Super AI, you don't have to deal with politics.

Comment author: Desrtopa 08 February 2013 02:35:48PM 0 points

That doesn't sound like something I'd infer from his previous comment:

We've got to deal with politics eventually. The whole world isn't going to listen to the Singularity Institute just because they've got a Friendly AI, and it's not like those cognitive biases will disappear by that time.