
Viliam_Bur comments on Do Earths with slower economic growth have a better chance at FAI? - Less Wrong Discussion

30 points · Post author: Eliezer_Yudkowsky, 12 June 2013 07:54PM




Comment author: Viliam_Bur, 13 June 2013 09:04:18AM, 1 point

Socialist economic policies, perhaps yes. On the other hand, full-blown socialism...

How likely is it that a socialist government would insist on having its party line hardcoded into the AI's values, and what would be the likely consequences? And how likely is it that the scientists working on the AI would be selected for their rationality, as opposed to their loyalty to the regime?

Comment author: AlexMennen, 13 June 2013 09:44:17AM, 2 points

How does anything in my comment suggest that I think brutal dictatorships increase the chance of successful FAI? I only mentioned socialist economic policies.

Comment author: Viliam_Bur, 13 June 2013 09:57:25AM, 2 points

I don't think you suggested that; I just wanted to head off a possible connotation (one that I think some people, including me, are likely to read in).

Note: I also didn't downvote your comment, because I think it is reasonable, so someone else probably made that interpretation, perhaps influenced by my comment. Sorry about that.

This said, I don't think a regime must be a brutal dictatorship to insist that its values be hardcoded into the AI's values. I can imagine nice people insisting that you hardcode in the Universal Declaration of Human Rights, religious tolerance, diversity, tolerance of minorities, preservation of cultural heritage, preservation of nature, etc. Actually, I imagine that most people would consider Eliezer a less reliable choice to work on Friendly AI than someone who professes all the proper applause lights.

Comment author: AlexMennen, 13 June 2013 01:10:10PM, 0 points

If a government pursued its own AGI project, that could be a danger, but not hugely more so than private AI work. To be much more threatening, it would have to monopolize AI research, so that organizations like MIRI couldn't exist. Even then, FAI research would probably be easier to do in secret than making money off of AI research (the primary driver of UFAI risk) would be.