Vladimir_Nesov comments on (One reason) why capitalism is much maligned - Less Wrong

1 Post author: multifoliaterose 19 July 2010 03:48AM




Comment author: Vladimir_Nesov 19 July 2010 02:36:42PM *  6 points

I can't imagine how you could come to the conclusion that SIAI/FHI have zero or negative expected value.

SIAI has a higher risk of producing uFAI than your average charity.

Comment deleted 19 July 2010 03:47:55PM
Comment author: Vladimir_Nesov 19 July 2010 03:59:35PM 4 points

They could be dangerously deluded, for example, even if their aim is right. Currently, I don't believe they are, but I gave an example of how you could possibly come to the conclusion that SIAI has negative expected value.

Comment author: FAWS 19 July 2010 03:59:02PM *  3 points

Maybe FAI is impossible, humanity's only hope is to avoid the emergence of any super-human AIs, fooming is difficult and slow enough for that to be a somewhat realistic prospect, and almost-friendly AI is a lot more dangerous because it is less likely to be destroyed in time?

Comment author: Vladimir_Nesov 19 July 2010 04:05:03PM *  3 points

Then a sane variant of SIAI should figure that out, produce documents that argue the case, and try to promote a ban on AI. (Of course, FAI is possible in principle, by its very problem statement, but it might be more difficult than for humanity to grow up by itself.)

Comment author: FAWS 19 July 2010 04:10:17PM 0 points

(Of course, FAI is possible in principle, by its very problem statement, but it might be more difficult than for humanity to grow up by itself.)

Could you rephrase that? I have no idea what you are saying here.

Comment author: Vladimir_Nesov 19 July 2010 04:14:34PM *  5 points

FAI is a device for producing a good outcome. Humanity itself is such a device, to some extent. FAI as AI is an attempt to make that process more efficient: to understand the nature of good and design a process for producing more of it. If it's in practice impossible to develop such a device significantly more efficiently than humanity, then we just let the future play out, guarding it against known failure modes, such as AGI with arbitrary goals.

Comment author: FAWS 19 July 2010 04:20:41PM 2 points

Thank you, now I see how the short version says the same thing, even though it sounded like gibberish to me before. I think I agree.

Comment deleted 19 July 2010 04:04:13PM *
Comment author: Vladimir_Nesov 19 July 2010 05:11:54PM *  1 point

Now what kind of civilized rational conversation is that?

Comment deleted 19 July 2010 03:45:00PM *