Rukifellth comments on Why Politics are Important to Less Wrong... - Less Wrong

6 Post author: OrphanWilde 21 February 2013 04:24PM




Comment author: Rukifellth 22 February 2013 05:17:41AM 1 point [-]

No, really.

Comment author: RichardKennaway 22 February 2013 02:26:16PM 3 points [-]

Really. That really is what people are expecting of a strong FAI. Compared with us, it will be omniscient, omnipotent, and omnibenevolent. Unlike currently believed-in Gods, there will be no problem of evil because it will remove all evil from the world. It will do what the Epicurean argument demands of any God worthy of the name.

Comment author: Rukifellth 22 February 2013 02:44:20PM 0 points [-]

Are you telling me that if a wFAI were capable of eliminating war, famine and disease, it wouldn't be developed first?

Comment author: RichardKennaway 22 February 2013 06:13:38PM 2 points [-]

Well, I don't take seriously any of these speculations about God-like vs. merely angel-like creations. They're just a distraction from the task of actually building them, which no-one knows how to do anyway.

Comment author: Rukifellth 22 February 2013 06:40:17PM 0 points [-]

But still, if a wFAI was capable of eliminating those things, why be picky and try for sFAI?

Comment author: RomeoStevens 22 February 2013 09:41:25PM *  1 point [-]

Because we have no idea how hard it is to specify either. If, along the way, it turns out to be easy to specify wFAI and risky to specify sFAI, then the reasonable course is clear. Doubly so since a wFAI would almost certainly be useful in helping specify an sFAI.

Seeing as human values are a minuscule target, it seems probable that specifying wFAI is harder than sFAI, though.

Comment author: Rukifellth 25 February 2013 05:05:53AM 0 points [-]

"Specify"? What do you mean?

Comment author: RomeoStevens 25 February 2013 05:07:58AM 0 points [-]

Specifications, à la programming.

Comment author: Rukifellth 26 February 2013 05:30:20PM 0 points [-]

Why would it be harder? One could tell the wFAI to improve factors that are strongly correlated with human values, such as food stability, resources that cure preventable diseases (such as diarrhea, which, as we know, kills far more people than it should), and security from natural disasters.

Comment author: RomeoStevens 26 February 2013 07:57:13PM 0 points [-]

Because if you screw up specifying human values, you don't get wFAI; you just die (hopefully).