Vladimir_Nesov comments on Why safety is not safe - Less Wrong

48 Post author: rwallace 14 June 2009 05:20AM




Comment author: Vladimir_Nesov 15 June 2009 10:30:16AM *  6 points

Here it is again: there is no requirement that FAI be "ultra safe" and that anything less is unacceptable. That is a strawman. The requirement is that there be any chance at all that the outcome is good (preferably a greater one). Then there is a separate conjecture that, to have any chance at all, AI needs to be deeply understood.

If you think that being careful is unnecessary, that an ad-hoc approach can already be used productively, you are not disputing the need for Friendliness in AGI. You are disputing the conjecture that Friendliness requires care. This is not a normative question; it is a factual one.

The normative question is whether to think about the consequences of your actions, which is largely decided against, or rather dismissed as trivial, by far too many people who think they are working on AGI.

Comment author: whpearson 15 June 2009 11:08:19AM *  2 points

I got the impression from "do the impossible" that Eliezer was going for definitely safe AI, and that "might be safe" was not good enough. Edit: Oh, and the sequence on fun theory suggested that scenarios where humanity just survived were not good enough either.

I think we are so far away from having the right intellectual framework for creating AI, or even for thinking about its likely impact on the future, that the ad-hoc approach might be valuable for pushing us in the right direction or telling us what the important structure in the human brain is going to look like.

Comment author: Vladimir_Nesov 15 June 2009 11:16:19AM *  3 points

I got the impression from "do the impossible" that Eliezer was going for definitely safe AI, and that "might be safe" was not good enough.

The hypothesis here is that if you are unsure whether AGI is safe, it's not, and when you are sure it is, it's still probably not. Therefore, to have any chance of success, you have to be sure that you understand how success is achieved. This is a question of human bias, not of the actual probability of success. See also: Possibility, Antiprediction.

I also used to think that an ad-hoc approach brings insight, but after learning more I changed my mind.

Comment author: whpearson 15 June 2009 11:51:45AM 0 points

The hypothesis here is that if you are unsure whether AGI is safe, it's not, and when you are sure it is, it's still probably not.

I really didn't get that impression... Why worry about whether the AI will separate humanity if you think it might fail anyway? Surely it's better to spend more time making sure it doesn't fail.