Kaj_Sotala comments on [link] Disjunctive AI Risk Scenarios - Less Wrong Discussion
(Responding to the whole paragraph, but I don't want to quote it all.) I would be interested to hear a definition of "AI risk" that does not reduce to "risk of an unfriendly outcome", where unfriendliness is itself defined in terms of friendliness, i.e. conformance to human morality. If, like me, you reject the idea of a consistent, discoverable morality in the first place, and therefore find friendliness to be an ill-formed, inconsistent idea, then it's hard to say anything concrete about AI risk either. If you have a better definition that does not reduce to alignment with human morality, please provide it.
Mapping the problem starts with defining what the problem is. What is AI risk, without reference to dubious notions of human morality?
To start with, there are all the normal, benign failures that happen in any large-scale software project and require human intervention. Say the AGI crashes. Or the database that holds its memories becomes inconsistent. Or it deadlocks while choosing actions because of a race condition. The humanity-threatening failure modes presume that the AGI, on its first break-out attempt, suffers no ordinary engineering defects, or that if it does, the humans operating it simply fix the defect and turn it back on. I'm not interested in any arguments that assume the latter, and the former is highly conjunctive: every one of many independent subsystems has to work flawlessly on the first try.
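To make the deadlock example concrete, here is a minimal sketch of the kind of lock-order inversion I mean. The subsystem names (`planner`, `memory_writer`) are invented for illustration, not drawn from any actual AGI design:

```python
import threading
import time

# Two shared resources, standing in for hypothetical "action selection"
# and "memory" subsystems.
action_lock = threading.Lock()
memory_lock = threading.Lock()

def planner():
    with action_lock:        # holds the action lock...
        time.sleep(0.1)      # widen the race window for the demo
        with memory_lock:    # ...and now needs the memory lock
            print("planner: chose an action")

def memory_writer():
    with memory_lock:        # holds the memory lock...
        time.sleep(0.1)      # widen the race window for the demo
        with action_lock:    # ...and now needs the action lock
            print("writer: committed a memory")

t1 = threading.Thread(target=planner, daemon=True)
t2 = threading.Thread(target=memory_writer, daemon=True)
t1.start()
t2.start()
t1.join(timeout=2)
t2.join(timeout=2)
if t1.is_alive() and t2.is_alive():
    # Lock-order inversion: each thread holds one lock and waits on the
    # other, so neither ever proceeds. A human has to notice and fix it.
    print("deadlocked: each thread is waiting on the lock the other holds")
```

Nothing here is exotic; it's exactly the mundane class of bug that ships in ordinary large systems all the time, which is the point.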
I may have misread your intent, and if so I apologize. The first sentence of your post here made it seem like you were countering a criticism, i.e. advocating for the original position, so I read your posts in that context and may have inferred too much.
To talk about risk you need to define "bad outcomes". You don't have to define them in terms of morality, but you have to define them somehow.