dxu comments on A forum for researchers to publicly discuss safety issues in advanced AI - Less Wrong

Post author: RobbBB 13 December 2014 12:33AM


Comments (73)


Comment author: TheAncientGeek 13 December 2014 03:55:33PM *  1 point

MIRI makes the methodological proposal that the issue of friendliness (or morality, or safety) is simplified by dealing with the whole of human value, rather than by identifying a morally relevant subset. Having done that, it concludes that human morality is extremely complex. In other words, the payoff in terms of methodological simplification never arrives, for all that MIRI relieves itself of the burden of coming up with a theory of morality. Since dealing with human value in its totality is in absolute terms very complex, the possibility remains open that identifying the morally relevant subset of values is relatively easier (even if still difficult in absolute terms) than designing an AI to be friendly in terms of the totality of value, particularly since philosophy offers a body of work that seeks to identify simple underlying principles of ethics.

The idea of a tractable, rationally discoverable set of ethical principles is a weaker form of, or a lead-in to, one of the most common objections to the MIRI approach: "Why doesn't the AI figure out morality itself?".

Comment author: the-citizen 14 December 2014 07:39:07AM *  0 points

Thanks, that's informative. I'm not entirely sure what your own position is from your post, but I agree with what I take your implication to be - that a rationally discoverable set of ethics might not be as sensible a notion as it sounds. But on the other hand, human preference satisfaction seems a really bad goal - many human preferences in the world are awful - take a desire for power over others, for example. Otherwise human society wouldn't have wars, torture, abuse, etc. I haven't read up on CEV in detail, but from what I've seen it suffers from the confusion that decent preferences are somehow gained simply by obtaining enough knowledge. I'm not fully up to speed here, so I'm willing to be corrected.

EDIT> Oh... CEV is the main accepted approach at MIRI :-( I assumed it was one of many

Comment author: TheAncientGeek 14 December 2014 11:13:03AM *  0 points

that a rationally discoverable set of ethics might not be as sensible a notion as it sounds.

That wasn't the point I thought I was making. I thought I was making the point that the idea of tractable sets of moral truths had been sidelined rather than sidestepped...that it had been neglected on the basis of a simplification that has not been delivered.

Having said that, I agree that a discoverable morality has the potential downside of being inconvenient to, or unfriendly for, humans: the one true morality might be some deep ecology that required a much lower human population, among many other possibilities. That might have been a better argument against discoverable ethics than the one actually presented.

But on the other hand, human preference satisfaction seems a really bad goal - many human preferences in the world are awful - take a desire for power over others, for example. Otherwise human society wouldn't have wars, torture, abuse, etc.

Most people have a preference for not being the victims of war or torture. Maybe something could be worked up from that.

CEV is the main accepted approach at MIRI :-( I assumed it was one of many

I've seen comments to the effect that it has been abandoned. The situation is unclear.

Comment author: the-citizen 15 December 2014 05:39:33AM *  0 points

Thanks for the reply. That makes more sense to me now. I agree with a fair amount of what you say. I think you'd have a sense from our previous discussions of why I favour physicalist approaches to the morals of a FAI, rather than idealist or dualist ones, regardless of whether physicalism is true or false. So I won't go there. I pretty much agree with the rest.

EDIT> Oh, just on the deep ecology point: I believe that might be solvable by prioritising species based on genetic similarity to humans - so basically weighting humans highest and other species less, based on relatedness. I certainly wouldn't like to see a FAI adopting the "humans are a disease" view that some people hold, so hopefully we can find a way to avoid that sort of thing.

Comment author: TheAncientGeek 15 December 2014 12:19:33PM *  0 points

I think you have an idea from our previous discussions why I don't think your physicalism, etc., is relevant to ethics.

Comment author: the-citizen 17 December 2014 12:38:01PM *  0 points

Indeed I do! :-)

Comment author: ChristianKl 14 December 2014 11:42:48AM 0 points

the one true morality might be some deep ecology that required a much lower human population, among many other possibilities

Or simply: extremely smart AIs > human minds.

Comment author: the-citizen 15 December 2014 05:44:46AM 0 points

Yes, some humans seem to have adopted this view, where intelligence moves from being a tool with instrumental value to being intrinsically/terminally valuable. I often find the justification for this to be pretty flimsy, though quite a few people seem to hold the view. Let's hope an AGI doesn't lol.