ChristianKl comments on A forum for researchers to publicly discuss safety issues in advanced AI - Less Wrong

Post author: RobbBB 13 December 2014 12:33AM




Comment author: TheAncientGeek 14 December 2014 11:13:03AM 0 points

that a rationally discoverable set of ethics might not be as sensible a notion as it sounds.

That wasn't the point I thought I was making. I thought I was making the point that the idea of tractable sets of moral truths had been sidelined rather than sidestepped...that it had been neglected on the strength of a simplification that has never been delivered.

Having said that, I agree that discoverable morality has a potential downside: it might be inconvenient to, or unfriendly for, humans. The one true morality might be some deep ecology that required a much lower human population, among many other possibilities. That might have been a better argument against discoverable morality than the one actually presented.

But on the other hand, human preference satisfaction seems a really bad goal: many human preferences in the world are awful. Take the desire for power over others, for example; otherwise human society wouldn't have wars, torture, abuse, and so on.

Most people have a preference for not being the victims of war or torture. Maybe something could be worked up from that.

CEV is the main accepted approach at MIRI :-( I assumed it was one of many.

I've seen comments to the effect that it has been abandoned. The situation is unclear.

Comment author: ChristianKl 14 December 2014 11:42:48AM 0 points

the one true morality might be some deep ecology that required a much lower human population, among many other possibilities

Or simply extremely smart AIs > human minds.

Comment author: the-citizen 15 December 2014 05:44:46AM 0 points

Yes, some humans seem to have adopted this view, where intelligence moves from being a tool with instrumental value to being intrinsically/terminally valuable. I find the justification for this is often pretty flimsy, though quite a few people seem to hold the view. Let's hope an AGI doesn't lol.