
TheAncientGeek comments on A forum for researchers to publicly discuss safety issues in advanced AI - Less Wrong Discussion

Post author: RobbBB 13 December 2014 12:33AM




Comment author: TheAncientGeek 14 December 2014 11:13:03AM, 0 points

> that a rationally discoverable set of ethics might not be as sensible a notion as it sounds.

That wasn't the point I thought I was making. My point was that the idea of a tractable set of moral truths had been sidelined rather than sidestepped... that it had been neglected on the basis of a promised simplification that has not been delivered.

Having said that, I agree that discoverable morality has a potential downside: it could be inconvenient to, or unfriendly towards, humans. The one true morality might be some deep ecology that required a much lower human population, among many other possibilities. That might have been a better argument against discoverable ethics than the one actually presented.

But on the other hand, human preference satisfaction seems a really bad goal, because many human preferences are awful - take the desire for power over others, for example. Otherwise human society wouldn't have wars, torture, abuse, and so on.

Most people have a preference for not being the victims of war or torture. Maybe something could be worked up from that.

> CEV is the main accepted approach at MIRI :-( I assumed it was one of many

I've seen comments to the effect that it has been abandoned. The situation is unclear.

Comment author: the-citizen 15 December 2014 05:39:33AM, 0 points

Thanks for the reply. That makes more sense to me now. I agree with a fair amount of what you say. I think you'd have a sense from our previous discussions of why I favour physicalist approaches to the morals of an FAI, rather than idealist or dualist ones, regardless of whether physicalism is true or false. So I won't go there. I pretty much agree with the rest.

EDIT> Oh, just on the deep ecology point: I believe that might be solvable by prioritising species based on genetic similarity to humans - so basically weighting humans highest and other species lower according to relatedness. I certainly wouldn't like to see an FAI adopting the "humans are a disease" view that some people hold, so hopefully we can find a way to avoid that sort of thing.

Comment author: TheAncientGeek 15 December 2014 12:19:33PM, 0 points

I think you have an idea from our previous discussions why I don't think your physicalism, etc., is relevant to ethics.

Comment author: the-citizen 17 December 2014 12:38:01PM, 0 points

Indeed I do! :-)

Comment author: ChristianKl 14 December 2014 11:42:48AM, 0 points

> the one true morality might be some deep ecology that required a much lower human population, among many other possibilities

Or simply extremely smart AIs > human minds.

Comment author: the-citizen 15 December 2014 05:44:46AM, 0 points

Yes, some humans seem to have adopted this view, in which intelligence moves from being a tool with instrumental value to being intrinsically/terminally valuable. I find the justification for this is often pretty flimsy, though quite a few people seem to hold the view. Let's hope an AGI doesn't, lol.