ArisKatsaris comments on Complexity based moral values. - Less Wrong

-6 Post author: Dmytry 06 April 2012 05:09PM


Comment author: ArisKatsaris 06 April 2012 05:48:16PM * 4 points

His morality arose from those laws of physics

Plus the process of a few hundred million years of evolutionary pressures.

Do you think simulating those years and extrapolating the derived values from that simulation is clearly easier and simpler than extrapolating the values from e.g. a study of human neural scans/human biochemistry/human psychology?

Comment author: David_Gerard 06 April 2012 10:21:34PM 2 points

Do you think simulating those years and extrapolating the derived values from that simulation is clearly easier and simpler than extrapolating the values from e.g. a study of human neural scans/human biochemistry/human psychology?

It's not clear to me how the second is obviously easier. How would you even do that? Are there simple examples of doing this that would help me understand what "extrapolating human values from a study of human neural scans" would entail?

Comment author: Dmytry 06 April 2012 06:25:34PM * 1 point

One could e.g. run a sim of bounded-intelligence agents competing with each other for resources, then pick the best one, which will implement tit-for-tat and more complex solutions that work. It was already the case with the iterated prisoner's dilemma that there wasn't some enormous number of amoral solutions, much to the surprise of AI researchers of the time, who wasted their efforts trying to make some sort of nasty, sneaky Machiavellian AI.

edit: anyhow, I digress. The point is that when something is derivable via simple rules (even if impractically), like the laws of physics, that should enormously boost the likelihood that it is derivable in some more practical way.
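The tournament dynamic alluded to above can be sketched in a few lines. The following is a toy illustration, not Axelrod's actual entrant pool: a round-robin iterated prisoner's dilemma over a hypothetical strategy set (the payoff values are the standard ones), in which nice-but-retaliatory strategies come out ahead of unconditional defection.

```python
# Toy round-robin iterated prisoner's dilemma, illustrating the result
# alluded to above: nice-but-retaliatory strategies beat pure defection.
# The strategy set here is illustrative, not Axelrod's actual entrant pool.

PAYOFF = {  # (my move, their move) -> my payoff; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opp):        # cooperate first, then mirror the opponent
    return opp[-1] if opp else "C"

def grim_trigger(opp):       # cooperate until the opponent ever defects
    return "D" if "D" in opp else "C"

def tit_for_two_tats(opp):   # defect only after two defections in a row
    return "D" if opp[-2:] == ["D", "D"] else "C"

def always_cooperate(opp):
    return "C"

def always_defect(opp):
    return "D"

def play(a, b, rounds=200):
    """One iterated match; each player sees only the opponent's history."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a(moves_b), b(moves_a)
        score_a += PAYOFF[(ma, mb)]
        score_b += PAYOFF[(mb, ma)]
        moves_a.append(ma)
        moves_b.append(mb)
    return score_a, score_b

strategies = {f.__name__: f for f in
              [tit_for_tat, grim_trigger, tit_for_two_tats,
               always_cooperate, always_defect]}

totals = {name: 0 for name in strategies}
names = list(strategies)
for i, n1 in enumerate(names):
    for n2 in names[i + 1:]:  # every distinct pair meets once
        s1, s2 = play(strategies[n1], strategies[n2])
        totals[n1] += s1
        totals[n2] += s2

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:18} {score}")
```

In this particular field tit-for-tat ties with grim trigger at the top while unconditional defection finishes last; in Axelrod's actual tournaments, with a richer pool of entrants, tit-for-tat's forgiveness gave it a clear edge over less forgiving retaliators.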

Comment author: faul_sname 06 April 2012 08:34:03PM * 0 points

Would "yes" be an acceptable answer? It probably is harder to run the simulations, but it's worth a shot at uncovering some simple cases where different starting conditions converge on the same moral/decision making system.