This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. Feel free to rid yourself of cached thoughts by doing so in Old Church Slavonic. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
If you're new to Less Wrong, check out this welcome post.
This post is a continuation of a discussion with Stefan Pernar from another thread:
I think there's something to an absolute morality. Or at least, some moralities are favoured by nature over others - and those are the ones we are more likely to see.
That doesn't mean that there is "one true morality" - different moral systems might be equally favoured - but it does mean that moral relativism is dubious: some moralities really are better than others.
There have been various formulations of the idea of a natural morality.
One is "goal system zero" - for that, see:
http://rhollerith.com/blog/21
Another is my own "God's Utility Function":
http://originoflife.net/gods_utility_function/
...which is my take on Richard Dawkins's idea of the same name:
http://en.wikipedia.org/wiki/God's_utility_function
...but based on Dewar's maximum entropy principle - rather than on Richard's selfish genes.
On this site, we are surrounded by moral relativists - who differ from us on the is-ought problem:
http://en.wikipedia.org/wiki/Is-ought_problem
I do agree with them about one thing - and it's this:
If it were possible to create a system driven by self-directed evolution - one in which natural selection played only a subsidiary role - it might be possible to temporarily create what I call "handicapped superintelligences":
http://alife.co.uk/essays/handicapped_superintelligence/
...which are superintelligent agents that deviate dramatically from God's utility function.
So - in that respect, the universe will "tolerate" other moral systems - at least temporarily.
So, in a nutshell, we agree that there is an objective basis to morality - but apparently disagree on its formulation.
By unobjectionable values I mean those that would not automatically and eventually lead to one's extinction. Or, more precisely: a utility function becomes irrational when it is intrinsically self-limiting, in the sense that it will eventually lead to one's inability to generate further utility. Hence my suggested utility function of 'ensure continued co-existence'.
This utility function seems to be the only one that does not end in the inevitable termination of the maximizer.
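To make that claim concrete, here is a minimal toy sketch (my own illustration, not something from the thread). It compares a maximizer whose harvesting policy destroys the resource base it depends on with one that takes only what regrows - the 'continued co-existence' style of policy. The growth rate, horizon, initial stock and harvest amounts are all arbitrary illustrative assumptions.

# Toy model: two agents harvest utility from a renewable resource that
# regrows in proportion to what remains. A "self-limiting" policy exhausts
# the base and can generate no further utility; a policy that preserves the
# base keeps generating utility for as long as the simulation runs.

def simulate(harvest_policy, steps=50, stock=100.0, growth_rate=0.1):
    """Return (total utility gathered, last step at which any utility was gathered)."""
    total, last_productive_step = 0.0, 0
    for t in range(1, steps + 1):
        stock += growth_rate * stock            # resource regrows while any of it remains
        harvest = min(harvest_policy(stock), stock)
        if harvest <= 0:
            continue                            # nothing left to harvest - the maximizer is stuck
        stock -= harvest
        total += harvest
        last_productive_step = t
    return total, last_productive_step

# "Self-limiting": always grab a large fixed amount, which exceeds the regrowth.
greedy = simulate(lambda stock: 20.0)
# "Continued co-existence": harvest matches the regrowth at the initial stock level,
# so the resource base never shrinks.
sustainable = simulate(lambda stock: 10.0)

print("greedy:      total utility %.1f, last productive step %d" % greedy)
print("sustainable: total utility %.1f, last productive step %d" % sustainable)

On this toy model the greedy policy stops being able to generate any utility after a handful of steps, while the sustainable policy keeps accumulating utility for the whole horizon - which is the sense in which the greedy utility function is "self-limiting".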