So we have a mysterious process that, with some deviations, has over time generally made values more like those we hold today. Looking back at the steps of change, we get the feeling that somehow this looks right.
Very well. We here at LW should be particularly familiar with the power of the human mind to construct, in hindsight, convincing narratives for nearly any hard-to-predict sequence of events. We also know of biases strong enough to produce the feeling that something is "morally superior for reasons beyond simply being different" (the halo effect, for starters), and those biases do indeed give us such feelings on other matters. So I hope I am not too bold to ask...
how exactly would you distinguish the universe we live in from one in which human moral change was determined by something like a random walk through value space? Now, naturally, a random walk through value space doesn't sound like something to which you would be willing to outsource future moral and value development. But then why does unknown process X, which happens to make you feel sort of good because you like what it has done so far, inspire so much confidence that you'd want a godlike AI to emulate its output quite closely?
Sure, it is better in the Bayesian sense than a process whose output so far you wouldn't have liked, but we don't have any empirical comparison of results against an alternative process, or do we? Also consider other changes, ones that feel right merely because they make values more similar to our own. It seems plausible that these kinds of changes in values and morality are in fact far more common. Even if our ancestors would have judged them neutral changes (which seems highly doubtful), they are clearly hijacking our attention away from the fuzzy category of "right in a qualitatively different way, not merely similar to my own values" that is put forward as the basis for going with the current process.
But then again, perhaps I simply feel discomforted by such implicit narratives of moral progress, considering that North Korean society has demonstrably constructed a narrative with itself at the apex that feels just as good from the inside as ours does. Considering that similar comments of mine have been upvoted in the past, I think at the very least a substantial minority agrees with me that the standard LW discourse and state of thought on this matter is woefully inadequate. I mean, how is it possible that this process apparently inspires such confidence in LWers, while another process, one that has also given us comparably felicitous change, change that feels so right to us humans that we often invoke an omnipotent benevolent agent to explain the result, can terrify us once we think about it clear-headedly?
I have a hunch that if we looked at the guts of this process, we might find more of the old sanity-shattering Outer Gods waiting for us.
PS: Would anyone be interested in a top-level/Discussion post on some of my more developed thoughts and arguments on this? Or have I just been sloppy and missed relevant material that covers this? :)
Edit: This comment was adapted as an article for More Right, where I will be writing a full sequence on my thoughts on metaethics.
Why doesn't a parallel argument apply to material and scientific progress?
Presumably because it is possible to objectively assess the degree of material and scientific progress (whether they are good is another matter). We can tell that our current knowledge is better because we can say why it is better. If there were no epistemological progress, LW would be in vain!
So presumably the argument that there is no moral progress hinges on morality being something that can't be objectively arrived at or verified. But examples of rational discussion of morality...