Today's post, Ethical Inhibitions, was originally published on 19 October 2008. A summary (taken from the LW wiki):

If we had evolved ethical inhibitions, and then later evolved enough intelligence to predict being caught and refrain on that basis alone, the two mechanisms would be redundant: we would either evolve away the inhibition or evolve a specific inability to predict being caught. So the hypothesis that ethical inhibition exists to compensate for our underestimating the chance of getting caught seems clearly more reasonable. Even if the inhibition evolved first, it would have been selected away unless we really were underestimating that chance.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, in which we're going through Eliezer Yudkowsky's old posts in order, so that people who are interested can (re-)read and discuss them. The previous post was Protected From Myself, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence rerun, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.