What's so bad about morality being a mere human construct - in other words, the notion that there is no "stone tablet" of morals? In fact, I think the notion that morality exists objectively, like some looming Platonic form, raises more questions than it answers.
I think the best way to construct this "morality" is to say that it has a quasi-mathematical existence: it's axiomatic, and it's augmented by empirical and logical reasoning.
Why accept it, why be moral? I feel the same way about this question as I do about the question of why somebody who believes "if A, then B," and also believes that A, should also believe that B.
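The inference pattern being appealed to here is the classical rule of modus ponens, which in standard logical notation reads:

```latex
\frac{A \rightarrow B, \quad A}{\therefore B}
```

Asking "why be moral?" on this view is like asking why one should accept the conclusion of this rule: at some point, the rule itself is the bedrock, not something justified by anything deeper.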
One theory (The Matrix spawned a lot of philosophical discussion, and even whole books) was that, unbeknownst even to themselves, the machines couldn't simply kill off the humans, for ethical reasons. There are obviously more efficient ways to generate energy, but the robots couldn't destroy their creators, so they came up with this elaborate scheme of harvesting energy from human bodies, never dwelling on the fact that the whole process was a net energy loss.