> Now, this "ought" symbol could appear in the ideal formal theory in one of only two ways: Either the "ought" symbol is an undefined symbol appearing among the axioms, or the "ought" symbol is subsequently defined in terms of the more-primitive "is" symbols used to express the axioms.

OK, I am also a moral naturalist, and I hold the same view as Harris does (at least I think I do). And I have to say that the easiest way to resolve your dichotomy is to say that the "ought" is embedded in the axioms. But even then it feels like a very strange thing to say. Let me explain.

Imagine I have two apples and eat one apple. How many apples do I have now? I can use mathematical logic to resolve the question. But mathematical logic is insufficient to establish the truth value of my claim of having two apples initially, nor can it establish whether I indeed ate the apple or whether the laws of the universe were broken and a new apple thermodynamagically appeared in my hand out of thin air. What I want to say is that you cannot logic something into existence. Logic can only tell you the truth values of conclusions under certain assumptions. You need to find out the truth values of those assumptions out there in the universe.
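To make this concrete, here is a minimal sketch in Python (all names are mine, made up for illustration): the deduction itself is valid no matter what, but whether the premises are true is an empirical question that the logic cannot answer.

```python
# Logic derives conclusions from premises; the truth of the
# premises must come from observing the world, not from the logic.

def apples_remaining(initial_apples: int, apples_eaten: int) -> int:
    """Pure deduction: valid whatever the premises' actual truth."""
    return initial_apples - apples_eaten

# These two premises are empirical claims about the universe.
# Logic cannot certify them; only observation can.
observed_initial = 2   # "I have two apples"  -- check by looking
observed_eaten = 1     # "I ate one apple"    -- check by looking

# Only *given* the premises does the conclusion follow.
print(apples_remaining(observed_initial, observed_eaten))  # -> 1
```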

Now imagine a choice-making agent capable of learning; say, the AlphaZero chess program. This is a computer program that is fed the rules of chess and is then capable of playing chess, refining its understanding of the game until it becomes a very strong chess player.

The program awaits input in the form of chess moves from an opponent. It then processes the moves, evaluates multiple options, and decides on an action, in the form of a chess move, that is most likely to satisfy its values: to win, or at least draw, the game of chess.
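A minimal sketch of that decision loop might look like the following (every function here is a hypothetical stand-in, not AlphaZero's actual code): score each legal reply with a value function and play the one that best serves the program's single built-in value, winning.

```python
import random
from typing import List

def legal_replies(position: str) -> List[str]:
    """Stand-in for a real move generator."""
    return ["Nf3", "e4", "d4"]

def expected_score(position: str, move: str) -> float:
    """Stand-in for a learned evaluation; a real engine returns a trained estimate."""
    return random.random()

def choose_move(position: str) -> str:
    # Evaluate every option and pick the one most likely to satisfy
    # the program's built-in value: winning the game.
    return max(legal_replies(position), key=lambda m: expected_score(position, m))

print(choose_move("position after 1.e4 e5"))
```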

Now one can ask why the program plays chess and why it does not do something else, such as working toward world peace or other goals people deem worthy. I think the answer is obvious: the program was created to play chess (and Go and shogi). It does not even value the aesthetics of the chess board or many other things superficially related to chess. To ask why a chess program plays chess and not something else is meaningless in this sense. It was created to play chess. It cannot do otherwise, given its programming. This is your moral axiom: AlphaZero values winning at chess.

But you cannot find out what AlphaZero values from some logical structure with theoretical axioms. The crucial premise of naturalistic morality is that all thinking agents, including moral agents, have physical substance that you can examine. You cannot change a moral agent's values without a corresponding change in its brain, and vice versa. So you can build moral statements from IS sentences all the way down. For example:

IS statement 1: AlphaZero is a chess program that was programmed so that it values winning at chess.

IS statement 2: Therefore it ought to make move X and not move Y when playing an opponent, because move Y is objectively a worse chess move than X.
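In code, the step from statement 1 to statement 2 is nothing more than a comparison carried out by a physically existing program. A minimal sketch (the moves and values are hypothetical):

```python
def value_of(move: str) -> float:
    """Stand-in for the engine's evaluation; an IS-fact about its code and weights."""
    return {"X": 0.9, "Y": 0.3}[move]

# IS statement 1, as code: the program is built to prefer higher-valued moves.
# IS statement 2 then follows as a factual comparison, not an extra axiom:
assert value_of("X") > value_of("Y")
print("Given its programming, the engine plays X, not Y.")
```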

Again, you may object that this is circular reasoning and that I am assuming the ought right in statement 1. But that would be like saying that I am assuming I have two apples. Sure, I am assuming that. And what exactly is the problem? Is this not how we apply logic in our daily experience? Having two apples is a fact about the universe, a perfectly correct IS statement. AlphaZero wanting to win a game of chess is a perfectly correct IS statement about AlphaZero, the program running in computer memory somewhere in a Google building. And wanting to eat is a correct IS statement about me, now typing this text.