One of the most annoying arguments when discussing AI is the perennial "But if the AI is so smart, why won't it figure out the right thing to do anyway?" It's often the ultimate curiosity stopper.
Nick Bostrom has defined the "Orthogonality thesis" as the principle that motivation and intelligence are essentially unrelated: superintelligences can have nearly any type of motivation (at least, nearly any utility function-based motivation). We're trying to get some rigorous papers out so that when that question comes up, we can point people to standard, and published, arguments. Nick has had a paper accepted that points out that the orthogonality thesis is compatible with a lot of philosophical positions that would seem to contradict it.
I'm hoping to complement this with a paper laying out the positive arguments in favour of the thesis. So I'm asking you for your strongest arguments for (or against) the orthogonality thesis. Think of trying to convince a conservative philosopher who's caught a bad case of moral realism - what would you say to them?
Many thanks! Karma and acknowledgements will shower on the best suggestions, and many puppies will be happy.
This doesn't make much sense as stated. Math is a collection of tools for making useful maps of a territory (in the local parlance), and the concept of numbers is one such tool. Numbers are not physical objects; they are part of the model. You cannot add numbers in the physical universe, only manipulate physical objects in it. One way to rephrase your statement is "Suppose we lived in a universe where combining two peanuts with another two peanuts does not give you four peanuts". This is already how it works for many physical objects in our universe: combine two blobs of ink and you get one blob of ink; combine one male rabbit and one female rabbit and the number of rabbits grows over time. If the universe you describe is at all predictable, it has some quantifiable laws, and the abstraction of those laws is what "math" would mean in that universe.
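To make the map/territory point concrete, here is a minimal sketch in Python; the object kinds and combination rules are invented purely for illustration. Physically combining objects is an operation in the territory, while adding their counts is an operation in the model, and the two only sometimes agree:

```python
# Toy "territory": each kind of object has its own physical combination rule.
# Integer addition is a *model* that happens to track some of these rules.

def combine_peanuts(n, m):
    # Peanuts are discrete and persistent: counts behave like integer addition.
    return n + m

def combine_ink_blobs(n, m):
    # Ink blobs merge on contact: any positive number of blobs yields one blob.
    return 1 if n + m > 0 else 0

def combine_rabbits(males, females, months=3):
    # Rabbits breed: the count grows over time instead of staying at males + females.
    count = males + females
    pairs = min(males, females)
    for _ in range(months):
        count += 4 * pairs  # assume each pair produces a litter of four
    return count

print(combine_peanuts(2, 2))    # 4  -- matches the addition model
print(combine_ink_blobs(2, 2))  # 1  -- the addition model fails here
print(combine_rabbits(1, 1))    # 14 -- and here
```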
The intended meaning of that sentence was that combining two of one thing with two of another does not give consistent results, regardless of what is being combined: two peanuts added to two peanuts do not reliably yield any particular quantity of peanuts, and the same holds for any other objects you might try to add together.
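Here is a corresponding minimal sketch of the universe that sentence intends (the randomized rule is purely hypothetical): combination there is not merely different from addition, it is inconsistent, so no fixed arithmetic of counts can be abstracted from it:

```python
import random

def combine(n, m):
    # In the hypothesized universe, putting n objects together with m objects
    # yields an unpredictable quantity: repeated trials with the same inputs
    # give different results, whatever the objects are.
    return random.randint(0, n + m + 3)

# Ten trials of "two peanuts plus two peanuts" need not agree with each other:
print([combine(2, 2) for _ in range(10)])
```

Even so, as the parent comment notes, if the outcomes showed any stable regularity, say a fixed distribution, that regularity would be the raw material for whatever that universe's inhabitants called math.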