Previous Posts:
Formal Metaethics and Metasemantics for AI Alignment
New MetaEthical.AI Summary and Q&A at UC Berkeley
This time I tried to focus less on the technical details and more on providing the intuition behind the principles guiding the project. I'm grateful for questions and comments from Stuart Armstrong and the AI Safety Reading Group. I've posted the slides on Twitter.
Abstract: We construct a fully technical ethical goal function for AI by directly tackling the philosophical problems of metaethics and mental content. To simplify our reduction of these philosophical challenges into "merely" engineering ones, we suppose that unlimited computation and a complete low-level causal model of the world and the adult human brains in it are available.
Given such a model, the AI attributes beliefs and values to a brain in two stages. First, it identifies the syntax of a brain's mental content by selecting a decision algorithm which is i) isomorphic to the brain's causal processes and ii) best compresses its behavior while iii) maximizing charity. The semantics of that content then consists first in sense data that primitively refer to their own occurrence and then in logical and causal structural combinations of such content.
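To make that first stage a bit more concrete, here is a minimal, purely illustrative Python sketch of scoring candidate decision algorithms against a brain model by the three criteria above. Every name here (Brain, CandidateAlgorithm, the compression and charity proxies) is my own invention for exposition; the actual construction is formalized very differently and with far more care.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# Toy types: a "brain" is a deterministic state machine over observations,
# and a candidate decision algorithm is another machine plus a mapping
# that interprets brain states as algorithm states.
BrainState = str
Observation = str
Action = str

@dataclass
class Brain:
    transitions: Dict[Tuple[BrainState, Observation], BrainState]
    actions: Dict[BrainState, Action]

@dataclass
class CandidateAlgorithm:
    transitions: Dict[Tuple[str, Observation], str]
    actions: Dict[str, Action]
    mapping: Dict[BrainState, str]  # brain state -> algorithm state

def is_isomorphic(brain: Brain, alg: CandidateAlgorithm) -> bool:
    """Criterion (i): the mapping commutes with transitions and preserves actions."""
    transitions_ok = all(
        alg.transitions.get((alg.mapping[s], o)) == alg.mapping[s2]
        for (s, o), s2 in brain.transitions.items()
    )
    actions_ok = all(alg.actions.get(alg.mapping[s]) == a
                     for s, a in brain.actions.items())
    return transitions_ok and actions_ok

def compression_score(alg: CandidateAlgorithm) -> float:
    """Criterion (ii), crudely: fewer distinct algorithm states = better compression."""
    return 1.0 / len(set(alg.mapping.values()))

def charity_score(alg: CandidateAlgorithm,
                  rational: Callable[[str, Action], bool]) -> float:
    """Criterion (iii), crudely: fraction of the algorithm's choices judged rational."""
    choices = list(alg.actions.items())
    return sum(rational(s, a) for s, a in choices) / len(choices)

def score(brain: Brain, alg: CandidateAlgorithm,
          rational: Callable[[str, Action], bool]) -> float:
    """Select the candidate with the highest score; isomorphism is a hard constraint here."""
    if not is_isomorphic(brain, alg):
        return float("-inf")
    return compression_score(alg) + charity_score(alg, rational)
```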
The resulting decision algorithm can capture how we decide what to do, but it can also identify the ethical factors that we seek to determine when we decide what to value or even how to decide. Unfolding the implications of those factors, we arrive at what we should do. Altogether, this allows us to imbue the AI with the necessary concepts to determine and do what we should program it to do.
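One schematic way to picture "unfolding the implications": treat the brain's own higher-order criteria (how it decides what to value) as a revision operator on its first-order values and iterate until they stabilize. The sketch below is only illustrative; the names (`unfold`, `revise`) and the fixed-point framing are my simplification, not the project's actual formalism.

```python
from typing import Callable, FrozenSet

Values = FrozenSet[str]

def unfold(initial: Values,
           revise: Callable[[Values], Values],
           max_steps: int = 1000) -> Values:
    """Repeatedly apply the brain's higher-order criteria ('revise')
    to its current values until they endorse themselves under reflection."""
    current = initial
    for _ in range(max_steps):
        updated = revise(current)
        if updated == current:  # fixed point: values stable under the brain's own standards
            return current
        current = updated
    raise RuntimeError("values did not converge within max_steps")

# Toy usage: a higher-order criterion that adds consistency as a value
# whenever any value is held, then stabilizes.
result = unfold(
    frozenset({"honesty"}),
    lambda vs: vs | ({"consistency"} if vs else frozenset()),
)
print(result)  # frozenset({'honesty', 'consistency'}) (order may vary)
```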
My aim in defining the AI's utility function is to specify our preferences and values in a way that is as philosophically correct as possible. It's compatible with this that, in practice, the (eventual scaled-down version of the) AI would use various heuristics and approximations to make its best guess based on "human-related data" rather than direct brain data. But I do think it's important for the AI to have an accurate concept of what those heuristics are supposed to be approximating.
But it sounds like you have a deeper worry that intentional states are not really out there in the world, perhaps because you think all that exists are microphysical states. I don't share that concern because I'm a functionalist or reductionist rather than an eliminativist. Physical states get to count as intentional states when they play the right role, in which case the intentional states are real. I bring in Chalmers' account of when a physical system implements a computation and combine it with Dennett's intentional stance to help specify what those roles are.