Basically people tend to value stuff they perceive in the biophysical environment and stuff they learn about through the social environment.
So that reduces the complexity of the problem - it’s not a matter of designing a learning algorithm that both derives and comes to value human abstractions from observations of gas particles or whatever. That’s not what humans do either.
Okay then, why aren’t we star-maximizers or number-of-nation-states maximizers? Obviously it’s not just a matter of learning about the concept. The details of how values get hooked up to an AGI’s motivations will depend on the particular AGI design, but it will probably involve reward, prompting, scaffolding, or the like.
I don’t think the way you split things up into Alpha and Beta quite carves things at the joints. If you take an individual human as Beta, then stuff like “eudaimonia” is in Alpha - it’s a concept in the cultural environment that we get exposed to and sometimes come to value. The vast majority of an individual human’s values are not new abstractions that we develop over the course of our training process (for most people at least).
There is a difference between the claim that powerful agents are approximately well-described as being expected utility maximizers (which may or may not be true) and the claim that AGI systems will have an explicit utility function the moment they’re turned on, and maximize that function from that moment on.
I think this is the assumption OP is pointing out: “most of the book's discussion of AI risk frames the AI as having a certain set of goals from the moment it's turned on, and ruthlessly pursuing those to the best of its ability”. “From the moment it’s turned on” is pretty important, because it rules out value learning as a solution.
There will be future superintelligent AIs that improve themselves. But they will be neural networks; they will at the very least start out as a compute-intensive project; and in the infant stages of their self-improvement cycles they will understand and be motivated by human concepts, rather than being dumb specialized systems that are only good for bootstrapping themselves to superintelligence.
To be blunt, it's not just that Eliezer lacks a positive track record in predicting the nature of AI progress, which might be forgivable if we thought he had really good intuitions about this domain. Empiricism isn't everything; theoretical arguments are important too and shouldn't be dismissed. But-
Eliezer thought AGI would be developed from a recursively self-improving seed AI coded up by a small group, "brain in a box in a basement" style. He dismissed and mocked connectionist approaches to building AI. His writings repeatedly downplayed the importance of compute, and he has straw-manned writers like Moravec who did a better job at predicting when AGI would be developed than he did.
Old MIRI intuition pumps about why alignment should be difficult, like the "Outcome Pump" and "Sorcerer's Apprentice", are now forgotten; it came as a surprise that it would be easy to create helpful genies like LLMs that basically just do what we want. The remaining arguments for the difficulty of alignment are esoteric considerations about inductive biases, counting arguments, etc. So yes, let's actually look at these arguments and not just dismiss them, but let's not pretend that MIRI has a good track record.
Due partly to the choice of using 'value' as a speaker-dependent variable, some of the terminology used in this article doesn't align with how the terms are used by professional metaethicists. I would strongly suggest one of:
1) replacing the phrase "moral internalism" with a new phrase that better individuates the concept.
2) including a note that the phrase is being used extremely non-standardly.
3) adding a section explaining the layout of metaethical possibilities, using moral internalism in the sense intended by professional metaethicists.
In metaethics, moral internalism, roughly, is the disjunction:
'Value' is speaker-independent and universally compelling, OR 'Value' is speaker-dependent and is only used to indicate properties the speaker finds compelling.
This seems very un-joint-carvy from a value-alignment perspective, but most philosophers see internalism as a semantic thesis that captures the relation between moral judgements and motivation. The idea is: if someone says something has value, she values that thing. This is very, very different from how the term is used in this article.
I can provide numerous sources to back this up, if needed.
I have a few complaints/questions:
1) "What is goodness made out of" is not really a particularly active discussion in professional philosophy. I feel that this was put in there just to make analytic philosophers look silly. And anyways, if one believes in naturalistic moral properties (the stuff that we value,) then "what is goodness made out of" really is the question "what is good," which I think is probably a fine question. In this case, rephrasing in terms of AI just makes philosophical discussions more wordy and less accessible.
2) "Faced with any philosophically confusing issue, our task is to identify what cognitive algorithm humans are executing which feels from the inside like this sort of confusion, rather than, as in conventional philosophy, to try to clearly define terms and then weigh up all possible arguments for all 'positions'."
I don't get what the problem is with clearly defining terms and weighing up pros and cons for positions. Is conceptual analysis (http://philpapers.org/browse/conceptual-analysis) so problematic that it has no place in an improved version of philosophy? I think that there are at least a few parallels between that project in philosophy and the sentiment expressed in https://arbital.com/p/3y6/, for example.
3) "Most "philosophical issues" worth pursuing can and should be rephrased as subquestions of some primary question about how to design an Artificial Intelligence, even as a matter of philosophy qua philosophy."
What is "philosophy qua philosophy?"
"This imports the discipline of programming into philosophy. In particular, programmers learn that even if they have an inchoate sense of what a computer should do, when they actually try to write it out as code, they sometimes find that the code they have written fails (on visual inspection) to match up with their inchoate sense. Many ideas that sound sensible as English sentences are revealed as confused as soon as we try to write them out as code."
How would one translate questions like "Are there unverifiable truths?" or "under what conditions does the parthood relation hold?" into AI-speak?
I don’t think perfect surveillance is inevitable.
I would prefer it, though. I don’t know any other way to prevent people from doing horrible things to minds running on their computers. It wouldn’t need to be publicly broadcast though, just overseen by law enforcement. I think this is much more likely than a scenario where everything you see is shared with everyone else.
Unfortunately, my mainline prediction is that people will actually be given very strong privacy rights, and will be allowed to inflict as much torture on digital minds under their control as they want. I’m not too confident in this though.