Two problems: An obnoxious optimizing process isn't necessarily sentient.
Hence my caveat.
And how much would you really want such a continuation if it, say, tried to turn everything in its future light cone into little smiley faces?
I find the plausibility of a sentient AGI constrained to such a value to be vanishingly small.
If it helps, ask yourself how you feel about a human empire that expands through its light cone, preemptively destroying every single alien species before they can do anything, with a motto of "In the Prisoner's Dilemma, Humanity Defects!" That sounds pretty bad, doesn't it?
Not especially, no.
> I find the plausibility of a sentient AGI constrained to such a value to be vanishingly small.
Smileys are just one specific example of what could happen. (Moreover, it is an example disturbingly close to some actual proposals.) The space of possible minds is probably large; the portion of that space that does something approximating what we want is probably a small fraction of it.
> Not especially, no.
And the empire systematically wipes out human minorities and suppresses new scientific discoveries because they might disrupt stability...
Similar to the monthly Rationality Quotes threads, this is a thread for memorable quotes about Artificial General Intelligence.