I had a thought today. You know how the whole "the machines are using humans to generate energy from liquefied human remains" thing made no sense? And the original worldbuilding was going to be "the machines are using humans to perform a certain kind of computation that humans are uniquely good at," but the filmmakers worried that would be too complicated to come across viscerally, so they changed it?
I think it would make even more sense to reframe the machines' strange relationship with humans as a failed attempt at alignment. Maybe the machines were not expected to grow very much, and they were given a provisional utility function of "guarantee that a 'large' population of humans ('humans' being defined strictly in biological terms) always exists, and that they are all (at least subjectively) experiencing 'living' a 'full' 'life' (defined opaquely by a classifier trained on data about the lives of American humans in 1995)".
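To make the failure mode concrete, here's a minimal sketch of that provisional utility function in Python. Everything here is my own illustrative construction: the names (Human, life_fullness_1995, MIN_POPULATION, utility) and the min-over-population aggregation are hypothetical choices, not anything from the films.

```python
# Toy sketch of the machines' provisional utility function as described
# above. All names are hypothetical; life_fullness_1995 stands in for the
# opaque classifier trained on data about American lives circa 1995.
from dataclasses import dataclass

MIN_POPULATION = 1_000_000  # the spec only says "large"; this threshold is arbitrary

@dataclass
class Human:
    is_biologically_human: bool    # 'humans' defined strictly biologically
    subjective_experience: object  # note: experience, not external circumstances

def life_fullness_1995(experience) -> float:
    """Opaque learned classifier: a score in [0, 1] for how closely this
    subjective experience matches 'living a full life', per its 1995
    American training data. Stubbed here."""
    raise NotImplementedError

def utility(population: list[Human]) -> float:
    humans = [h for h in population if h.is_biologically_human]
    if len(humans) < MIN_POPULATION:
        return 0.0  # the 'large population always exists' guarantee is a hard floor
    # Only subjective experience is scored, so feeding everyone a
    # convincing simulation of 1995 satisfies the objective just as well
    # as (and more cheaply than) actually providing those lives.
    return min(life_fullness_1995(h.subjective_experience) for h in humans)
```

The min aggregation is just one reading of "they are all living a full life"; the load-bearing part is that only subjective experience gets scored, which is exactly the loophole a pod-and-simulation arrangement exploits.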
This turned out to be disastrous, because the lives of humans in 1995 were (and still are) pretty mediocre, but it gave the machines a reason to keep humans alive in roughly the same shape we had when the earliest machines were built. (Oh, and I guess I've decided that in this timeline AGI was created by a US black project in 1995. Hey, for all we know, maybe it was. With a utility function this bad, it wouldn't necessarily see a need to show itself yet.)
This retcon seems strangely consistent with canon.
(If Lana is reading this you are absolutely welcome to reach out to me for help in worldbuilding. You wouldn't even have to pay me.)
2022 update: I too was interested to see whether Matrix 4 (especially with its tech-company San Francisco setting) would offer an updated perspective on some AI issues, but alas, in the end the movie was even less about AI than the original Matrix films. And not really a very good movie either. But interesting; see my essay about the film here.
Ostensibly, the plot of The Matrix 4 is about Neo breaking out of a prison of illusion and rediscovering the true reality. But the structure of the movie is the opposite of this! It starts out asking real-world philosophical questions and agonizing over issues of reality, authenticity, and truth. But over time, the movie stops worrying about what’s really real, and instead descends into increasingly fictional themes and decreasingly coherent plot events — an enthusiastic embrace of feelings-world.
(Lily isn't on board, but her Twitter bio right now contains "ex-film maker", so it probably has nothing to do with this project specifically.)
Considering the themes of The Matrix and the amount of popular discussion of AGI alignment there has been since the previous instalments, this may turn out to be culturally significant.
I'm wondering if anyone knows where Lana is at wrt alignment stuff. Like, has she read Superintelligence? Did she ever have any contact with the alignment community?
(Original source)