This is an old post, but I can't help myself: I'm a law student, and I'm going to explain why this doesn't really hold water. The Anderson test is used to determine whether a particular statute is constitutional; showing that a different system would better advance a voter's interests is simply not part of the analysis. All we're trying to find out is whether a particular statute sufficiently considered voting interests. If another system would pass the test, that's great, but all that tells us is that alternative voting systems would be constitutional if they were in force, which they're not.
The same goes for your idea about rational basis review. I have no doubt that alternative voting systems would, if they were enacted, pass rational basis review. That is necessary, but it is not sufficient. What would be sufficient is if those laws were passed instead of the current ones.
There is no room for comparing the current statute to an alternative in rational basis review. A requirement that the government use the least restrictive means is found in strict scrutiny, not rational basis. If there is any comparison implicit in the rational basis test, it cannot go as far as you're hoping: it certainly doesn't turn the test into a comparison against alternative statutes to see which is more optimal. The statute under review does not need to be "rational" in the sense that it is optimal; it needs to be rational in the sense that it gets you from A to B. There is a rational basis for me to believe that biking to school will help me get there; it does not matter that I could get there faster by driving. Driving would also pass this rational basis test, but that does not affect whether my choice to bike to school was a rational choice of means to get there.
I applaud you for your effort, but I would encourage you to pursue this passion in law school -- law lends itself particularly poorly to self-teaching.
I think whether or not people change their values is a matter that can be resolved sufficiently by polling them at various ages to see what they think about stuff.
I think you're getting wrapped up in some extraneous details. Natural selection happens because when stuff keeps making itself, there tends to be more of it, and evolution occurs as a result. We're going to keep evolving and there's gonna keep being natural selection no matter what. We don't have to worry about it. We can never be misaligned with it; it's just what's happening.
I don't think this line of argumentation actually challenges the concept of stochastic parroting on a fundamental level. The abilities of generative ML to create images, solve math problems, speculate about stories, and so on were all known to the researchers who coined the term; the things you point to, far from challenging the concept of stochastic parrots, are assumed to be true by those researchers.
When you point to these models not understanding how reciprocal relationships between objects work, but excuse it by pointing to the model's ability to explain who Tom Cruise's mother is, I think you miss an opportunity to unpack that. If we imagine LLMs as stochastic parrots, this is a textbook example: the LLM cannot make a very basic inference when presented with novel information. It only gets this "right" when you ask it about something that's already been written about many times in its training data: a celebrity's mother.
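To be concrete about the kind of probe I mean, here's a rough sketch you could run yourself. It uses the OpenAI Python client; the model name is just a placeholder, and results will vary by model and version -- treat it as an illustration, not a benchmark:

```python
# Rough sketch of a forward-vs-reverse relation probe (OpenAI Python client).
# The model name is a placeholder; swap in whatever you have access to.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def ask(question: str) -> str:
    """Send a single question to the chat model and return its answer."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# Forward direction: a fact that appears in the training data constantly.
print(ask("Who is Tom Cruise's mother?"))

# Reverse direction: the same fact, phrased the way it is rarely written down.
print(ask("Who is Mary Lee Pfeiffer's famous son?"))
```

The interesting case is when the forward question gets answered confidently and the reverse one doesn't, even though both encode the same fact.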
The model is excellent at reproducing reasoning it has been shown examples of: Tom Cruise has a mother, so we can reason that his mother has a son named Tom Cruise. For your sound example, there is information about how sound propagation works on the internet for the model to draw on. But could the LLM speculate about an entirely new type of physics problem that hasn't been written about before and fed into its training data? How far can the model move laterally into entirely new types of reasoning before it starts spewing gibberish or repeating known facts?
You could fix a lot of these problems. I have no doubt that at some point they'll work out how to get ChatGPT to handle these reciprocal relationships. But the point of that critique isn't to celebrate a failure of the model and say it can never be fixed; the point is to look at these edge cases to help understand what's going on under the hood: the model is replicating reasoning it's seen before, and yes, that's impressive, but it cannot reliably apply reasoning to truly novel problem types because it is not reasoning. You may not find that troubling, and that's your prerogative, truly, but I do think it would be useful for you to grapple with the idea that your arguments are compatible with the stochastic parrots concept, not a challenge to it.