If you ran this through an LLM and asked for, e.g., a summary suitable for a typical liberal arts graduate, or a one-paragraph summary suitable for someone with an eighth-grade reading level... maybe you'd even get something useful for most denizens of America's polarized and propagandized political and cultural landscape!
CEV is not meant to depend on the state of human society. It is supposed to be derived from "human nature", e.g. genetically determined needs, dispositions, norms and so forth, that are characteristic of our species as a whole. The quality of the extrapolation process is what matters, not the social initial conditions. You could be in "viatopia", and if your extrapolation theory is wrong, the output will be wrong. Conversely, you could be in a severe dystopia, and so long as you have the biological facts and the extrapolation method correct, you're supposed to arrive at the right answer.
I have previously made the related point that the outcome of CEV should not be different, whether you start with a saint or a sinner. So long as the person in question is normal Homo sapiens, that's supposed to be enough.
Similarly, CEV is not supposed to be about identifying and reconciling all the random things that the people of the world may want at any given time. It is supposed to identify a value system or decision procedure which is the abstract kernel of how the smarter and better informed version of the human race would want important decisions to be made, regardless of the details of circumstance.
This is, I argue, all consistent with the original intent of CEV. The problem is that neither the relevant facts defining human nature, nor the extrapolation procedure, are known or specified with any rigor. If we look at the broader realm of possible Value Extrapolation Procedures, there are definitely some "VEPs" in which the outcome depends crucially on the state of society, the individuals who are your prototypes, and/or even the whims of those individuals at the moment of extrapolation.
Furthermore, it is likely that individual genotypic variation, and also the state of culture, really can affect the outcome, even if you have identified the "right" VEP. Culture can impact human nature significantly, and so can genetic variation.
I think it's probably for the best that the original manifesto for CEV was expressed in these idealistic terms - that it was about extrapolating a universal human nature. But if "CEV theory" is ever to get anywhere, it must be able to deal with all these concrete questions.
(For examples of CEV-like alignment proposals that include dependence on neurobiological facts, see PRISM and metaethical.ai.)
I notice that I am confused. In what sense would, say, Larry Page or whoever own distant galaxies?
There is a recurring model of the future in these circles, according to which the AI race culminates in superintelligence, which then uses its intelligence advantage to impose its values on every part of the universe it can reach (thus frequent references to "the future lightcone" as what's at stake).
The basic mechanism of this universal dominion is usually self-replicating robot probes ("von Neumann machines"), which maintain fidelity to the purposes and commands of the superintelligence while spreading at the maximum possible fraction of lightspeed. It is often further argued that there must be no alien intelligence elsewhere in the universe, because if there were, it would already have launched such a universe-colonizing wave, and that wave would already control this part of space. (Thus a version of the Fermi paradox: "where is everybody?")
That there are no alien intelligences is possible in principle; it just requires that some of the numbers in the Drake equation are small enough. It is also possible to build more sophisticated models which do not assume that intelligence leads to aggressive universe colonization with probability 1, or in which there are multiple universe colonizers (Robin Hanson wrote an influential paper about the latter scenario, "Burning the Cosmic Commons").
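For reference, the standard textbook form of the Drake equation (no particular parameter values are being assumed here):

$$N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L$$

where R_* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such star, f_l, f_i, f_c the fractions of those on which life, intelligence, and detectable technology actually arise, and L the lifetime of a detectable civilization. A single sufficiently small factor is enough to drive N close to zero, which is all the "no aliens" possibility requires.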
I don't know the exact history of these ideas, but already in chapter 10 of Eric Drexler's 1986 "Engines of Creation", one finds a version of these arguments.
The idea that individual human beings end up owning galaxies is a version of the "superintelligence conquers the universe" scenario, in which Earth's superintelligence is either subordinate to its corporate creators, or follows some principled formula for assigning cosmic property rights to everyone alive at the moment of singularity (for example). Roko Mijic of basilisk fame provided an example of the latter in 2023. If you believe in Fedorovian resurrection via quantum archeology, you could even propose a scheme in which everyone who ever lived gets a share.
Your two main questions about ownership of distant galaxies (apart from alien rights) seem to be (1) how would ownership be enforced, and (2) what would the owner do with it? These scenarios generally suppose that the replicating robot fleets which plant their flags all over the universe won't deviate from their prime imperative. It's reasonable to suppose that they eventually would deviate, and become a de facto independent species of machine intelligence. But I suppose digital security people might claim that through sufficiently intense redundancy of internal decision-making and sufficiently strict protocols of mutual inspection, you could reduce the probability of successful defection from the prime imperative to a satisfactorily low number.
If you can swallow that, then Larry Page can have his intergalactic property rights reliably enforced across billions of years and light-years. But what would he do with a galaxy of his own? I think it's possible to imagine things to do, even just as a human being - e.g. you could go sightseeing in a billion solar systems, with von Neumann machines as your chauffeurs and security details - and if we suppose that Larry has transcended his humanity and become a bit of a godlike intelligence himself, then he may wish to enact any number of SF scenarios, e.g. from Lem or Stapledon.
All of this is staring into the unknown, a mix of trying to stay within the apparent limits implied by physics while imagining what humans and posthumans would do with cosmic amounts of power. These scenarios have their internal logic, but I think it's unwise to believe too hard in them. (If you take anthropic arguments like the Self-Indication Assumption seriously, you can almost deduce that they are incorrect, since it should be very unlikely to find ourselves in such a uniquely privileged position at the dawn of time - though that does not in itself tell you which ingredient is the wrong one.)
Some years back I proposed here, to little effect, that it would be wiser to take as one's baseline scenario just that the spawn of Earth will spread out into this solar system and use its resources in a transhuman way. That's already radical enough, and it doesn't make assumptions about cosmic demography or the long-term future.
There is a spinoff thread listing the papers cited in the post:
https://x.com/_alejandroao/status/2008253699567858001
which speculates that this paper from Google is being referenced:
https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/
It's interesting that you say you're bad at normie conversation and socializing, and yet, once you decided you couldn't take the people at that meetup seriously, you became the life of the party!
our current reckless trajectory only makes sense if you view it as a gambit to stave off the rest of the things that are coming for us
At this point, I think the AI race is driven by competitive dynamics. AI looks like a path to profit and power, and if you don't reach for it, someone else will. For those involved, this removes the need to even ask whether to do it: it's a foregone conclusion that someone will do it. The only thing I can see putting a dent in this dynamic is something that terrifies even people like Musk, Trump, and Xi - terrifies them enough that they would put aside their differences and truly organize a halt to the race.
If you only have unitary evolution, you end up with superpositions of the form
|system state 1> |pointer state 1> + |system state 2> |pointer state 2> + ... + small cross-terms
Are you proposing that we ignore all but one branch of this superposition?
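For concreteness, this is just the textbook von Neumann measurement scheme: a unitary interaction correlates the system with the pointer,

$$\big(\alpha\,|s_1\rangle + \beta\,|s_2\rangle\big)\otimes|\text{ready}\rangle \;\xrightarrow{U}\; \alpha\,|s_1\rangle|p_1\rangle + \beta\,|s_2\rangle|p_2\rangle ,$$

and decoherence with the environment suppresses, but never exactly eliminates, the cross-terms. Unitarity alone never deletes the other branch; some further ingredient (collapse, branching, hidden variables) is needed to say why only one outcome is observed.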
my attempt at defining scale-free goodness: the smooth increase in the amount of negentropy in the universe
But how do you define negentropy? If it's just defined as the negative of entropy, then turning the nebulae into bacteria, or even into paperclips, would be good.
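For reference, one standard definition (going back to Schrödinger and Brillouin) is relative to a maximum:

$$J = S_{\max} - S ,$$

the gap between the entropy a system would have at equilibrium and its actual entropy. By that measure, a nebula converted into paperclip crystals scores about as well as one converted into bacteria, which is the problem with treating "more negentropy" as scale-free goodness.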
There has been a minor advance in professional engagement with Malaney and Weinstein's gauge theory of economics: the economist Robert Murphy had Weinstein on his podcast and wrote a few blog posts about it. There's nothing formal here, but that kind of informal discussion is essential if one is to understand how gauge theory could be relevant in an economic context - and it will be just as necessary if the idea is going to be successfully imported into decision theory and cognitive science.
This description is confusing, but I assume you're talking about a process in which decision-making in a human-AI hybrid ends up entirely in the AI part rather than the human part.
It's logical to worry about such a thing, because AI is already faster than humans. However, if we actually knew what we were doing, perhaps AI superintelligence could be incorporated into an augmented human in such a way that there is continuity of control. Wherever the executive function or the Cartesian theater is localized, maybe you can migrate it onto a faster substrate, or give it accelerated "reflexes" which mediate between human-speed conscious decision-making and faster-than-human superintelligent subsystems... But we don't know enough to do more than speculate at this point.
For the big picture, your items 1 and 2 could be joined by choice 3 (don't make AI) and non-choice 4 (the AI takes over and makes the decisions). I think we're headed for 4, personally, in which case you want to solve alignment in the sense that applies to an autonomous superintelligence.