There is a spinoff thread listing papers cited in the post:
https://x.com/_alejandroao/status/2008253699567858001
It speculates that this paper from Google is the one being referenced:
https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/
It's interesting that you say you're bad at normie conversation and socializing, and yet, once you decided you couldn't take the people at that meetup seriously, you became the life of the party!
our current reckless trajectory only makes sense if you view it as a gambit to stave off the rest of the things that are coming for us
At this point, I think the AI race is driven by competitive dynamics. AI looks like a path to profit and power, and if you don't reach for it, someone else will. For those involved, this removes the need to even ask whether to do it: it's a foregone conclusion that someone will do it. The only thing I see even putting a dent in these competitive dynamics is something happening that terrifies even people like Musk, Trump, and Xi, something terrifying enough that they would put aside their differences and truly organize a halt to the race.
If you only have unitary evolution, you end up with superpositions of the form
|system state 1> |pointer state 1> + |system state 2> |pointer state 2> + ... + small cross-terms
Are you proposing that we ignore all but one branch of this superposition?
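For concreteness, here is the standard way that form arises, as a minimal sketch (the kets and labels below are mine, for illustration):

```latex
% Ideal (von Neumann) premeasurement: the apparatus, starting in a "ready"
% state |r>, unitarily correlates its pointer with each system state:
%     U |s_i>|r> = |s_i>|p_i>
% Linearity of U then forces the entangled sum for any superposed input;
% imperfect system-apparatus interactions are what add the small cross-terms.
\[
U\Big(\sum_i c_i\,|s_i\rangle\Big)|r\rangle \;=\; \sum_i c_i\,|s_i\rangle|p_i\rangle
\]
```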
my attempt at defining scale-free goodness: the smooth increase in the amount of negentropy in the universe
But how do you define negentropy? If it's just defined as the negative of entropy, then turning the nebulae into bacteria, or even into paperclips, would be good.
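To make the objection concrete, here is one common textbook definition (which may or may not be what you intended), under which any entropy reduction counts, regardless of what the resulting structure is:

```latex
% One common definition of negentropy: the deficit relative to maximum entropy.
% Any process that lowers S(X) raises J(X), whether the resulting low-entropy
% structure is a bacterium, a brain, or a warehouse of paperclips.
\[
J(X) \;=\; S_{\max} - S(X)
\]
```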
There has been a minor advance in professional engagement with Malaney and Weinstein's gauge theory of economics: the economist Robert Murphy had Weinstein on his podcast, and wrote a few blog posts about it. There's nothing formal here, but that kind of informal discussion is essential if one is to understand how gauge theory could be relevant in an economic context, and it will be just as essential if the idea is to be successfully imported into decision theory and cognitive science.
Quantum Darwinism reminds me of one part of the Copenhagen catechism, the idea that the quantum-to-classical transition (as we now call it) somehow revolves around "irreversible amplification" of microscopic to macroscopic properties. In quantum Darwinism, the claim instead is that properties become objectively present when multiple observers could agree on them. As https://arxiv.org/abs/1803.08936 points out on its first page, this is more like "inter-subjectivity" than objectivity, and there are also edge cases where the technical criterion simply fails. Like every other interpretation, quantum Darwinism has not resolved the ontological mysteries of quantum theory.
As for this Natural Latents research program, it seems to be studying the compressed representations of the world that brains and AIs form, and looking for what philosophers call "natural kinds", in the form of compressions and categorizations that a diverse variety of learning systems would naturally make.
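As a rough sketch of what I take the program's core definition to be (my paraphrase; the actual papers work with approximate versions of these conditions), a "natural" latent is one that both screens the observables off from each other and is redundantly recoverable from them:

```latex
% My rough paraphrase of the two conditions on a latent variable Lambda over
% observables X_1, ..., X_n:
%   Mediation:  the X_i are mutually independent conditional on Lambda
%   Redundancy: Lambda is recoverable from the observables even with any one X_i dropped
\[
P(X_1,\dots,X_n \mid \Lambda) \;=\; \prod_i P(X_i \mid \Lambda),
\qquad
\Lambda \perp X_i \mid X_{\neq i} \ \ \mathrm{for\ each}\ i
\]
```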
Cybercrime units must be part of the answer. Cybercrime is constantly evolving anyway, and online is AI's native environment, so that is where it is most likely to perform nefarious deeds.
I would add that crimes committed by agentic AI that is fixed in place strike me as a far more likely beginning than an AI escaping or copying itself.
I don't personally see a need for uncertainty about the nature of values to produce additional uncertainty about which values to hold. But then, I find the idea of an objective morality that has nothing to do with qualic well-being and decision-making cognition to be very unmotivated. The only reason we even think there is such a thing as moral normativity is that we have moral feelings and moral cognition. There may well be unknown facts about the world which would cause us to radically change our moral priorities, but to cause that change, they would have to affect our feelings and thought in a quasi-familiar way.
So I'd say a large part of the key to precise progress in metaethics is progress in understanding the nature of the mind, and especially the role of consciousness in it, since that remains the weakest link in scientific thinking about the mind, which is much more comfortable with purely physical and computational accounts.
As far as the AIs are concerned, presumably there is immense scope for independent philosophical thought by a sufficiently advanced AI to alter beliefs that it has acquired in a nonreflective way. There could even already be an example of this in the literature, e.g. in some study of how the outputs of an AI differ with and without chain-of-thought-style deliberation.
Kohlberg and Gilligan's conflicting theories of moral development (mentioned in a reply here by @JenniferRM) are an interesting concrete clash about ethics. I think of it in combination with Vladimir Nesov's reply to her recent post on superintelligence, where he identifies some crucial missing capabilities in current AI - specifically sample efficiency and continuous learning. If we could, for example, gain a mechanistic understanding of how the presence or absence of those capabilities would affect an AI's choice between Kohlberg and Gilligan, that would be informative.
There is a recurring model of the future in these circles, according to which the AI race culminates in superintelligence, which then uses its intelligence advantage to impose its values on every part of the universe it can reach (thus frequent references to "the future lightcone" as what's at stake).
The basic mechanism of this universal dominion is usually self-replicating robot probes ("von Neumann machines"), which maintain fidelity to the purposes and commands of the superintelligence while spreading at the maximum possible fraction of lightspeed. It is often further argued that there must be no alien intelligence elsewhere in the universe, because if there were, it would already have launched such a universe-colonizing wave, which would by now control this part of space. (Thus a version of the Fermi paradox: "where is everybody?")
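The timescale arithmetic behind that argument is simple; the numbers below are round figures, just to show the orders of magnitude:

```latex
% Crossing the Milky Way (~100,000 light-years) at a tenth of lightspeed
% takes on the order of a million years, against a galactic age of ~10 billion
% years; even generous allowances for replication stopovers leave thousands of
% times the needed window for a single colonizer to reach every system.
\[
t_{\mathrm{cross}} \;\sim\; \frac{10^{5}\ \mathrm{ly}}{0.1\,c} \;=\; 10^{6}\ \mathrm{yr}
\;\ll\; 10^{10}\ \mathrm{yr} \;\approx\; t_{\mathrm{galaxy}}
\]
```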
That there are no alien intelligences is possible in principle; it just requires that some of the numbers in the Drake equation be small enough. It is also possible to build more sophisticated models which do not assume that intelligence leads to aggressive universe colonization with probability 1, or in which there are multiple universe colonizers (Robin Hanson wrote an influential paper about the latter scenario, "Burning the Cosmic Commons").
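For reference, the Drake equation in its usual form; the "no alien intelligences" position only needs one or two of these factors to be tiny:

```latex
% N   = expected number of detectable civilizations in the galaxy
% R*  = rate of star formation            f_p = fraction of stars with planets
% n_e = habitable planets per such star   f_l = fraction where life appears
% f_i = fraction developing intelligence  f_c = fraction that become detectable
% L   = lifetime of the detectable phase
\[
N \;=\; R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
\]
```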
I don't know the exact history of these ideas, but already in chapter 10 of Eric Drexler's 1986 "Engines of Creation", one finds a version of these arguments.
The idea that individual human beings end up owning galaxies is a version of the "superintelligence conquers the universe" scenario, in which Earth's superintelligence is either subordinate to its corporate creators, or follows some principled formula for assigning cosmic property rights to everyone alive at the moment of singularity (for example). Roko Mijic of basilisk fame provided an example of the latter in 2023. If you believe in Fedorovian resurrection via quantum archeology, you could even propose a scheme in which everyone who ever lived gets a share.
Your two main questions about ownership of distant galaxies (apart from alien rights) seem to be: (1) how would ownership be enforced, and (2) what would the owner do with it? These scenarios generally suppose that the replicating robot fleets which plant their flags all over the universe won't deviate from their prime imperative. It's reasonable to suppose that they eventually would deviate, and become a de facto independent species of machine intelligence. But I suppose digital security people might claim that, through sufficiently intense redundancy of internal decision-making and sufficiently strict protocols of mutual inspection, you could reduce the probability of successful defection from the prime imperative to a satisfactorily low number.
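A toy version of that security argument, under the strong (and load-bearing) assumption that the inspection mechanisms fail independently:

```latex
% If each of n independent inspection mechanisms misses a defecting subsystem
% with probability p, all of them miss it with probability p^n.
% E.g. p = 0.01 and n = 10 gives 10^{-20}; everything hangs on the
% independence assumption holding over cosmological timescales.
\[
P(\mathrm{undetected\ defection}) \;=\; p^{\,n},
\qquad 0.01^{10} \;=\; 10^{-20}
\]
```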
If you can swallow that, then Larry Page can have his intergalactic property rights reliably enforced across billions of years and light-years. But what would he do with a galaxy of his own? I think it's possible to imagine things to do, even just as a human being - e.g. you could go sightseeing in a billion solar systems, with von Neumann machines as your chauffeurs and security details - and if we suppose that Larry has transcended his humanity and become a bit of a godlike intelligence himself, then he may wish to enact any number of SF scenarios, e.g. from Lem or Stapledon.
All this is staring into the unknown, a mix of trying to stay within the apparent limits implied by physics and imagining what humans and posthumans would do with cosmic amounts of power. These scenarios have their internal logic, but I think it's unwise to believe too hard in them. (If you take anthropic arguments like the Self-Indication Assumption seriously, you can almost deduce that they are incorrect, since it should be very unlikely to find ourselves in such a uniquely privileged position at the dawn of time, though that does not in itself tell you which ingredient is the wrong one.)
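A toy version of that "too unlikely to be this early" point, with purely illustrative numbers:

```latex
% Suppose roughly 1e11 humans have existed so far, and the colonized lightcone
% hosts something like 1e30 observers. A uniformly sampled observer is then
% one of the early ones with probability ~1e-19.
\[
P(\mathrm{early}) \;\approx\; \frac{10^{11}}{10^{30}} \;=\; 10^{-19}
\]
```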
Some years back I proposed here, to little effect, that it would be wiser to take as one's baseline scenario just that the spawn of Earth will spread out into this solar system and use its resources in a transhuman way. That's already radical enough, and it doesn't make assumptions about cosmic demography or the long-term future.
https://www.lesswrong.com/posts/czxjKohS7RQjkBiSD/thinking-soberly-about-the-context-and-consequences-of