If people share your objective, then in a positive ASI world maybe we can create many happy human people quasi 'from scratch'. Unless, of course, you have yet another unstated objective of aiming to make many non-artificially created humans happy instead.
On a high level I think the answer is reasonably simple:
It all depends on the objective function we program/train into it.
And, fwiw, in maybe slightly more fanciful situations, there could also be some sort of evolutionary process among future ASIs, meaning only those with a strong instinct for survival/duplication (and/or for killing off competitors?) (and/or with minor or major improvements) would eventually be the ones still around in the future. Although I could also see this 'many competing individuals' view becoming a bit obsolete with ASI, as the distinction between many decentralized individuals and one more unified single unit may no longer be so meaningful; it all becomes a bit weird.
I partly have a rather opposite intuition: a (certain type of) positive ASI scenario means we sort out many things quickly, incl. how to transform our physical resources into happiness, without this capacity being strongly tied to the number of people around at the start of it all.
That doesn't mean yours can't hold under some potential circumstances, but it's unclear to me that those would be the dominant set of possible circumstances.
I think (i) your reasoning is flawed, though (ii) I actually do have some belief in something related to what you say, even if barely anyone will agree with it.
(i) YOUR BAYESIAN REASONING IS FLAWED:
As Yair points out, one can easily draw a different conclusion from your starting point, and maybe it's best to stop there. Still, here is an attempt to track why in Bayesian terms. It's all a bit trivial, but its worth may be this: if you really believe the conclusions in the OP, you can take this as a starting point and pinpoint where exactly you'd argue a Bayesian implementation of the reflection ought to look different.
Assume we have, in line with your setup, two potential states of the world - without going into detail as to what these terms would even mean:
A = Unified Consciousness
B = Separate Consciousness for each individual
The world is, of course, exactly the same in both cases, except for this underlying feature. So any Joe born in location xyz at date abc will be that exact same Joe born then and there under either hypothesis A or B, except that the underlying nature of his consciousness differs in the sense of A vs. B.
We know there are roughly 9 bn existing humans, and vastly many more merely potential ones. These potential and actual numbers are the same in world A and world B; only their consciousness(es) is/are somehow of a different nature.
Let's start with an even prior:
P(Unified Consciousness) = P(Separated Consciousnesses) = 0.5
Now, consider in both hypothetical worlds a random existing human # 7029501952, born to the name of Joe, among the 9 bn existing ones. Joe can indeed ask himself: "Given that I exist - wow, I exist! - how likely is it that there is a unified vs. a separate consciousness?" He does the Bayesian update given the evidence at hand. From his perspective:
P(A | Joe exists) = P(Joe exists | A) x P(A) / P(Joe exists)
P(B | Joe exists) = P(Joe exists | B) x P(B) / P(Joe exists)
As we're in a bit of a weird thought experiment, you may argue for one or the other of the following ways of evaluating the likelihood P(Joe exists | ...) (I think the first makes more sense, as we're talking about his perspective, but if you happen to prefer seeing it the other way round, it won't change anything):
Either, from Joe's perspective, conditional on him being there to ask the question at all: P(Joe exists | A) = P(Joe exists | B) = 1.
Or, from an outside view, treating his birth as one unlikely draw among all potential humans: P(Joe exists | A) = P(Joe exists | B) = p for some small p, the same p under both, since the two worlds contain exactly the same people.
If you substitute that in, you get one of
P(A | Joe exists) = 1 x 0.5 / 1 = 0.5, or
P(A | Joe exists) = p x 0.5 / p = 0.5.
And the same 0.5 in both cases for P(B | Joe exists).
So, the probability of A and of B remains at 0.5 just as it initially was.
In simplified words (which, like the maths, feel a bit trivial): by definition only the existing humans - whether with atomic consciousnesses or somehow one single connected one - exist, and thus only they can ask themselves about their existence. The fact that they exist, despite the many hypothetical humans individually only rarely becoming actual existences, doesn't reduce the probability of their having been born into a world of type B as opposed to type A. I.e., whatever our prior for world A vs. world B, your type of reasoning does not actually yield any changed posterior.
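To make the null update concrete, here is a minimal Python sketch (my own addition, not part of the argument above); the 0.5 prior and the placeholder existence probabilities are purely illustrative:

```python
# Bayes update for P(A | Joe exists) when the likelihood of Joe existing is the
# same under A (unified consciousness) and B (separate consciousnesses),
# since both worlds contain exactly the same people.

def posterior_A(prior_A, p_exists_given_A, p_exists_given_B):
    p_exists = p_exists_given_A * prior_A + p_exists_given_B * (1 - prior_A)
    return p_exists_given_A * prior_A / p_exists

# 'From his perspective' (probability 1) or any tiny birth probability p:
for p in (1.0, 0.1, 1e-9):
    print(p, posterior_A(0.5, p, p))   # the posterior stays at 0.5 every time
```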
(ii) I THINK UNIFIED CONSCIOUSNESS - IN SOME SENSE - MAKES SORT OF SENSE
FWIW I'm half convinced we can sort of know we're more 'one' than 'separate', as it follows from an observation and a thought experiment: (a) there's not much more in "us" at any given moment than an instantaneous self plus memories and intentions/preferences regarding a future self that happens to be in the same 'body'; and (b) any random selection from a large set of sleeping/cloning/awaking thought experiments shows we can quite happily imagine ourselves to 'be' an entirely different future person in the next moment. Imho this is best made sense of if there is not really a stable, well-defined long-term self, but instead either no such thing as a self in any meaningful way (something a bit illusionist), or a wholly flimsy/random continuation of self - which may well best be described as there being a single self, or something like it. (Half esoterically, I derive from this that I should really care about everyone's welfare as much as about that of my own physical longer-term being, though it's all fuzzy.) I try to explain this in Relativity Theory for What the Future 'You' Is and Isn't.
Recent Mechanistic Interpretability (MI) work shows Large Language Models (LLMs) have emotional representations with geometric structure matching human affect. This doesn't prove LLMs deserve moral consideration, but it establishes a necessary condition.
Re "establishes a necessary condition": It seems rather than proving it to be a necessary condition, you assume it to be a necessary condition; while instead, I think we could well imagine that "geometric structures matching human affect" (unless you define that category as so broad that it becomes a bit meaningless) are instead not the only way to sentience i.e. moral consideration.
Agree, though, that more generally WBE can be a useful starting point for thought experiments on AI sentience, forcing a common starting point for discussion. Although even at that starting point there can be two positions: the usual one you invoke, plus illusionism (which I personally think is underrated, even if I agree it feels very hard to entertain).
I was always slightly suspicious of the claims that we had some number of times (5 or so?) come closer to entering the big nuclear war than to avoiding it. But if this passage is accurate, the fact that some of the usual claims are so easy to put into perspective would suggest that in some communities we are a bit more affected by sensationalism than I'd have thought. Interesting, thanks!
[tone to be taken with a grain of salt, meant as a proposition but I thought to write it a bit provocatively]
No, the more fundamental problem is: WHATEVER it tells you, you can NEVER infer with anything like certainty whether it's conscious (at least if we agree that by conscious we mean sentient). Why do I write such a preposterous thing, as if I knew that you cannot know? Very simple: presumably we agree that we cannot be certain A PRIORI whether any type of current CPU, with whatever software run on it, can become sentient. If there are thus two possible states of the world,
A. current CPU computers cannot become sentient
B. with the right software run on it, sentience can arise
Then, because once you take Claude and its training method & data you can perfectly track bit by bit why it spits out its sentience-suggestive & deep speak, you know your observations about the world you find yourself in are just as probable under A as under B! The only valid Bayesian inference then is: having observed hippy's sentience-suggestive & deep speak, you're just as clueless as before about whether you're in B or in A.
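In case it's useful, the same point as a tiny Bayes calculation (my own sketch, with made-up numbers): if the observed outputs are exactly as probable under A as under B, the likelihood ratio is 1 and the posterior simply equals whatever prior you started with.

```python
def posterior_B(prior_B, p_obs_given_B, p_obs_given_A):
    # P(B | observed outputs) via Bayes' rule
    evidence = p_obs_given_B * prior_B + p_obs_given_A * (1 - prior_B)
    return p_obs_given_B * prior_B / evidence

# The 0.3 prior and 0.8 likelihoods are arbitrary; equal likelihoods for the
# sentience-suggestive outputs leave the prior untouched:
print(posterior_B(0.3, 0.8, 0.8))   # -> 0.3
```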
tl;dr: The Econ 101 model of Monopolistic Competition describes exactly the basic market effect you're getting at.
Explanation:
While Econ 101 in its most basic form is admittedly a bit stupid and blind to the more interesting questions you address, once you start from that model and look at the effects of market entry, two first-order effects challenge your a priori of entry automatically reducing net welfare: entry tends to push prices down and output up, raising consumer surplus, and it adds variety whose value the entrant cannot fully appropriate.
This may be called Econ 102. You find a ton written about it under keyword "business-stealing effect". Conclusion: In some cases free entry has, on the margin at the equilibrium, positive net welfare effects, in some negative.
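To illustrate how the sign can come out either way, here is a small numeric sketch (my own addition, not from the post) of the textbook comparison between the free-entry and the welfare-maximizing number of firms: symmetric Cournot with linear demand P = a - Q, constant marginal cost c, and a per-firm fixed cost F. All parameter values are arbitrary; with these particular numbers the business-stealing effect dominates and free entry overshoots.

```python
# Free entry vs. welfare-maximizing number of firms in a symmetric Cournot
# market with linear demand P = a - Q, marginal cost c, fixed cost F per firm.

a, c, F = 10.0, 2.0, 0.5   # illustrative parameters (assumed, not from the post)

def outcomes(n):
    """Per-firm profit and total welfare with n symmetric firms."""
    q = (a - c) / (n + 1)               # Cournot output per firm
    Q = n * q
    P = a - Q
    profit = (P - c) * q - F            # per-firm profit net of the fixed cost
    welfare = 0.5 * Q**2 + n * profit   # consumer surplus + total profits
    return profit, welfare

n_free = max(n for n in range(1, 200) if outcomes(n)[0] >= 0)  # entry until profit ~ 0
n_opt = max(range(1, 200), key=lambda n: outcomes(n)[1])       # planner's choice

print("free entry:", n_free, "firms, welfare", round(outcomes(n_free)[1], 2))
print("optimum:   ", n_opt, "firms, welfare", round(outcomes(n_opt)[1], 2))
# Here free entry overshoots the optimum (business stealing); with stronger
# variety/appropriability effects the comparison can flip, hence the ambiguity.
```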
I'm actually sympathetic to the idea that many firms mostly gain through sneakiness that is remote from any actual increase in value for customers and society, but it seems to me your post mixes up (i) the fundamental effect, which you don't analyze in the required depth (the additional basic competition effects mentioned above), and - in some of your later examples - (ii) the trickier marketing/sneakiness advantages some firms profit from. These should be separated more clearly for a fruitful discussion of the topic. To be clear about what I mean with the latter: in my experience, a large share of firms mainly tinker with how to beat the competition not by creating better products but by selling them better, or by cutting costs in gray-zone areas, or at least in ways that are not mostly value-adding but instead create other negative externalities - all without it necessarily going in the direction of yielding higher value for the customer and/or society. And thus also: a huge share of bullshit jobs; plus, even where jobs aren't full bullshit jobs, they still have a high share of bullshit components (although I guess we should be careful to distinguish within-company bullshittiness from outward-going bullshit, of which only the latter goes in the direction of your argument).
FWIW Tangential topic: Low marginal costs
A related effect I find annoying is that we rely on competition and markets while more and more goods are quasi-artificially scarce (low-marginal-cost goods) yet come with high fixed costs. Independent private businesses don't want to, and cannot, sell these goods at efficiently low prices. If by pure luck we stumble into a future with a benevolent AI, we'll have some directed communistic element complementing the market economy. Otherwise, sadly, we'll probably be stuck with inefficiently high prices and, yes, maybe with too many firms doing similar things without gains justifying the duplication costs.
Agree that now turning 40 vs. 20 need not make a big difference for those aware of the weirdness of the time.
But: it seems like a stretch to say it was already like that a few decades ago. The sheer uncertainty now seems objectively different from, qualitatively truly incomparable to, 20 years ago (well, at least if the immediacy of potential changes is considered too).
I don't see this defeating my point: as a premise, GD may dominate from the perspective of merely improving the lives of existing people, as we seem to agree. Unless we have a particular bias for long lives specifically of the currently existing humans over those of humans created in the future, ASI may not be a clear reason to save more lives: it may not only make existing lives longer and nicer, but may also reduce the burden of creating any aimed-at number of - however long-lived - lives; the number of happy future human lives thus hinges less on the preservation of currently actual lives.