I can empathize with a lot here, but one thing strikes me:
If you go to what is quasi the incarnation of the place where low IQ makes us fail - a PHILOSOPHY group - no wonder you end up appalled :-). Maybe next time go to a pub or anywhere else: despite possibly even lower-IQ persons there, they may be more insightful or interesting, as their discussions benefit from a broader spectrum of things than sheer core IQ.
Warning: this is more an imho beautiful, geeky, abstract high-level interpretation than something that resolves the case at hand with certainty.
:-)
I purposely didn't try to add any conclusive interpretation of it in my complaint about the bite-its-tail logic mistake.
But now that we're here :-):
It's great you made the 'classical' (even if not usually named as such) mistake so explicitly: even if you hadn't spelled it out, the two ideas would easily have swung along half-consciously in many of our heads without being fully resolved.
Much can be said about '10x as suspicious'; the funny thing is that as long as you conclude what you just reiterated, it again somewhat defeats the argument: you've just argued that with his 'low' bet we may - all things considered - simply let him go, whereas otherwise we wouldn't... Leaving aside all the other arguments around this particular case, I'm reminded of the following, which I think is the pertinent - even if a bit disappointing, as probabilistically fuzzy - way to think about it. It will also make sense of why some of us find it more intuitive that he'd surely have gone for 800k instead of 80k (let's ascribe this to your intuition so far), others the other way round (maybe we're allowed to call that the 2nd-sentence-of-Dana position), while some are more agnostic - and, in a basic sense, 'correct':
I think Game Theory calls what we end up with a "trembling hand" equilibrium (I might be misusing the terminology, as I remember the term better than the theory; either way, I'd still wager the equilibrium mechanism makes sense here at a high level of abstraction): a state where, if it were clear that 800k would have made more sense for the insider, he could choose 80k to be totally safe from suspicion, and in that world we'd see many '80k-size' frauds, as anyone could pull them off without creating any suspicion - and greedy people with the occasional opportunity will always exist. And in the world where we instead assume 80k was already perfectly suspect, he'd have zero reason not to go all out for the 800k if he tries at all...

In the end we're left with: it's just a bit ambiguous how much each scale of bet increases the suspiciousness, or, put more precisely: the increase in suspiciousness vaguely offsets the increase in payoff in many cases. I.e., it all becomes somewhat probabilistic. Some insider thieves will go for the high amount, some for the low one, and many of us will potentially fight about what that particular choice means as a fraud indicator - while, more importantly, trembling-hand-understanders, or really just calmer natures, see how little we can learn from the amount chosen, since in equilibrium it's systematically fuzzy along that dimension. If we were facing one single player who is the insider a gazillion times, he might adopt a probabilistic amount-strategy; in the real world we face the one-time-or-so random insider, whose incentive to play the high or the low amount may be explained by nuanced subtleties rather than by the simple high-level view of it all - because that high-level-only view merely spits out: probabilistically high or low; or, for a single case, 'might roughly just as well play the high amount as the low one'.
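To make the offsetting intuition a bit more tangible, here's a minimal toy sketch in Python (the expected_gain function and all numbers are my own illustrative assumptions, not estimates for the actual case): if the probability of getting caught rises with the bet size roughly in step with the payoff, the insider is close to indifferent between amounts, and the observed amount then carries little evidence.

```python
# Toy sketch of the "suspicion offsets payoff" intuition.
# All parameters are hypothetical and purely illustrative.

def expected_gain(amount, p_caught, penalty):
    """Expected payoff of an insider bet: win `amount` if undetected,
    pay `penalty` (monetary-equivalent of fines/prison/reputation) if caught."""
    return (1 - p_caught) * amount - p_caught * penalty

# Assume suspicion (probability of getting caught) grows with the bet size
# roughly enough to cancel the larger payoff:
low = expected_gain(amount=80_000, p_caught=0.02, penalty=2_000_000)
high = expected_gain(amount=800_000, p_caught=0.27, penalty=2_000_000)

print(low, high)  # ~38_400 vs ~44_000: same ballpark, so near-indifference
# When the two expected gains are comparable, the chosen amount is only a
# weak fraud indicator - and in a repeated setting this near-indifference
# is exactly what sustains a probabilistic (mixed) amount-strategy.
```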
I don't really claim nothing more detailed/specific could be said here that puts this general approach into perspective in this particular case, but from the little we have in the OP and the comments so far, I think the above reasonably applies.
Disagree. If you earn a few million or so a year, a few hundred thousand dollars, quick and easy, is still a nice sum to get quasi for free. Plus it's not very difficult to imagine that some not-extremely-high-up people likely enough had hints about what they might soon be directly involved with.
FWIW, an empirical example: a few years ago the super well regarded head of the prestigious Swiss National Bank had to go because of alleged dollar/franc insider trading (executed by his wife, via her art dealings), at a time when down-pegging the value of the Swiss franc to the weaker EUR was a daily question, with gains of - if I remember well - a few tens of thousands of dollars or so from the trade.
Note the contradiction in your argument:
You write (I add the bracketed part, but that's obviously pretty much exactly what's meant in your line of argument):
[I think the guy's trade is not as suspicious as others think because] why only bet 80k?
and two sentences later
And I don’t think the argument of “any more would be suspicious” really holds either here, betting $800k or $80k is about as suspicious
I don't see this defeating my point: as a premise, GD may dominate from the perspective of merely improving the lives of existing people, as we seem to agree. And unless we have a particular bias for long lives specifically of currently existing humans over future, newly created humans, ASI may not be a clear reason to save more lives: it may not only make existing lives longer and nicer, but may also reduce the burden of creating whatever aimed-at number of - however long-lived - lives; that number of happy future human lives thus hinges less on the preservation of currently existing ones.
If people share your objective, then in a positive ASI world maybe we can create many happy humans quasi 'from scratch'. Unless, of course, you have yet another unstated objective of aiming to make many non-artificially created humans happy instead...
On a high level I think the answer is reasonably simple:
It all depends on the objective function we program/train into it.
And, FWIW, in maybe slightly more fanciful scenarios there could also be some sort of evolutionary process among future ASIs, meaning only those with a strong instinct for survival/duplication (and/or for killing off competitors, and/or for minor or major self-improvements) would eventually be the ones still around. Although I could also see this 'many competing individuals' view becoming a bit obsolete with ASI, as the distinction between many decentralized individuals and one more unified single unit may not be so necessary; it all becomes a bit weird.
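For what it's worth, here's a tiny toy simulation of that selection argument (the whole setup - replication drive, death rate, resource cap - is a hypothetical construction of mine, not a model of actual ASI dynamics): agents differ only in their propensity to copy themselves, and after a while the surviving population is dominated by the strong replicators.

```python
import random

# Toy selection dynamics with purely illustrative parameters:
# each agent is just a "replication drive" r in [0, 1]; per round it
# copies itself with probability proportional to r and dies with a
# fixed probability, so high-r lineages gradually take over.

random.seed(0)
population = [random.random() for _ in range(200)]  # initial drives

for _ in range(200):  # rounds
    offspring = [r for r in population if random.random() < 0.10 * r]
    survivors = [r for r in population if random.random() > 0.05]
    population = (survivors + offspring)[:2000]      # crude resource cap

print(sum(population) / len(population))  # mean drive has drifted toward 1
```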
I partly have a rather opposite intuition: a (certain type of) positive ASI scenario means we sort out many things quickly, including how to transform our physical resources into happiness, without this capacity being strongly tied to the number of people around at the start of it all.
That doesn't mean yours can't hold under some potential circumstances, but it's unclear to me that those would be the dominant set of possible circumstances.
I think (i) your reasoning is flawed, though (ii) I actually - even if barely anyone will agree with it - have some belief in something related to what you say:
(i) YOUR BAYESIAN REASONING IS FLAWED:
As Yair points out, one can easily draw a different conclusion from your starting point, and maybe it's best to stop there. Still, here is an attempt at tracking why in Bayesian terms; it's all a bit trivial, but its value may be this: if you really believe the conclusions in the OP, you can take this as a starting point and pinpoint where exactly you'd argue a Bayesian implementation of the reflection ought to look different.
Assume we have, in line with your setup, two potential states of the world - without going into detail as to what these terms would even mean:
A = Unified Consciousness
B = Separate Consciousness for each individual
The world is, of course, exactly the same in both cases, except for this underlying feature. So any Joe born in location xyz at date abc will be that exact same Joe born then and there under either of the hypotheses A and B, except that the underlying nature of his consciousness differs in the sense of A vs. B.
We know there are vastly many potential humans and, compared to that, few - some 9 bn - actually existing ones. These potential and actual numbers are the same in world case A and world case B; just their consciousness(es) is/are somehow of a different nature.
Let's start with an even prior:
P(Unified Consciousness) = P(Separated Consciousnesses) = 0.5
Now, consider in both hypothetical worlds a random existing human #7029501952, born under the name of Joe, among the 9 bn existing ones. Joe can indeed ask himself: "Given that I exist - wow, I exist! - how likely is it that there is a unified vs. a separate consciousness?" He does the Bayesian update given his evidence at hand. From his perspective:
P(A | Joe exists) = P(Joe exists | A) x P(A) / P(Joe exists)
P(B | Joe exists) = P(Joe exists | B) x P(B) / P(Joe exists)
As we're in a bit of a weird thought experiment, you may argue for one or both of the following ways to evaluate the likelihood of his existence (I think the first makes more sense as we're talking about his perspective, but if you happen to prefer seeing it the other way round, it won't change anything):
P(Joe exists | A) = P(Joe exists | B) = 1 (from Joe's own perspective: he's there to ask the question at all)
P(Joe exists | A) = P(Joe exists | B) = ε for some tiny ε (from an outside, prior perspective) - crucially the same ε in both worlds, as A and B differ only in the nature of consciousness, not in who gets born
Either way, P(Joe exists) = P(Joe exists | A) x P(A) + P(Joe exists | B) x P(B) comes out to that same likelihood value, so if you substitute that in you get one of
P(A | Joe exists) = 1 x 0.5 / 1 = 0.5
P(A | Joe exists) = ε x 0.5 / ε = 0.5
And the same 0.5 in both cases for P(B | Joe exists).
So, the probability of A and of B remains at 0.5 just as it initially was.
In simplified words - which, just like the maths, feel a bit trivial: since by definition only the existing humans - whether with atomic consciousnesses or somehow one single connected one - exist, and can thus ask themselves about their existence, the fact that they exist even though each individual hypothetical human only rarely becomes an actual existence doesn't reduce the probability of their having been born into a world of type B as opposed to type A. I.e., whatever our prior for world A vs. world B, your type of reasoning does not actually yield any changed posterior.
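For concreteness, here's a minimal numeric sketch of the update above in Python (epsilon below is just an arbitrary stand-in for 'some tiny birth probability'; the only substantive assumption is the one already made in the text, namely that the likelihood of Joe existing is the same under A and B):

```python
# Bayesian update for "Joe exists", under the assumption that worlds A and B
# differ only in the nature of consciousness, not in who comes to exist.

prior_A = prior_B = 0.5

# The likelihood is the same under A and B on either reading:
# 1.0 from Joe's own perspective (he's there to ask), or some tiny epsilon
# from an outside, prior perspective.
for likelihood in (1.0, 1e-12):
    p_joe = likelihood * prior_A + likelihood * prior_B  # total probability
    posterior_A = likelihood * prior_A / p_joe
    posterior_B = likelihood * prior_B / p_joe
    print(posterior_A, posterior_B)  # 0.5 0.5 both times: posterior == prior
```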
(ii) I THINK UNIFIED CONSCIOUSNESS - IN SOME SENSE - MAKES SORT OF SENSE
FWIW I'm half convinced we can sort of know we're more 'one' than 'separate', as it follows from an observation and a thought experiment: (a) there's not much more in 'us' at any given moment than an instantaneous self plus memories and intentions/preferences regarding a future self that happens to be in the same 'body', and (b) almost any random selection from the large set of sleeping/cloning/awaking thought experiments shows we can quite happily imagine ourselves to 'be' an entirely different future person in the next moment. Imho this is best made sense of if there isn't really a stable, well-defined long-term self but instead either no self at all in any meaningful way (something a bit illusionist) or a wholly flimsy/random continuation of self - which may well best be described as there being a single self, or something like it. (Half esoterically, I derive from this that I should really care about everyone's welfare about equally, as opposed to mainly about that of my own physical longer-term being, though it's all fuzzy.) I try to explain this in Relativity Theory for What the Future 'You' Is and Isn't.
You mention a few; FWIW, some additional things that occasionally increase my empathy toward those whom I consider of lower abstract intelligence: