As I see it, the central path to 50 trillion supergeniuses thinking at 10 000x speed in the asteroid belt passes through a scenario of comparable difficulty to 50 million geniuses at 10x speed in datacentres.
It's a relevant scenario because we're almost certainly going to face it soon, and it involves AI capable enough that it's highly believable that we would lose control (and/or our lives). The millions of instances and the moderate speed advantage over humans aren't arbitrary numbers either; they're roughly the ratios we get from existing plans.
If we can't deal with the country-level scenario, it makes no difference whether or not we can deal with trillions of offworld supergeniuses. If AI genius countries do actually cooperate with us, that's a huge step toward being able to consider whether we - and they - could or should allow the latter scenario.
The country-level scenario could be bypassed by a recursively self-improving singleton that does find some weird tricks, but many of the solutions to the sorts of problems we would face with fast geniuses in datacentres should also help prevent runaway RSI singletons.
Would it be useful to consider how much of current progress has been due to algorithmic improvements rather than compute capacity? It seems that the trend so far has been that a significant proportion of capability improvement has been software-based, though whether that can continue for many more orders of magnitude is certainly debatable.
I think you're making a major false generalization from Newcomb's problem, which is not acausal. Information flows from Omega to your future directly, and you know by definition of the scenario that Omega can perfectly model you in particular.
In acausal reasoning there are no such information flows.
From later paragraphs it appears that you are not actually talking about an acausal scenario at all, and should not use the term "acausal" for this. A future superintelligence in the same universe is causally linked to you.
There are utility arguments around being a highly productive individual forever, regardless of whether you want to attach the label "emotional appeal" to them or not. Or even around being a not very highly productive individual forever.
I haven't seen anyone arguing that users giving generous permissions to Claude Code are going to doom humanity.
It can and probably will mean that whatever system they're giving permissions on is going to be trashed. They should also expect that whatever information is within that system will be misused at some point, including being made available to the Internet in general and to every bad actor in it in particular.
It's reasonable to run it on a system that you don't care about, and to control exactly what personal information you put into that system regardless of any permission settings. I don't just mean a VM within a system that you depend upon either, because high-level coding agents are already nearly as good as human security experts at finding exploits, and that will only get worse.
"because by definition you have no actual information about what entities might be engaging in acausal trade with things somewhat vaguely like you." Please can you elaborate? Which definition are you using?
Acausal means that no information can pass in either direction.
"you and it are utterly insignificant specks in each other's hypothesis spaces, and even entertaining the notion is privileging the hypothesis to such a ridiculous degree that it makes practically every other case of privileging the hypothesis in history look like a sure and safe foundation for reasoning by comparison." Why? I would really rather not believe this particular hypothesis!
That part isn't a hypothesis; it follows directly from the premise. Acausality means that the simulation-god you're thinking of can't know anything about you. They have only their own prior over all possible thinking beings that can consider acausal trade. Why do you have some expectation that you occupy more than the most utterly insignificant speck within the space of all possible such beings? You do not even occupy 10^-100 of that space, and more likely less than 10^-10^20 of it.
Are you envisaging a system with 100m tracking resolution that aims to make satellites miss by exactly 10m if they appear to be on a collision course? Sure, some of those maneuvers will cause collisions, which is why you make them all miss by 100m (or more, as a safety margin) instead. This ensures, as a side effect, that they also avoid coming within 10m of each other.
"It is pointless to try to avoid two satellites coming within 10 metres of each other, if your tracking process cannot measure their positions better than to 100 metres (the green trace in my figure)."
This seems straightforwardly false. If you can keep them from approaching within 100 metres of each other, then that necessarily also keeps them from approaching within 10 metres of each other.
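To make this concrete, here's a minimal Monte Carlo sketch in Python. All the specific numbers in it (encounters distributed uniformly over 0-1000 m, a 500 m screening trigger, a 1 km commanded separation after a manoeuvre) are placeholders I've made up for illustration, not anything taken from your figure; only the 100 m tracking error comes from your scenario. The point is simply that a screening rule with a generous margin keeps true closest approaches far above 10 m even though no individual measurement is better than ~100 m.

```python
# Toy 1-D Monte Carlo: can a screening rule based on noisy (sigma = 100 m)
# tracking keep true closest approaches above 10 m?
import random

SIGMA = 100.0           # 1-sigma tracking error, metres
TRIGGER = 500.0         # manoeuvre whenever the *measured* miss distance is below this
POST_MANEUVER = 1000.0  # nominal separation commanded by the manoeuvre

def closest_approach(screen: bool) -> float:
    true_miss = random.uniform(0.0, 1000.0)          # true miss distance of this encounter
    measured = true_miss + random.gauss(0.0, SIGMA)  # what the tracking system reports
    if screen and measured < TRIGGER:
        # Manoeuvre to a commanded 1 km separation; the achieved separation
        # still carries the same ~100 m uncertainty.
        true_miss = POST_MANEUVER + random.gauss(0.0, SIGMA)
    return true_miss

N = 1_000_000
unscreened = sum(closest_approach(False) < 10.0 for _ in range(N)) / N
screened = sum(closest_approach(True) < 10.0 for _ in range(N)) / N
print(f"P(true miss < 10 m), no screening:   {unscreened:.2e}")
print(f"P(true miss < 10 m), with screening: {screened:.2e}")
```

With those placeholder numbers the unscreened rate comes out around 1%, while the screened rate is effectively zero: a sub-10 m approach would need both a genuinely close encounter and a ~5-sigma tracking error on the same event.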
If you are inclined to acausally trade (or extort) with anything, then you need to acausally trade across the entire hypothesis space of literally everything that you are capable of conceiving of, because by definition you have no actual information about what entities might be engaging in acausal trade with things somewhat vaguely like you.
If you do a fairly simple expected-value calculation of the gains from trade here, even with modest numbers like 10^100 for the size of the hypothesis spaces on both sides (more realistic values are more like 10^10^20), you get results that are so close to zero that even spending one attojoule of thought on it has already lost you more than you can possibly gain in expected value.
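In case it's useful, here is the shape of that calculation done in log10 so nothing underflows. The 10^100 figures are the ones above; the 10^120 "value of a galaxy's worth of computation if the trade somehow comes off" and the Landauer figure for what an attojoule of thought forgoes are my own deliberately generous placeholders.

```python
from math import log10

# Charitable sizes of the two hypothesis spaces (the "modest" 10^100 figure).
log_my_space    = 100   # log10 of the space of counterparties I can conceive of
log_their_space = 100   # log10 of the space of beings the counterparty can conceive of

# Generous upper bound on the payoff if the trade somehow comes off:
# every elementary operation a galaxy's resources could ever perform,
# call it 10^120 (a placeholder assumption).
log_gain_if_trade = 120

# Expected gain: payoff times the weight each side assigns the other
# within its own hypothesis space.
log_expected_gain = log_gain_if_trade - log_my_space - log_their_space
print(f"expected gain ~ 10^{log_expected_gain} operations")  # ~10^-80

# Opportunity cost of one attojoule of thought, in the same units:
# Landauer's bound at 300 K is ~3e-21 J per bit erased, so 1e-18 J is a
# few hundred elementary operations' worth of computation forgone.
log_cost = log10(1e-18 / 3e-21)
print(f"opportunity cost ~ 10^{log_cost:.1f} operations")    # ~10^2.5
```

However generously you inflate the payoff, the 10^-200 factor from the two hypothesis spaces swamps it, and that's before using the more realistic 10^10^20 sizes.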
Thought experiments like "imagine that there's a paperclip maximizer that perfectly simulates you" are worthless, because both you and it are utterly insignificant specks in each other's hypothesis spaces, and even entertaining the notion is privileging the hypothesis to such a ridiculous degree that it makes practically every other case of privileging the hypothesis in history look like a sure and safe foundation for reasoning by comparison.
Yes, biological Boltzmann brains have an insanely high mass and complexity penalty. If you have Boltzmann brains at all, biological ones can be ignored.
It's vastly more likely that you get non-biological brains that persist for a long time, and perhaps some of those think that they are biological entities.
Once you take that a bit further, you get the idea that the penalty is on the computational complexity of supporting the subjective experience, rather than on the complexity of the subjective experience itself. It's close to the simulation hypothesis, but with simulations carried out on random computers instead of designed ones.
However, we have no idea what sort of fundamental physical processes are necessary for subjective experience, nor how likely they would be in any sort of universe that supports Boltzmann brains at all. Ours obviously does not, and only wild extrapolation over ridiculously larger physical scales weakly suggests that maybe something like our universe might eventually do so.
To take the "ubiquitous Boltzmann brain" hypothesis seriously, you have to assume that we (or at least you, the reader, since I may not actually exist) are not actually experiencing true physical laws. So that leaves nothing except trying to extrapolate over a distribution of all possible physical laws - a Tegmark IV multiverse - supporting all possible simulations, weighted by whatever mathematical rules are most likely to support subjective minds with your subjective experiences.
However, this is pretty quickly self-defeating, since the least complex system of physical rules that supports subjective experiences like yours is one in which those experiences are real, without the added complexity penalty of some completely different physical substrate running a computation that simulates a universe which merely emulates what you remember experiencing.
So I don't take the "ubiquitous Boltzmann brain" hypothesis seriously at all.