Almost all of the obvious interventions (exercise, at least basic medical and dental care, non-pathological eating, and reasonable limits on nootropic and recreational drugs or alcohol) have massive short- and medium-term benefits. One doesn't need to do much long-term math or have anywhere near calibrated estimates to understand the EV of these things.
Fully optimizing toward longevity and away from short-term enjoyment and ease may be correct under some estimates, but it's much less clear, and it's irrelevant for the vast majority of daily behaviors.
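As a back-of-the-envelope sketch of why the calibration doesn't matter much (every number below is made up purely for illustration, not a claim about real effect sizes):

```python
# Hypothetical annual benefits of e.g. regular exercise, in quality-of-life units.
short_term  = 0.05          # assumed: mood, energy, sleep
medium_term = 0.03          # assumed: fewer sick days, better mobility
long_term   = 0.5 * 0.04    # assumed: longevity gain, heavily discounted for uncertainty

near_term_ev = short_term + medium_term
print(f"near-term EV: {near_term_ev:.3f}/yr vs long-term EV: {long_term:.3f}/yr")
# 0.080/yr vs 0.020/yr: the decision is dominated by the near-term benefits,
# so even a badly miscalibrated long-term estimate doesn't flip the sign.
```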
My only surprise is that this is surprising in any way. I'd put the numbers at:
Fortunately, most democratic systems have time-based checkpoints where the threshold for change is a lot lower (people in the 2nd and 3rd groups can make change more easily during elections), and the activist numbers tend to be MUCH higher for policies that hugely impact that cadence.
Almost everything that's easy for the superrich and not for the middle class is due to human-resource scarcity (or "just cost", but cost that doesn't scale down because the underlying non-monetary costs don't), not just tech availability. Some of the big ones:
One topic I'm surprised has yet to be solved: discreet (usable on a crowded train without bothering others) hands-free inputs for phones. Throat mics have been promised forever, but have never worked well enough for most people. This isn't exactly a tech that the rich have today - they just use "real privacy" instead - they don't ride crowded trains. But it would give some of the productivity and convenience of private transport to the masses.
Thanks for the conversation; I'm bowing out here. I'll read further comments, but (probably) not respond. I suspect we have a crux somewhere around identification of actors, and mechanisms of bridging causal responsibility for acausal (imagined) events, but I think there's an inferential gap where you and I have divergent enough priors and models that we won't be able to agree on them.
Ok, break this down a bit for me - I'm just a simple biological entity, with much more limited predictive powers.
It's worth simulating a vast number of possible minds which might, in some information-adjacent regions of a 'mathematical universe', be likely to be in a position to create you
This is either well beyond my understanding, or it is sleight-of-hand regarding identity and the use of "you". It might help to label entities. Entity A has the ability to emulate and control entity B. It thinks that somehow its control over entity B is influential over entity C, in the distant past or in an imaginary mathematical construct, which it wishes would create entity D in that disconnected timeline.
Nope, I can't give this any causal weight in my decisions.
I have a very hard time even justifying 1/1000. 1/10B is closer to my best guess (plus or minus 2 orders of magnitude). It requires a series of very unlikely events:
1) enough of my brain-state is recorded that I COULD be resurrected
2) the imagined god finds it worthwhile to simulate me
3) the imagined god is angry at my specific actions (or lack thereof) enough to torture me rather than any other value it could get from the simulation.
4) the imagined god has a decision process that includes anger or some other non-goal-directed motivation for torturing someone who can no longer have any effect on the universe.
5) no other gods have better things to do with the resources, and stop the angry one from wasting time.
Note: even if you relax 1 and 2, so that the putative deity punishes RANDOM simulated people (because you're actually dead and gone) as a proxy for punishing YOU specifically, that still doesn't make it likely at all.
It's worth putting a number on that, and a different number (or possibly the same one; I personally think my chances of being resurrected and tortured vary by epsilon based on my own actions in life - if the gods will it, it will happen; if they don't, it won't) for each of the two main actions you're considering actually performing.
For me, that number is inestimably tiny. In anyone who thinks it's significant, I suspect fairly high neuroticism and an irrational failure to keep the sum of their probabilities bounded by 1.
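To make the conjunction concrete, here's a minimal sketch; the per-step probabilities are my own uncalibrated placeholders for the five events above, and the only point is that a chain of unlikely steps shrinks very fast:

```python
from math import prod

# Assumed, illustrative probabilities for each required step.
p = {
    "brain state recorded well enough to resurrect me":         1e-3,
    "the imagined god bothers to simulate me":                   1e-2,
    "it is angry enough at my specific actions to torture me":   1e-2,
    "its decision process includes spite toward the causally inert": 1e-2,
    "no other god stops it from wasting the resources":          1e-1,
}

total = prod(p.values())
print(f"joint probability: {total:.0e}")  # 1e-10, i.e. roughly 1/10B
```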
Within causal decision theory this is true, but if it were true in general then acausal decision theory would be pointless
Acausal decision theory is pointless, sure. Are there any? TDT and FDT are distinct from CDT, but they're not actually acausal, just more inclusive about the causes of decisions. CDT is problematic only because it doesn't acknowledge that the decisions being made themselves have causes and constraints.
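As a toy illustration of that point (the setup and numbers here are mine, not from the thread): Newcomb's problem is the standard case where treating the decision procedure itself as a common cause of both the prediction and the act changes the answer.

```python
# A predictor with accuracy ACC fills the opaque box with $1M iff it
# predicts one-boxing. CDT treats the box contents as already fixed.
ACC = 0.99  # assumed predictor accuracy

# If the predictor is modeling your decision procedure, expected payoffs are:
ev_one_box = ACC * 1_000_000
ev_two_box = (1 - ACC) * 1_000_000 + 1_000

print(f"one-box EV: ${ev_one_box:,.0f}, two-box EV: ${ev_two_box:,.0f}")
# CDT two-boxes anyway, because no causal arrow runs from the act back to
# the prediction - the arrow runs from the (earlier) decision procedure to both.
```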
First, a generalized argument about worrying: it's not helpful, and it's not an organized method of planning your actions or understanding the world(s). OODA (observe, orient, decide, act) is a better model. Worry may have a place in this, as a way to remember and reconsider factors which you've chosen not to act on yet, but it should be minor.
Second, an appeal to consequentialism - it's acausal, so none of your acts will change it. edit: The basilisk/mugging case is one-way causal - your actions matter, but the imagined blackmailer's actions cannot change your behavior. If you draw a causal graph, there is no influence/action arrow that leads them to follow through on the imagined threat.
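To sketch that graph (node names and edges are my own illustration of the standard basilisk setup, not anything from the original comment): a toy causal DAG plus a reachability check shows the influence only runs one way.

```python
from collections import deque

# Directed edges of a toy causal graph.
edges = {
    "your_decision_now":            ["your_behavior"],
    "your_behavior":                ["what_gets_recorded_about_you"],
    "what_gets_recorded_about_you": ["simulation_of_you"],
    "blackmailer_built_later":      ["simulation_of_you", "torture"],
}

def reaches(graph, src, dst):
    """Breadth-first search: is there a directed path from src to dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Your acts can influence what a future blackmailer has to work with...
print(reaches(edges, "your_decision_now", "simulation_of_you"))   # True
# ...but no arrow runs from the blackmailer back to your behavior.
print(reaches(edges, "blackmailer_built_later", "your_behavior"))  # False
```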
It's good to think about this and to try to put together scenarios that identify interesting potential levers, or at least signs that we might react to.
Unfortunately, this relies on some unstated assumptions about what "cooperation" actually means and how resource control works - what is the pivot from companies controlling resources (and growing based on investment/public support) to the AIs controlling them directly and growing based on ... something else?
At these scales, "ownership" becomes a funny concept. Current mechanisms for acquiring more compute, and for sabotaging or assisting your competitors, are unlikely to hold for more than a few more doublings.