A lot depends on the specifics of the scenario (for both AI and human-upload cases). I don't know anyone who thinks that there's anything important (for survival) that humans do which can't theoretically be done by an electro-mechanical device.
So in theory, an upload/AGI would be about as self-sustaining as biological entities (which is to say: rather fragile, and lacking enough track record at scales that stress the ecosystem to know whether we really are).
Presumably, the robots are a little more rational than humans in terms of how they maintain and replenish their resources, and how they ration themselves (aka: each other) to stay within bounds of current sustainability. So, even more unknown, but plausibly more sustainable than biological humans.
Wow. We have extremely different beliefs on this front. IMO, almost nothing is retroactively funded, and even high-status prizes are a tiny fraction of overall funding.
> Any workplace will tell you that of course they want to reward good work even after the fact,
No workplace that I know of DOES retroactively reward good work if the employee is no longer employed there. Most of the rhetoric about it is just signaling, in pursuit of retention and future productivity.
> people will have the wrong decision-theory in the future sounds to me about as mistaken as saying that lots of people will disbelieve the theory of evolution in the future
It seems likely that they'll accept evolution, and still not feel constrained by it, and pursue more directed and efficient anti-entropy measures. It also seems extremely likely that they'll use a decision theory that actually works to place them in the universe they want, which may or may not include compassion for imaginary or past or otherwise causally-unreachable things.
(edit to clarify the decision-theory comment)
> saying that people will have the wrong decision-theory in the future sounds to me about as mistaken as saying that lots of people will disbelieve the theory of evolution in the future.
It's not clear that any likely decision theory, let alone a non-wrong one, requires fully-acausal beliefs or actions. Many of them do include a more complete causality diagram than CDT does, and many acknowledge that the point of decision is often quite different from the apparent one. But they're all basically consequentialist, in that they hold that actions can influence future states of the universe.
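To make that concrete, here's a toy Newcomb's-problem calculation (the payoffs are the standard ones from the thought experiment; the 0.99 predictor accuracy is an assumed illustrative number):

```python
# Toy Newcomb's problem: a predictor puts $1M in an opaque box iff it
# predicts you'll take only that box; a transparent box always holds $1k.
# The 0.99 predictor accuracy is an assumed illustrative number.

ACCURACY = 0.99

def ev_one_box():
    # The predictor almost certainly foresaw one-boxing, so the opaque
    # box almost certainly contains the $1M.
    return ACCURACY * 1_000_000

def ev_two_box():
    # The predictor almost certainly foresaw two-boxing, so the opaque
    # box is almost certainly empty; the $1k is guaranteed either way.
    return (1 - ACCURACY) * 1_000_000 + 1_000

# CDT treats the box contents as fixed at decision time and two-boxes;
# theories with a more complete causal diagram (the prediction depends on
# your decision procedure) one-box. Both are judged by future consequences.
print(f"one-box expected value: ${ev_one_box():,.0f}")  # $990,000
print(f"two-box expected value: ${ev_two_box():,.0f}")  # $11,000
```

Either way, the evaluation is over future states of the universe; the disagreement is only about which causal diagram to use.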
There are a lot of fully-unknown possibilities. For me, I generalize most "today's fundamentals don't apply" scenarios into "my current actions won't have predictable/optimizable impact beyond the discontinuity", so I don't think about specifics or optimization within them - I do think a bit about how to make them less likely or more tolerable, but I can't really quantify that.
Which leaves the cases where the fundamentals DO apply, and that means ONLY short- and medium-term optimizations. Nothing I plan for 100 years out is going to happen as planned, so I want to take care of myself and my family over the coming decades in the cases where things don't collapse or go too weird. Current income > expenses, plus a reasonable investment strategy (10- and 30-year target-date funds, unless you know better, which you don't), cover the most probability-weight for the next few decades, contingent on current systems continuing that long.
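A minimal sketch of that probability-weighting, with entirely made-up numbers (the scenario probabilities and scores below are assumptions for illustration, not claims):

```python
# All probabilities and scores below are made-up assumptions, purely to
# illustrate the structure: plans only earn credit in scenarios where
# today's fundamentals still apply; post-discontinuity impact is ~zero.

scenarios = {
    "current systems continue": 0.7,  # assumed probability
    "slow decline":             0.2,
    "discontinuity":            0.1,  # actions have no predictable impact
}

# Arbitrary 0-10 scores for how well each plan does per scenario.
plans = {
    "income > expenses + target-date funds": {
        "current systems continue": 9, "slow decline": 5, "discontinuity": 0,
    },
    "detailed 100-year plan": {
        "current systems continue": 2, "slow decline": 1, "discontinuity": 0,
    },
}

for name, scores in plans.items():
    ev = sum(p * scores[s] for s, p in scenarios.items())
    print(f"{name}: probability-weighted score {ev:.1f}")
```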
Your suggestion of social capital (not sure political capital is all that durable, but maybe?) is a very good one as well - having friends, especially friends in different situations (another country, perhaps), is extremely good. First, it's fun and rewarding immediately. Second, it's a source of illegible support if things go crazy, but not extinction-crazy.
Hmm. I'm not sure what you're disgusted by. If the complaint is that humans (all/most?) aren't pure, and have motives that seem gross and base, you're probably right, but also probably not well-served by being disgusted - finding beauty in imperfection or just curiosity about what makes us tick might be better. That said, I'm not one to yuck your ... yuck.
If the complaint is that utility theory is itself gross because it allows such motives, I don't agree. It's still a useful model; it's just that reality clashes with your preferences.
Why are you adding the word "primitive" to your descriptions? Utility maximization should encompass ALL desires and drives, including the beautiful and holy. You can even include altruism - if you seek others' satisfaction (or at least expressions thereof - you don't have direct access to their experiences), that's perfectly valid.
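A minimal sketch of what that means formally; the terms and weights are arbitrary assumptions, chosen only to show that aesthetic and altruistic terms fit into a utility function as easily as any other:

```python
# A utility function is just a scoring of outcomes; nothing limits it to
# "primitive" drives. All terms and weights here are arbitrary assumptions.

def utility(own_comfort: float,
            beauty_experienced: float,
            others_expressed_satisfaction: float,
            w_comfort: float = 1.0,
            w_beauty: float = 2.0,
            w_altruism: float = 3.0) -> float:
    # The altruism term uses expressed satisfaction as a proxy, since you
    # don't have direct access to others' experiences.
    return (w_comfort * own_comfort
            + w_beauty * beauty_experienced
            + w_altruism * others_expressed_satisfaction)

# An agent weighted this way trades its own comfort for others' visible
# well-being; that's still plain utility maximization.
print(utility(own_comfort=1.0,
              beauty_experienced=0.5,
              others_expressed_satisfaction=0.8))  # 4.4
```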
Nice that some newer devices make it even easier (fall detection on smartwatches, for instance). Remember, it's actually pretty easy, though, if you just put it to music: https://www.youtube.com/watch?v=HWc3WY3fuZU
I'd expect that most specifics about those topics, and their relative priority to other things you're currently seeking and making tradeoffs against, have changed and will change significantly.
The amount of generalization that would make a value unchanging also makes it useless for prediction or decision-making.
Having worked for a long time on large-scale non-safety-critical software (think massive enterprise and infrastructure-support systems at large cloud providers), one of the biggest lessons I've learned is the shape of the cost-to-reliability curve.
After about 3 9s, each increment of an -ity (availability, data durability, security, etc.) costs far more than the improvement is worth, and the cost curve is already exponential. This cost is not just financial: it's a cost in features (don't add anything that isn't simple enough to prove correct), in agility (you can't add things quickly; everything requires more specification and implementation proof than you think), and in operations (you have to watch the system more closely, react to non-harmful anomalies, etc.).
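To make the shape concrete: the downtime-per-year numbers below are straight arithmetic, while the ~10x-per-extra-9 cost multiplier is an assumption to illustrate the curve, not a measured figure.

```python
# Downtime per year at each count of 9s is straight arithmetic; the
# ~10x-per-extra-9 cost multiplier is an assumption to show the curve's
# shape, not a measured figure.

MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(2, 6):
    unavailability = 10 ** -nines             # e.g. 3 nines -> 0.001
    downtime_min = MINUTES_PER_YEAR * unavailability
    relative_cost = 10 ** (nines - 2)         # assumed 10x per extra 9
    print(f"{'9' * nines:>5}: {downtime_min:8.1f} min/yr allowed downtime, "
          f"~{relative_cost}x relative cost")
```

Going from 3 9s (~526 min/yr of downtime) to 5 9s (~5 min/yr) buys you about 8.5 hours a year, at (on this assumption) roughly 100x the cost.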
I suspect Moloch will prevent any serious slowdown-for-safety effort: anyone truly serious about being safe will get outcompeted and made irrelevant. On that analogy, once the knowledge to create the bomb existed, it was inevitable that SOMEONE would risk igniting the atmosphere, so it probably should be us, now, rather than delaying 5-10 years so it can be Russia (or now, China).
Hmm. I wonder what it'd take to create a no-UI, API-only, read-only mirror of LW data. For most uses, a few minutes' delay would cause no harm, and it could be scaled for this use independently of the rest of the site. If the load were significant, it could be subscription-only: require auth and rate-limit based on a monthly fee (small, one hopes, just enough to pay for the storage, bandwidth, and API compute).
It would need a first-sync (and resync/anti-entropy) mechanism, but could then just poll allRecentComments to stay mostly up-to-date, turning this into a single caller to the LW systems rather than many.
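A rough sketch of the polling half, to gauge the effort. The GraphQL endpoint exists, but the query shape and field names here are guesses that would need checking against the live schema:

```python
# The GraphQL endpoint is real, but the query shape, view parameters, and
# field names below are guesses for illustration; check the live schema.

import time
import requests  # third-party: pip install requests

GRAPHQL_URL = "https://www.lesswrong.com/graphql"
POLL_SECONDS = 300  # a few minutes of staleness is fine for most uses

QUERY = """
query RecentComments($limit: Int) {
  comments(input: {terms: {view: "allRecentComments", limit: $limit}}) {
    results { _id postId postedAt htmlBody }
  }
}
"""

def poll_once(store: dict) -> None:
    resp = requests.post(GRAPHQL_URL,
                         json={"query": QUERY, "variables": {"limit": 50}})
    resp.raise_for_status()
    for c in resp.json()["data"]["comments"]["results"]:
        store[c["_id"]] = c  # upsert; a real mirror would use a database

def run_mirror() -> None:
    store: dict = {}  # plus a first-sync and periodic anti-entropy resync
    while True:
        poll_once(store)
        time.sleep(POLL_SECONDS)
```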
I often go the other way when discussing this topic - humans are as natural as anything else. Parking lots are natural things, arranged by natural animals (humans). Butylated hydroxytoluene is absolutely natural - there's no way to make the underlying atoms without nature, and the arrangement of them follows every natural law.
Everything real is natural - nature is simply "what is".
Of course, I like this because I recognize it's a discussion about words, with arbitrary meanings that each of us gets to use however we want, and I enjoy pointing that out more than I enjoy trying to get people to conform to my preferred definitions.