Why are you adding the word "primitive" to your descriptions? Utility maximization should encompass ALL desires and drives, including the beautiful and holy. You can even include altruism - if you seek others' satisfaction (or at least expressions thereof - you don't have direct access to their experiences), that's perfectly valid.
Nice that some newer devices make it even easier (fall detection on smartwatches, for instance). Remember, it's actually pretty easy, though, if you just put it to music: https://www.youtube.com/watch?v=HWc3WY3fuZU
I'd expect that most specifics about those topics, and their relative priority to other things you're currently seeking and making tradeoffs against, have changed and will change significantly.
The amount of generalization that would make a value unchanging also makes it useless for prediction or decision-making.
Having worked on large-scale non-safety-critical systems (think massive enterprise and infrastructure-support systems at large cloud providers) for a long time, one of the biggest lessons is the shape of the cost-to-reliability curve.
After about 3 9s, each increment of an -ity (availability, data durability, security, etc.) costs far more than the improvement is worth, and those costs grow exponentially. The cost is not just financial: it's a cost in features (don't add anything that isn't simple enough to prove correct), in agility (you can't add things quickly; everything requires more specification and implementation proof than you think), and in operations (you have to watch the system more closely, react to non-harmful anomalies, etc.).
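To make the "nines" framing concrete, here's a quick back-of-envelope calculation (my own illustration, not from any source): each additional 9 divides the allowed downtime budget by 10, so the error budget you're paying ever more to protect shrinks exponentially.

```python
# Allowed downtime per year at each level of availability.
# Each extra "9" shrinks the budget by 10x, while (per the point
# above) the cost of reaching it grows far faster than that.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

def downtime_minutes_per_year(nines: int) -> float:
    """Minutes of downtime permitted per year at N nines of availability."""
    unavailability = 10 ** -nines
    return MINUTES_PER_YEAR * unavailability

for n in range(2, 6):
    print(f"{n} nines: {downtime_minutes_per_year(n):>8.2f} min/year allowed")
```

At 3 nines you get roughly 8.8 hours of downtime a year to play with; at 5 nines, about 5 minutes - which is why everything past that point gets so expensive to defend.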
I suspect Moloch will prevent any serious slowdown-for-safety desires. Anyone truly serious about being safe will get outcompeted and made irrelevant. By analogy: once the knowledge existed to create the bomb, it was inevitable that SOMEONE would risk igniting the atmosphere, so it probably should be us, now, rather than delaying 5-10 years so it can be Russia (or now, China).
Hmm. I wonder what it'd take to create a no-ui, API-only, read-only mirror of LW data. For most uses, a few minute delay would cause no harm, and it could be scaled for this use independently of the rest of the site. If significant, it could be subscription-only - require auth and rate-limited based on a monthly fee (small, one hopes, to pay for the storage, bandwidth, and api compute).
I would need a first-sync (and resync/anti-entropy) mechanism, but could just poll the allRecentComments to stay mostly up-to-date, and turn this into a single-caller to the LW systems, rather than multiple.
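The poll-and-upsert loop described above could look something like this. I don't know the real shape of the LW API, so `fetch_recent` (standing in for whatever wraps the allRecentComments resolver), the `_id`/`postedAt` field names, and the in-memory store are all assumptions - this only sketches the sync logic, not the actual endpoint.

```python
class MemoryStore:
    """Toy stand-in for the mirror's real database."""

    def __init__(self):
        self.comments = {}

    def latest_timestamp(self):
        """Newest postedAt seen so far, or None before the first sync."""
        return max((c["postedAt"] for c in self.comments.values()), default=None)

    def upsert(self, comment):
        # Keyed by id and idempotent, so resync/anti-entropy passes
        # can safely re-deliver comments we've already stored.
        self.comments[comment["_id"]] = comment


def sync_once(fetch_recent, store):
    """One polling pass: fetch comments newer than the last seen, persist them.

    Run this every few minutes; a short delay is fine for this use case.
    """
    since = store.latest_timestamp()  # None triggers a full first-sync
    count = 0
    for comment in fetch_recent(since=since):
        store.upsert(comment)
        count += 1
    return count
```

Because `upsert` is idempotent, the same pass doubles as the anti-entropy mechanism: a periodic wider re-fetch just re-delivers rows harmlessly. And since only this one process talks to LW, everything downstream reads from the mirror, collapsing many callers into a single one.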
"Some values don't change." Citation needed. I can't think of anything that could be reasonably classified as a "value" that is unchanging in humans. And I don't know of any other entities to which "values" can yet be applied.
"I'm a goal-seeking system." Even less clear. Actually, I don't know you, so maybe it's true for you. It's absolutely not true for me. I'm an illegible, variable, meaning-seeking (along with other, less socially-acceptable-to-admit -seeking) thing.
In real-world entities, the model of terminal values and goal-seeking is highly suspect.
No. If you replace "just" with "partially model-able as", then yes.
There are lots of things we could do, but don't. Generally, the risk/cost is non-zero, even if small, and the recognizable value (that which can be captured by, or benefits, the decision-maker) is less than that.
I'd probably pay a little bit to see this in the skies while I'm safely on the ground, and even to be in one after the first 10,000 have gone by. But I wouldn't pay enough to make up for the lawsuits and loss of revenue from people who don't like the idea.
reasons I downvoted:
Hmm. I'm not sure what you're disgusted by. If the complaint is that humans (all? most?) aren't pure, and have motives that seem gross and base, you're probably right, but also probably not well-served by being disgusted - finding beauty in imperfection, or just being curious about what makes us tick, might serve you better.
If the complaint is that utility theory is itself gross because it allows this, I don't agree. It's still a useful model; it's just that reality clashes with your preferences.