Dagon

Just this guy, you know?

Answer by Dagon · Apr 19, 2024

Yes!  No!  What does "richer" actually mean to you?  For that matter, what does "we" mean to you?  The existing set of humans changes hour to hour as people are born, come of age, and die, and even within a given set there's extremely wide variance in what people have and in what's considered rich.

To the extent that GDP is your measure of a nation's richness, it's tautological that increasing GDP makes the nation richer.  The weaker claim, that GDP often correlates with (but doesn't necessarily cause) well-being in some averages and aggregates, is more defensible, but that weakness makes it unsuitable for answering your question.

I think my intuition is that GDP is the wrong tool for measuring how "rich" or "overall satisfied" people are, and a simple sum or average is probably the wrong aggregation function.  So I fall back on more personal and individual measures of "well-being".  For most people I know, and as far as I can tell for the majority of neurotypical people, this comes down to lack of worry about the near- and medium-term future, access to pleasurable experiences, and social acceptance within accessible sub-groups (family, friends, neighbors, online communities small enough to care about, etc.).

For that kind of "general current human want", a usable and cheap shared-but-excludable VR space seems to improve things for a lot of people, regardless of what happens to GDP.  In fact, if consumption of difficult-to-manufacture-and-deliver luxuries gets partially replaced by consumption of patterns of bits, that likely reduces GDP while increasing satisfaction.

There will always be a need for non-virtual goods and experiences - it's not currently possible to virtualize food's nutrition OR its pleasure, and the same holds for many other things.  That means a mixed economy for a long, long time.  I don't think anyone can tell you whether this makes those things cheaper or more expensive, relative to an hour spent working online or in the real world.

Dagon · 7h

Thanks for the conversation and exploration!  I have to admit that this doesn't match my observations and understanding of power and negotiation in the human agents I've been able to study, and I can't see why one would expect non-humans, even (perhaps especially) rational ones, to commit to alliances in this manner.

I can't tell if you're describing what you hope will happen, what you think automatically happens, or what you want readers to strive for, but I'm not convinced.  This will likely be my last comment for a while - feel free to rebut or respond; I'll read and consider it, but likely won't post.

Dagon · 1d

These are probably useful categories in many cases, but I really don't like the labels.  "Garbage" is mildly annoying, as it implies there's no useful signal rather than just difficult-to-identify signal.  It also puts the attribute on the wrong thing - it's not garbage data, it's data that's useful for purposes other than the one at hand.  "Verbose", "unfiltered", or just "irrelevant" data might be better.

"Blessed" and "cursed" are much worse as descriptors.  In most cases there's nobody doing the blessing or cursing, and the labels focus the mind on the perception/sanctity of the data rather than on the use of it.  "How do I bless this data?" is a question that shows a misunderstanding of what is needed.  I'd call this "useful" or "relevant" data, and "misleading" or "wrongly-applied" data.

To repeat, though, the categories are useful - actively thinking about what you know, and what you could know, about data in a dataset, and how you could extract value for understanding the system, is a VERY important skill and habit.
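
A minimal sketch of that relabeling (the names and the `triage` helper are illustrative, not an established taxonomy) - the key move is that the label attaches to the (data, intended use) pair, not to the data itself:

```python
from enum import Enum

class DataRelevance(Enum):
    """Illustrative labels for the three categories discussed above."""
    RELEVANT = "useful signal for the question at hand"
    IRRELEVANT = "real signal, but for some other purpose"
    MISLEADING = "looks applicable, but is wrong to use here"

def triage(record, is_on_topic, is_trustworthy_here):
    # The attribute belongs to the (data, intended use) pair,
    # not to the data in isolation.
    if not is_on_topic(record):
        return DataRelevance.IRRELEVANT
    if not is_trustworthy_here(record):
        return DataRelevance.MISLEADING
    return DataRelevance.RELEVANT
```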

Dagon · 1d

I've seen links to that video before (even before your previous post today).  Is there a text or short argument that justifies "Non-naive cooperation is provably optimal between rational decision makers" ALONG WITH "All or any humans are rational enough for this to apply"? 

I'm not sure who the "we" is in your thesis.  If something requires full agreement and goodwill, it cannot happen, as there will always be bad actors and incompatibly-aligned agents.  

Dagon · 1d

What does "stronger" mean in this context?  In casual conversation, it often means "able to threaten or demand concessions".  In game theory, it often means "able to see further ahead or predict other's behavior better".  Either of these definitions imply that weaker agents have less bargaining power, and will get fewer resources than stronger, whether it's framed as "cooperative" or "adversarial".

In other words, what enforcement mechanisms do you see for contracts (causal OR acausal) between agents or groups of wildly differing power and incompatible preferences?

Relatedly, is there a minimum computational power for the stronger or the weaker agents to engage in this?  Would you say humans are trading with mosquitoes or buffalo in a reliable way?

Another way to frame my objection/misunderstanding is to ask: what keeps an alliance together?  An alliance by definition contains members who are not fully in agreement on all things (otherwise it's not an alliance, but a single individual, even if separable into units).  So, in the real universe of limited (in time and scope), shifting, and breakable alliances, how does this argument hold up?
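
One standard partial answer from repeated games, sketched below: grim-trigger cooperation (a candidate "glue" for an alliance) holds only while every member values the future enough that continued cooperation beats a one-shot betrayal.  The payoff numbers are invented; the threshold is the usual folk-theorem condition.

```python
def alliance_holds(T, R, P, discount):
    """Grim trigger sustains mutual cooperation iff
    R / (1 - d) >= T + d * P / (1 - d), i.e. d >= (T - R) / (T - P).

    T: temptation payoff (betray a cooperator)
    R: reward payoff (mutual cooperation)
    P: punishment payoff (mutual defection)
    """
    return discount >= (T - R) / (T - P)

T, R, P = 5, 3, 1  # invented prisoner's-dilemma payoffs (T > R > P)
for d in (0.2, 0.5, 0.8):
    print(f"discount {d}: alliance holds -> {alliance_holds(T, R, P, d)}")
# Threshold is (5-3)/(5-1) = 0.5: shifting power or shortened horizons
# (lower d) are exactly what breaks the alliance.
```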

Dagon · 1d

What's the desired outcome of this debate?  Are you looking for cruxes (axioms or modeling choices that lead to the disagreement, separate from resolvable empirical measurements that you don't disagree on)?  Are you hoping to update your own beliefs, or to convince your partner (or readers) to update theirs?

> I do not necessarily endorse my comments in this piece.

That likely needs some explanation of why it's valuable to put such comments on LessWrong.  It's fine to post non-endorsed views here, but they should be labeled as to why they're worth mentioning.  Putting misleading or known-suspect arguments-as-soldiers on LW, especially mixed in with things you DO support, is a mistake.

Dagon · 1d

I think that insisting on comparing unmeasurable and different things is an error.  If forced to do so, you can make up whatever numbers you like, and nobody can prove you wrong.  If you make up numbers that don't fully contradict common intuitions based on much-smaller-range and much-more-complicated choices, you can probably convince yourself of almost anything.

Note that among smaller, more complicated, specific decisions, many seem inconsistent with this comparison: some people accept painful or risky surgery over chronic annoyances, some don't.  There are extremely common examples of failing to mitigate pretty serious harm to distant strangers in favor of mild comfort for oneself and closer friends/family (as well as some examples of the reverse).  There are orders of magnitude of variance here, enough to overwhelm whatever calculation you think is universal.
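
A minimal numeric illustration of that last point (every quantity below is invented): pick a different "exchange rate" between a severe harm and a mild annoyance - and real intuitions vary by orders of magnitude - and the aggregate verdict simply flips.

```python
severe_harms = 1          # one person's serious suffering
mild_annoyances = 10_000  # many people's minor discomfort

# "One severe harm is as bad as `rate` mild annoyances."
for rate in (100, 1_000, 1_000_000):
    severe_total = severe_harms * rate
    verdict = ("severe harm outweighs" if severe_total > mild_annoyances
               else "annoyances outweigh")
    print(f"rate 1:{rate:>9,} -> {verdict}")
# rate 1:      100 -> annoyances outweigh
# rate 1:    1,000 -> annoyances outweigh
# rate 1:1,000,000 -> severe harm outweighs
```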

Dagon · 2d

> it’s very possible that it could become a practical problem at some point in the future.

I kind of doubt it.  Practical problems will have complexity and details that overwhelm this simple model, making it near-irrelevant.  Alternatively, it may be worth trying to frame a practical decision where this matters - one that an individual or small group could make (so as not to have to abstract away crowd and public-choice issues).

> Do you think a logarithmic scale makes more sense than a linear scale?

Yes, but it probably doesn't fix the underlying problem that quantifications are unstable and highly variable across agents.
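
A quick sketch of both halves of that answer (log valuation here is an assumption for illustration, not a claim about any particular agent): a log scale captures the intuition that the same absolute change matters less at a higher baseline, but baselines vary wildly across agents, so the cross-agent numbers stay unstable.

```python
import math

# The same +100 change registers very differently depending on the
# agent's baseline, even though each agent is internally consistent.
for baseline in (10, 1_000, 100_000):
    delta = math.log(baseline + 100) - math.log(baseline)
    print(f"baseline {baseline:>7,}: +100 is worth {delta:.4f} log-units")
# baseline      10: +100 is worth 2.3979 log-units
# baseline   1,000: +100 is worth 0.0953 log-units
# baseline 100,000: +100 is worth 0.0010 log-units
```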

Dagon · 2d

From their side.  Your explanation and arguments against that seem reasonable to me.

Dagon · 2d

A number of us (probably a minority around here) don't think "stacking" or any simple, legible aggregation function is justified - not within an individual over time, and certainly not across individuals.  There is a ton of nonlinearity and relativity in how we perceive and value changes in world-state.
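
A toy sketch of why the choice of aggregation function is doing the real work (all utilities invented): equally simple, legible functions rank the same two world-states differently.

```python
import math

world_a = [10, 10, 10, 10]  # even distribution of (invented) utilities
world_b = [1, 1, 1, 50]     # larger total, very uneven

aggregators = {
    "sum": sum,                                          # total utilitarian
    "min": min,                                          # Rawlsian maximin
    "log-sum": lambda us: sum(math.log(u) for u in us),  # diminishing returns
}

for name, agg in aggregators.items():
    a, b = agg(world_a), agg(world_b)
    print(f"{name:>7}: A={a:.2f}  B={b:.2f}  -> prefers {'A' if a > b else 'B'}")
# sum prefers B (53 > 40); min and log-sum both prefer A.
```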
