Dagon

Just this guy, you know?

Comments

Dagon · 17h · 40

These are probably useful categories in many cases, but I really don't like the labels.  "Garbage" is mildly annoying, as it implies there's no useful signal, rather than just difficult-to-identify signal.  It also puts the attribute on the wrong thing: it's not garbage data, it's data that's useful for purposes other than the one at hand.  "Verbose" or "unfiltered" data, or just "irrelevant" data, might be better.

"Blessed" and "cursed" are much worse as descriptors.  In most cases there's nobody doing the blessing or cursing, and it focuses the mind on the perception/sanctity of the data, not the use of it.  "How do I bless this data?" is a question that shows a misunderstanding of what is needed.  I'd call this "useful" or "relevant" data, and "misleading" or "wrongly-applied" data.

To repeat, though, the categories are useful - actively thinking about what you know, and what you could know, about data in a dataset, and how you could extract value for understanding the system, is a VERY important skill and habit.

Dagon · 18h · 20

I've seen links to that video before (even before your previous post today).  Is there a text or short argument that justifies "Non-naive cooperation is provably optimal between rational decision makers" ALONG WITH "All or any humans are rational enough for this to apply"? 

I'm not sure who the "we" is in your thesis.  If something requires full agreement and goodwill, it cannot happen, as there will always be bad actors and incompatibly-aligned agents.  

Dagon · 21h · 42

What does "stronger" mean in this context?  In casual conversation, it often means "able to threaten or demand concessions".  In game theory, it often means "able to see further ahead or predict others' behavior better".  Either definition implies that weaker agents have less bargaining power, and will get fewer resources than stronger ones, whether the interaction is framed as "cooperative" or "adversarial".
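To make that concrete, here's a minimal sketch (Python; the Nash bargaining framing and all numbers are my own, purely illustrative) of how a better outside option translates into a larger share of a split resource:

```python
# Toy Nash bargaining: split one unit of resource between two agents.
# Each agent's "strength" is its disagreement payoff d_i: what it gets
# anyway if bargaining breaks down (its credible threat point).
# The Nash solution maximizes the product of gains over disagreement.

def nash_split(d1, d2, steps=10_000):
    """Grid-search the share x for agent 1 maximizing (u1-d1)*(u2-d2)."""
    best_x, best_val = None, float("-inf")
    for i in range(steps + 1):
        x = i / steps           # share going to agent 1
        u1, u2 = x, 1 - x       # utility linear in resources, for simplicity
        if u1 < d1 or u2 < d2:  # a deal must beat each agent's threat point
            continue
        val = (u1 - d1) * (u2 - d2)
        if val > best_val:
            best_val, best_x = val, x
    return best_x

print(nash_split(d1=0.0, d2=0.0))  # equal strength -> 0.5 (even split)
print(nash_split(d1=0.6, d2=0.0))  # agent 1 can guarantee itself 0.6 -> 0.8
```

However the interaction is framed, the agent with the more credible walk-away position ends up with more of the resource, which is why I'm asking about enforcement below.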

In other words, what enforcement mechanisms do you see for contracts (causal OR acausal) between agents or groups of wildly differing power and incompatible preferences?

Relatedly, is there a minimum computational power for the stronger or the weaker agents to engage in this?  Would you say humans are trading with mosquitoes or buffalo in a reliable way?

Another way to frame my objection/misunderstanding is to ask: what keeps an alliance together?  An alliance by definition contains members who are not fully in agreement on all things (otherwise it's not an alliance, but a single individual, even if separable into units).  So, in the real universe of limited (in time and scope), shifting, and breakable alliances, how does this argument hold up?

Dagon · 1d · 20

What's the desired outcome of this debate?  Are you looking for cruxes (axioms or modeling choices that lead to the disagreement, separate from resolvable empirical measurements that you don't disagree on)?  Are you hoping to update your own beliefs, or to convince your partner (or readers) to update theirs?

"I do not necessarily endorse my comments in this piece."

That's likely to need some explanation of why it's valuable to put such comments on LessWrong.  It's fine to post non-endorsed views here, but they should be labeled with why they're worth mentioning.  Putting misleading or known-suspect arguments-as-soldiers on LW, especially mixed in with things you DO support, is a mistake.

Dagon · 1d · 20

I think that insisting on comparing unmeasurable and different things is an error.  If forced to do so, you can make up whatever numbers you like, and nobody can prove you wrong.  If you make up numbers that don't fully contradict common intuitions (which are based on much-smaller-range and much-more-complicated choices), you can probably convince yourself of almost anything.

Note that for smaller, more complicated, specific decisions, many choices seem inconsistent with this comparison: some people accept painful or risky surgery over chronic annoyances, some don't.  There are extremely common examples of people failing to mitigate pretty serious harm to distant strangers in favor of mild comfort for themselves and closer friends/family (as well as some examples of the reverse).  There are orders of magnitude of variance here, enough to overwhelm whatever calculation you think is universal.
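As a toy illustration of how made-up numbers and aggregation rules can support either conclusion, here's a sketch (Python; every quantity and both aggregation functions are invented, which is exactly the point) comparing one severe harm against a billion tiny ones:

```python
# Whether "many tiny harms outweigh one big harm" depends entirely on
# numbers and an aggregation function that nothing pins down empirically.

N_SMALL = 10**9   # invented: a billion people, each with a tiny harm
TINY    = 1e-6    # invented: per-person "disutility" of the tiny harm
BIG     = 1e3     # invented: "disutility" of one severe harm to one person

def linear_total(n, harm):
    # Simple summation: tiny harms "stack" without limit.
    return n * harm

def bounded_total(n, harm, cap=1.0):
    # An equally arbitrary rule where sub-threshold harms saturate
    # instead of stacking without limit.
    return cap * (1 - (1 - harm) ** n)

print(linear_total(N_SMALL, TINY), "vs", BIG)   # ~1000 vs 1000: a dead heat,
# so any tweak to TINY flips the ranking in either direction.
print(bounded_total(N_SMALL, TINY), "vs", BIG)  # ~1.0 vs 1000: big harm wins.
```

Neither rule is provably wrong, which is the problem.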

Dagon · 1d · 30

"it’s very possible that it could become a practical problem at some point in the future."

I kind of doubt it.  Practical problems will have complexity and details that overwhelm this simple model, making it near-irrelevant.  Alternatively, it may be worth trying to frame a practical decision that an individual or small group could make (so as not to have to abstract away crowd and public-choice issues) where this is important.

"Do you think a logarithmic scale makes more sense than a linear scale?"

Yes, but it probably doesn't fix the underlying problem that quantifications are unstable and highly variable across agents.

Dagon · 2d · 20

From their side.  Your explanation and arguments against that seem reasonable to me.

Dagon · 2d · 129

A number of us (probably a minority around here) don't think "stacking" or any simple, legible aggregation function is justified, not within an individual over time and certainly not across individuals.  There is a ton of nonlinearity and relativity in how we perceive and value changes in world-state.

Dagon · 2d · 20

I think this misses the fact that utility is always indirect: each rational agent has a function from world-state to utility, so there can never be a lottery that directly awards utility.  That means you can model the valuation over utility linearly, while the mapping from resources to utility has logarithmically declining marginal utility.
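A minimal sketch of that distinction (Python; the wealth figures are arbitrary): the agent is linear in utility by construction, but because utility is logarithmic in resources, a fair lottery over resources is worth less to it than the lottery's expected resource value:

```python
import math

# Utility is a function of world-state (here: wealth), not something a
# lottery can award directly. Model it as logarithmic in resources.
def utility(wealth):
    return math.log(wealth)

# A fair coin flip over resources: end with 50 or 150 units (arbitrary).
outcomes, probs = [50.0, 150.0], [0.5, 0.5]

expected_resources = sum(p * w for p, w in zip(probs, outcomes))        # 100.0
expected_utility   = sum(p * utility(w) for p, w in zip(probs, outcomes))
certainty_equivalent = math.exp(expected_utility)                       # ~86.6

print(expected_resources, certainty_equivalent)
# The agent maximizes expected utility (linear in utility), yet is
# risk-averse in resources: it would take ~86.6 for sure over the lottery.
```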

Dagon · 2d · 42

That seems an odd motte-and-bailey style explanation (and likely an odd belief; as you say, misgeneralized).

I will agree that humans can execute TINY arbitrary Turing calculations, slightly less tiny (but still very small) ones with some external storage, and quite a bit larger ones with external storage and computation.  At what point the brain stops being the thing doing the computation is perhaps an important crux in that claim, as is whether the ability to emulate a Turing machine at the conscious/intentional layer is the same as being Turing-complete in the meatware substrate.

And the bailey of "if we can expand storage and speed up computation, then it would be truly general" is kind of tautological, and kind of unjustified without figuring out HOW to expand storage and computation while remaining human.
