Sometimes, you will say a thing!

You: "Proposition A."

A seems to you to be obviously true. It accords with your experience, doesn't violate any of the rules and laws of reality, makes sense with the rest of your model of the world. You'd like to move on from A to the actual more interesting conversation you want to have; A is just some necessary groundwork.

Sometimes, someone else will be DEEPLY SKEPTICAL of A. After all, different people have very different experiences of the world! Bubbles exist, and cultures are not universal.

Or perhaps it's not that they're deeply skeptical of A so much as that their brain reflexively transformed A into a B of which they are deeply skeptical (and they don't even notice that their B is not the same as your A).

Them: "What? How do you know that's true? Can you cite several specific examples? The best example I could come up with was [X], and since [X] is clearly ridiculous, your point is invalid!"

And sometimes, you sit there, absolutely confident that you could, in fact, spend 20 hours and 10,000 words to clear up all of the misunderstandings, and lay out all of the arguments in painstakingly slow detail.

But you don't want to do that! You weren't trying to convince Every Rando of the truth of A, and you don't much care if This Rando doesn't get it, or runs off into the woods with their strawman.

But unfortunately, their misinterpretations can anchor others and skew the conversation, and a dangling unanswered "Cite specific examples?" comment accrues upvotes pretty quickly, and generates oft-undeserved skepticism through sheer representativeness. Surely if you had specific examples, you'd give them! Since you didn't give them, you must not have them!

(This, of course, ignores the fact that engagement is costly and effortful. Laying out thoughts takes time. Painstakingly correcting subtle misunderstandings is the work of hours or days or even weeks, involving a lot of getting into the weeds.)

And you didn't want to get into the weeds. You just wanted to make A explicit, so you could move on to the actually interesting conversation that takes place one level higher. You can see the ways that they subtly went wrong, and could, if you wanted, gently deconfuse them, one thought at a time, but you don't want to have to do that, as a prerequisite for having any interesting conversations at all.

It's one thing if you're feeling generous, and charitable, and are willing to donate your time and effort to laboriously untangle someone else's thoughts; it's another thing entirely if you must satisfy every sealion, out of your own spoon supply.

Questions and strawmen are cheap!

Introducing: Pay Me To Make You Less Wrong.

Users signed up for PMTMYLW have a toggle in their replies; switching that toggle to the "on" position adds the following automatic message to any comment subthread:

This user is now responding to you at a rate of $0.20*K/word, where K is the value of their strong upvote. If you do not consent to pay $0.20*K/word, you may instead retract the comment you have socially coerced them to reply to.

Essentially, PMTMYLW repairs the effort asymmetry, in which it is much easier to flood the airwaves with asymmetric demands for rigor, or misleading strawmen, than it is to meet those demands or refute those strawmen. It ensures that bogging the conversation down is no longer costless for a given user, and it rewards those who unbog it.

Now, users who take the time to effortfully deconfuse those around them will be financially compensated for their contributions, at market-rate-times-their-karma-as-a-LW-user, and the costs of soul-sucking, motivation-draining nitpickery will be shifted back onto the shoulders of those who created them.
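For the arithmetically inclined, here is a minimal sketch of the fee calculation described above. The function name and example numbers are invented for illustration (this is not, alas, a real API); only the $0.20-per-word base rate and the strong-upvote multiplier K come from the message itself.

```python
def pmtmylw_fee(word_count: int, strong_upvote_value: int, base_rate: float = 0.20) -> float:
    """Fee for a reply: $0.20 * K per word, where K is the value of the
    responder's strong upvote (their karma-derived vote weight)."""
    return base_rate * strong_upvote_value * word_count

# A 500-word deconfusion effort from a user whose strong upvote is worth 7:
# 0.20 * 7 * 500 = $700.00
print(f"${pmtmylw_fee(500, 7):,.2f}")  # -> $700.00
```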

Happy April Fool's Day!

11 comments

If I (unironically) were willing to offer to compensate someone for the time they spend answering my question, is there any convenient way to do it?

DM them and offer to pay them via PayPal? That's how LW rewards contest winners, just on a smaller scale.

Ideally you answer the question once and then link newcomers to the response you wrote the last time.

Would it be worth my time to try to write a definitive-for-now "short version" of "why unaligned AGI will destroy the world" for normies/newbies with links for further reading? I think I could do a decent job. Or is there already a decent FAQ somewhere on a wiki?

I think How To Write Quickly While Maintaining Epistemic Rigor helps with this problem. Basically, rather than creating the perfect example or searching for perfect proof, one describes the process by which one came to the conclusion. Maybe one's conclusion isn't nuanced enough, but at least one gets a case out there and then the border between that case and other cases can be established over time.

If someone disagrees strongly, you can then say that maybe they have some other case in mind where the conclusion doesn't apply, and kick it back to them to describe the case. If the case they describe sounds reasonable, then you can go "I guess you may be right; it may be worth thinking about your point in some cases too", while if they don't have any good countercase there's not much reason to pay attention to them.

I guess I should add:

A deeper dynamic that I think sometimes plays a role is Aumann's agreement theorem. If person X says A and person Y says not-A, then "clearly" only one of them can be right, and so the fact that there's a persistent disagreement suggests that there's something wrong with one of them. This may be upsetting to X and Y (especially the more-identifiable individual of the pair, e.g. the more public person or similar) because it reduces their reputation.
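(For reference, the theorem itself, stated roughly from the standard formulation, with the assumptions that matter spelled out:)

```latex
% Aumann (1976), roughly: two agents share a common prior P and have
% information partitions I_1, I_2. If their posteriors for an event E are
% common knowledge, those posteriors are equal -- they cannot "agree to disagree".
\[
  \text{common knowledge of } P(E \mid \mathcal{I}_1) = q_1
  \ \text{and}\ P(E \mid \mathcal{I}_2) = q_2
  \;\Longrightarrow\; q_1 = q_2 .
\]
```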

Superficially, the solution may often just be that A holds in some cases and not-A holds in other cases, which can be used to resolve the conflict by having each person mention the case that they are applying it in, and then mutually agreeing that A holds in X's case while not-A holds in Y's case. Aumann agreement achieved!

However, the fact that there are people who have Strong Opinions suggests that their opinions are part of some conflict or something, as otherwise they don't necessarily have much reason to care what others think. And that means that if one of the cases is mentioned, then it brings up that conflict.

And that's not necessarily so straightforward, because if Y has a case where not-A holds, then X might have trouble acknowledging that not-A might hold in Y's conflict, because that would mean taking a side in Y's conflict, and therefore implicitly taking a side against the person Z who Y is in conflict with.

The straightforward answer is just for person X to say that they don't know anything about Z's side of the story in the conflict. In principle that should be fine as a solution (though in more complex scenarios there might be some more complicated things going on, e.g. information cascades).

I believe that Aumann agreement doesn't apply to humans because, among other things, we do not have common priors.

It seems to apply strongly enough that OP is dissatisfied with dynamics like:

But unfortunately, their misinterpretations can anchor others and skew the conversation, and a dangling unanswered "Cite specific examples?" comment accrues upvotes pretty quickly, and generates oft-undeserved skepticism through sheer representativeness.

... where a person has a huge effect on people's beliefs just by saying a few things.

That's ... not [really/quite] about Aumann.

(I often get frustrated at the "pop culture" understanding of Aumann, which is about as wrong as the pop culture understanding of Dunning-Kruger or the pop culture understanding of Freud. I agree the above is about the pop culture understanding of Aumann.)

The way I interpret it as being about Aumann:

By default, people would Aumann-agree towards the original post. However, if someone raises doubt, they may Aumann-agree that doubts are plausible, which un-updates them from the original post.

I feel like I would live on the internet if I had a successful version of the PMTMYLW business model, haha. 

On a more serious note, one of the most important arts of epistemically valuable writing is finding a way to communicate your meaning densely without leaving any room for misinterpretation. Propositions which aren't obvious to everyone, and which some interpret as superweapons or some other kind of false but locally advantageous belief infrastructure, will naturally attract criticism.

This is an inherently difficult task, but not writing spaghetti-code posts in the first place can prevent a lot of debugging and vulnerability patching later, as people attack one's posts for both good-faith and bad-faith reasons.

The incentive problem here is that spaghetti-code posts with vulnerabilities drive disagreement, disagreement is a form of engagement, and so social media ends up incentivizing bad writing as a way of gaining an audience and shaping the Overton window of discourse.