My mom didn’t buy it. “This is all sounding a bit crazy, Taylor,” she said to me. And she’s usually primed to believe whatever I say, because she knows I’m smart. The problem is that these stories are not believable. True, maybe, but not easy to believe. They fail the “mom test”. Only hyper-logical nerds can believe arguments that sound like sci-fi.
Maybe only hyper-logical nerds can believe arguments that sound like sci-fi, but your mom only has to believe you. The question is whether you are believable, or whether you're "starting to sound a bit crazy, Taylor".
That's her sign to you that you need to show that you can appreciate how crazy it sounds and still maintain your belief. Because it does sound a bit crazy. It's quite a leap from demonstrated reality, and most of the time, when people make such leaps, they're doing fiction or delusion rather than actually calling things right in advance. The track record of people saying crazy shit and then insisting "It's not crazy, I swear!" isn't good. If instead you meet her where she's at and admit "Yeah. I know. I wish it was", it hits differently.
I can't remember if I've talked to my mom about it, but if I had to, I'd probably say something like "You hear of the idea that AGI is going to be completely transformative, and will have the power to kill us all? Yeah, that's likely real", and she'd probably say something like "Oh." That's basically how it went when I told her the world was about to change due to the upcoming pandemic. I didn't "try to persuade her" by giving her arguments that she's supposed to buy, let alone spinning stories about how a bat had a virus and then these researchers genetically modified it to better attack humans. I just told her "Here's what I believe to be true", so that she could prepare. I was open about why I believed it, but the heavy lifting was done by the fact that I genuinely believed it, and I came off more like I was trying to share information so that she could prepare than like I was trying to convince her of anything.
In your shoes, besides making sure to acknowledge her point that it sounds crazy, I'd do a lot of genuine curiosity about her perspective. Has she ever experienced something that sounded crazy as fuck, and then turned out to be real? Not as a rhetorical question, just trying to understand where she's coming from. Is she aware of the massive impact drones are having in the war in Ukraine? Has she thought about what it felt like to be warned of the power of nuclear weapons before anyone had seen them demonstrated?
These aren't "rhetorical questions", asked as a disguised push for "Then you should stop being so confident!", but genuine inquiry. Maybe she has experienced something "crazy" turning out to be real, and noticing that will change her mind. Or maybe she hasn't. Or maybe it seems different to her, and learning in what way it seems different will be relevant for continuing towards resolving the disagreement. Giving people the space to share and examine their perspective without pressure is what allows them to have the experiences that shift views. Maybe she hasn't had the experience of running from a terminator drone, or being outsmarted at every turn, but you could give her that experience -- by pointing out the shared starting point and asking her to imagine where that goes.
She'd still have to take you up on that invitation, of course. If I'm wrong about being able to convince my own mom in a single line, it'd be for this reason. Maybe the idea would freak her out so much that she would be motivated to not understand. I don't think she would, but maybe. And if so, that's a very different kind of problem: the kind you deal with by making arguments which are "more believable".
The distinction between "positive punishment" and "negative punishment" is useful here, and I think a lot of the confusion around this topic comes from conflating the two -- both intentionally and otherwise.
If you hit me for no reason, "positive punishment" would be hitting you back in hopes that you stop hitting me. I have to actually want you to hurt, and it can easily spiral out of control if you hit me for hitting you for hitting me.
"Negative punishment" would be just not hanging out with people who hit me, because I don't like hanging out with people who hit me. I don't have to want you to hurt at all in order to do this, in the same way that I love my puppy and don't hold anything against her, but when she's jumping on me so much that I can't work I might have to lock her out of my room. Even if you get offended and decide to respond in kind with some negative punishment of your own, that just means you decide to stop hanging out with me too. Which obviously isn't a problem. And heck, by your (IMO appropriate) definition of "punishment" this isn't even punishment because it's not done in order to affect anyone's behavior. It's just choosing to abstain from negative value interactions.
We can't restrict "negative punishment" without restricting freedom of association and freedom of expression, and we also don't have to, because sharing truth and making good choices are good, and there's no threat of things spiraling out of control. It may hurt a lot to be locked out of all the fun spaces, and it may feel like punishment in the operant conditioning sense, but that doesn't mean there's any intent to punish, or that it is punishment in the sense that's relevant for this post.
What we have to be careful about is when people claim to be doing freedom of association/expression ("negative punishment") while actually intending positive punishment. This comes up a lot in the debates between "You're trying to stifle free speech!" and "Free speech doesn't mean freedom from consequences!"/"I'm just using my free speech to criticize yours!". If you're responding to obnoxious speech with speech like "I'm gonna stone you if you don't shut up", then you're obviously trying to conflate threats of violent positive punishment with "mere freedom of expression", but it gets much more subtle when you say "Ugh, I don't see how any decent person could listen to that guy". Because is that an expression of curiosity from someone who would love to fill in their ignorance with empathy and understanding? Someone who harbors no ill will, and just doesn't find that guy interesting? Or is it someone who actively dislikes the person speaking, and would like to see them change their behavior, and even hurt in order to do so?
This attempt to hurt people in order to change their behavior is positive punishment masquerading as negative punishment, and as such it has all the same problems as positive punishment. If I give you the silent treatment because you didn't say you liked my new shirt, and you give me the silent treatment back, then it can easily escalate into losing a friendship that, if we're honest, we both wanted. Because it was never actually "I don't find any value here, so I'm pulling back"; it was "I'm gonna pull back anyway, in hopes of hurting him enough to change his behavior".
People like Bob, Carol, and Dave are indeed at risk of confusing genuinely prosocial freedom of association and expression with positive punishment, because people like Alice are at risk of doing the latter while pleading the former.
However, they're also likely to recognize it as sincere if Alice looks more like she's doing the former than the latter. If they don't find out about what Mallory did until they ask Alice why she doesn't hang out with Mallory anymore, they're unlikely to see her answer as punishment, for example. Similarly, if she comes off more like "Careful with the puppy, she's friendly but sometimes too friendly!", that's technically communicating a bad thing, but it comes off very differently than if she were to get visibly upset and say "That dog is not well disciplined, it's not a good dog and you should know that".
It's not always clear whether a person is genuinely "just sharing information" or secretly trying to positively punish, but they are indeed distinct things, and having the distinction clear makes it easier to judge.
Okay, cool.
It would be better if the sequence succeeded in giving people a clear idea of how they could actually apply the concepts to their lives, and in getting them to then do that.
What would that look like, to you? In your last comment it sounded like you were saying "More emphasis on the concrete take-home lessons rather than burying them in footnotes", but in this comment it sounds like you're pointing more at the motivation aspect, which seems quite different -- and more in line with my focus. I definitely can't pass your ITT yet.
I'll share a little more about how I'm trying to do that, and maybe you can help me figure out how to do it better.
It comes back to our earlier discussion on expectation=intention=setpoint. Summarizing: you were pointing at the value of providing directions in helping people get from point A to point B, while I'm focused more on getting their intent set in the first place. I don't disagree about the importance of knowing how to get from A to B, but I find that, like you're saying this time, a lot of the time motivation is the limiting factor. If people aren't actually aiming at point B then they won't follow directions. If they are, then they're likely to ask for directions as needed. The opening example of the girl in the jacuzzi illustrates this well: my object-level advice wasn't anything her friends couldn't have told her, but the difference is that she asked for my input and dismissed theirs.
It's the same thing, on the meta level. Part of what I'm trying to do is motivate readers by demonstrating how solvable these things are and making more concrete and tangible that sense that more is possible (amusingly enough, the top comment chain there is about how nice it'd be to have akrasia solved), and part of what I'm trying to do is provide the compass and sextant needed to start navigating towards a solution. When you say "reading it motivated me to look at my procrastination more as a puzzle to be solved than something that's given", and "The sequence suggest that if I do procrastinate, then there's likely a reason why I'm procrastinating so applying the sequence to the problem was about looking for that reason", this is exactly the kind of thing I'm going for.
But it's not just that. When I hurt my foot and needed the prodding to try that technique, I had some faulty presuppositions that kept me from doing that stuff by default, which is why I needed the prodding and even the "technique". By the time I helped the kid in the fire poker situation, I had some insights which deflated some of those presuppositions, but I still had no idea how to apply any of the insights I'd learned to help this kid. Yet this lack of understanding of how to apply the insights did not stop me from behaving in accordance with my new perspective, and this new perspective brought about different results. Object-level application of these insights can actually lead meta-level understanding of what is being applied and why it works.
I'm aiming to directly undermine those presuppositions and begin dissolving the connective tissue that gets people stuck in the first place, by showing how things that look like "psychological problems", even in difficult or "impossible" situations, turn out over and over to be disagreements propped up by unseen flinches. Like, "Maybe this problem isn't a given". "Maybe things ain't as they seem. What would that be like?". Trying to cast doubt on the pretense of certainty with which these disconnects are held together, so that when it gets to the footnote of "Maybe listen to yourself?", it doesn't take suspending people up in the air, or having a crush on someone, or whatever, to get through.
The idea isn't just that you turn towards existing problems as puzzles; it's also that next time there's something that would have gone over threshold, the idea that there are things you "can't get yourself to do" feels less credible and less enticing, and is less likely to ensnare you. So next time it comes out as "Ugh. I hate working on patents", that naturally evokes "What's so bad about working on patents?", and applies the same active ingredient of "turn towards the objection" without ever needing to understand how to apply these insights to akrasia -- because nothing will stick around long enough to earn the diagnosis. Noticing what's happening differently is important too, because that can help us be intentional about the direction we choose to move, but it doesn't have to lead application.
I'm not sure how to give more emphasis to things like "Actually think through whether the objections your mind comes up with might have merit after all" without detracting from the emphasis on "These things which we're so sure are intractable actually melt away when we aim true". As for my friend, if I were to try to convey the former before the latter had sunk in, and without suspending her above concrete, she'd have concluded "Tried that, didn't work", and left with nothing more than immunization against the solution. If things aren't going to come across 100% clearly, I'd rather people like her leave correct in thinking "Okay, but I don't know how to put this to use" than incorrect in thinking "I do". Because the former at least leaves room for the desire to ask for directions.
Separately from how well it's working out, does that help make more sense of the choices I've made in presentation?
How would you do it, from the writer's side? What would you like to see, and what would make you more likely to put things into practice, from the reader's side? I've tried to write in the way that I would have liked as a reader, but that doesn't necessarily match well to the actual readers.
That looks consistent with a rifle to me, though there are really only a few moments, as he's transitioning from the roof to the ground, where it's easy to see he has something long in there.
The more interesting mismatch to me is with the terminal performance of that bullet. The lack of an exit is definitely not consistent with a .30-06 at 140 yd striking his neck from that angle. I think I know how to explain it, but I'm curious if anyone else has tried to figure out how that could happen.
One thing I find helpful is to outsource this to my mental model of other people, or to actual other people. If you come at them with "This is definitely true", what kind of objections do they come up with? Not just explicit objections that they voice, but also implicit objections that they don't know how to articulate. Once you've explored that space and know that all roads lead to them being fully on board -- again, not just in explicit claims but in revealed belief as well -- then you know that at least they can't come up with a reason you might be wrong.
It's still only as good as your other people, but if no one you know can find fault in your reasoning that's not a bad start.
This point seems very valuable and a bit hidden in the footnote. I think while the example in this post is good at illustration, it's not good at getting people to actually apply the concept.
Thanks for the feedback. After mulling it over for a while, I think I agree. Though my thoughts on whether that's appropriate are more complicated, and I'm not 100% sure I understand what you mean exactly.
Can you expand more on where you're coming from, and what you think it'd look like to be better at the latter (either with or without sacrificing the former)?
Also, is your akrasia progress an example that came from reading this post, or something you picked up elsewhere which happens to mirror this post? If the former, what made it click, exactly?
A contemporary example of inadequate ethical heuristics: Public discussion of group differences
I think the word "public" is doing a lot of heavy lifting here. I can think sanely about group differences on my own, and it would be easy to have that conversation with you. I'd also expect to be able to handle it with individuals who are strongly attached to one heuristic or the other, and perhaps even in small groups. I wouldn't tweet about it. Nor would I strike up that conversation in a crowded room of strangers.
The problem isn't that I can't think sanely about such topics, but that they can't -- and, descriptively speaking, "they" and "I" is a better description than "we" in those cases. At smaller scales, thinking as "we" is easier. If I say something that my close friend would have disagreed with, they know me well enough to appropriately weight the fact that I'm saying it, and that immediately changes the joint perception as "we" in ways that tweeting does not. And when the friend doesn't buy it, the amount of "Hm, maybe I'm wrong" is very manageable, so engaging with the kind of humility that makes it "shared exploration towards truth" rather than "an attempt to manipulate" is easy. Even "bold" statements like "You're doing it all wrong [and I know this because there's nothing you've thought of that I haven't considered which could justify a change of mind]" are fairly achievable in this context, because you do know your friend fairly well, and they know you know them fairly well, and so on.
Try to step up in scale, though, and it gets tougher. Either you have to back off a bit, because maybe the larger group knows things you don't, or you have to know more things, including how more distant strangers (incorrectly) model things. As the group you're trying to move becomes larger and more powerful, the pushback you invite becomes louder and more meaningful. Have you found the courage to pick a fight that big, and still humble yourself appropriately, if necessary? Because if not, then you're setting yourself up to fold prematurely, before standing strong enough to evoke the kind of evidence that would genuinely change your mind.
An analogy that comes to mind is "injection locking", as demonstrated by groups of metronomes syncing up. Couple yourself too tightly to a large coordinated mass, and you're likely to find your heart compelled to the same wavelength, even as you recognize that it's wrong (whoops, there goes your sanity). Decouple too much, and even if you're not missing anything from the larger group, you're not helping the group either. The trick is to regulate your coupling such that you can both influence and be influenced in ways that track truth, without losing the genuinely valuable wavelengths you've entrained to.
And if you try to "cheat" and preserve your frequency by pushing on the group without opening yourself to pushback, that's a good definition of manipulation, and when the group notices, it will backfire.
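If it helps to make the metronome picture concrete, here's a toy Kuramoto-style simulation of the coupling trade-off. To be clear, this is my own illustrative sketch: the model, all the numbers, and the effective_frequency helper are made up for the example, not taken from any of the metronome demos.

```python
import numpy as np

# Toy Kuramoto model: n oscillators, each with a natural frequency w[i],
# pulled toward the rest of the group with coupling strength K.
rng = np.random.default_rng(0)
n = 50
w = rng.normal(1.0, 0.05, n)  # the crowd's natural frequencies
w[0] = 1.3                    # "you", tuned to a noticeably different wavelength

def effective_frequency(K, steps=20000, dt=0.01):
    """Average frequency of oscillator 0 after running the dynamics."""
    theta = rng.uniform(0, 2 * np.pi, n)
    start = theta[0]
    for _ in range(steps):
        # dtheta_i/dt = w_i + K * mean_j sin(theta_j - theta_i)
        pull = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
        theta = theta + (w + K * pull) * dt
    return (theta[0] - start) / (steps * dt)

for K in (0.0, 0.2, 2.0):
    print(f"K={K}: your effective frequency is about {effective_frequency(K):.2f}")
# K=0.0: you keep your own frequency (~1.3) but exert no influence.
# K=2.0: you get entrained to the crowd's frequency (~1.0).
# In between, influence runs both ways without full capture.
```

The failure modes on either end are exactly the ones above: overcouple and you entrain to the crowd even when it's wrong; decouple entirely and you keep your signal but move no one.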
I think it's important that "we" think carefully about what group we can really "we" with, without losing lock on reality or updating in ways that shut out information rather than incorporating it. And now that I think of it, the problem of how to scale up seems to be missing an ethical design pattern itself. There's not a lot of good guidance on how quickly to try to integrate with larger groups of metronomes.
Jordan Peterson took a crack at it with "Clean your room"/"Set your house in perfect order before you criticize the world", but that's more of a counter-heuristic than a bridge. And it's overly moralistic and unachievable. In totally unrelated news, "Jordan Peterson" has become a scissor statement.
In short form, I'd probably phrase it like "Be careful to match your ambition with humility and courage, and scale only as fast as you dare".
and yet there’s hordes red in the face arguing the definition of “woman” not realizing semantics are used to disguise contested premises.
Notice how a straightforward request for a definition of the term in question is now a clear tribal indicator? The body language on both sides when that question is asked shows that no one is missing the implication that people are trying to disguise contested premises.
Hordes argue until they're red in the face because everyone knows that they're fighting a battle over smuggled premises. Compare to the lack of interest in how to categorize intersex people (at least, prior to this trans stuff becoming a big deal).
When it comes to commenting and deciding whether to intentionally try to preempt an obligation to respond, this "value investing" approach is to just explain your perspective without trying to thumb the scale in either direction. Rather than leaning on "And if you don't respond, it's because you know I'm right and can't admit it" or "It couldn't possibly be that, and surely your post is correct despite my criticisms", you can just make your comment and trust the best estimate of the truth to win once things settle out. I think some of what looks like a disingenuous "What? Just don't respond?" comes from this.
For example, if Alice is one of those you describe as "Innocent, but naive", she might comment on Bob's post without thinking much about what it might imply if Bob doesn't reply: "We'll get there when/if we get there". Because whether her comment is upvoted or downvoted changes the answer. Whether there are other comments along the same lines or in response to hers matters, as does what stance those comments take. Alice just trusts the audience and herself to make reasonable inferences once the cards are on the table.
If Bob doesn't feel like "I don't have to respond, the audience will see that my point stands", then that points to Bob not having finished crossing that inferential distance and justifying his stance in the eyes of his audience. So long as Alice's object-level counterargument is sincere, avoiding making it to spare Bob the effort of explaining things isn't a good thing -- because then people still don't get it, and if they turn out to be correct it's only because they're lucky that the holes they don't see don't turn out to matter. But Alice can't know whether they will or not until she sees Bob's response. Unless she happens to have special knowledge that the audience is likely to underestimate Bob's post relative to her comment, urging the audience to refrain from updating won't actually point towards truth -- because she doesn't actually know anything they don't about this.
I do actually agree that these responses are often disingenuous, because of a particular social dynamic where one can subtly and plausibly deniably posture as being above the author and win points when their arguments pan out, while being simultaneously shielded from the consequences when they turn out to be wrong (as an author, how comfortable do you feel calling people out for such posturing? How much do you trust the audience to do it for you?). In such cultures, the norms incentivize this sort of posturing so long as it can be kept below the audience's ability to recognize it or feel confident in diagnosing it. And if the audience doesn't see and grok this dynamic, then they're going to update on the information conveyed in the "confident" posture. And in such cultures the general audience doesn't grok this dynamic, or else they wouldn't enable it to persist.
Authors can sense this, and it's disappointingly common for both the commenter and the author to know that the commenter is taking advantage of this hole in the collective epistemology, while the commenter won't admit it. Less common on LW than elsewhere, but not absent.
At the end of the day, there are both situations where the author rationally and correctly sees "What? Just don't respond!" as disingenuous or naive, and situations where it's just a fact of life that if you want your audience to understand you, you're going to have to respond to comments that convincingly-but-incorrectly argue against your post[1]. It's just a question of whether the commenter putting pressure on the author to respond is (intentionally or unintentionally) relying on being shielded from the consequences of coming at the author overconfidently, or whether they're making the comment while holding themselves accountable.
(To be clear, I don't see this as a criticism that requires a response, as it doesn't negate the main thesis. I expect the audience to recognize this, though, so I wouldn't ordinarily make a point of saying it.)
Unless you happen to nail it on the first go, I guess, but doing that consistently is even harder.
Actually, no. I wouldn't suggest you should do any of that. What I'm saying is purely descriptive.
This may sound like a nit, but I promise this is central to my point.
I'd be surprised.
Not that I'd expect to disapprove; I just don't really think it's my place to do either. I tend to approach such things from a perspective of "Are you getting the results you want? If so, great. If not, let's examine why".
The fact that you're making this post suggests "not". I could reassure you that I don't think you did terribly, and I don't, but at the end of the day what's my hypothetical approval worth when it won't change the results?
I get that this might sound crazy from where you stand, but I don't actually see skepticism as a problem. I wouldn't try to route around it, nor would I try to assure anyone of anything.
I don't have to explore my mom's perspective or reassure her of anything when I say crazy-sounding stuff, because "He gets how this sounds, and has good reasons for his beliefs" is baked in. The reason I said I'd be curious to explore your mom's perspective is the "sounds crazy" objection, and the sense that "I know, right?" won't cut it. If I already understood her perspective well enough to navigate it without hiccup, then I wouldn't need to explore it any more. I'm not going to plow forward if I anticipate that I'm going to be dismissed, so when that happens I know I've erred and need to reorient to the unexpected data. That's where the curiosity comes from.
The question of "How am I not coming off as obviously sane?" is much more important to me than avoiding stretching people's worldviews. Because when I come off as obviously sane, I can get away with a hell of a lot of stretching, almost trivially. And when I don't, trying to route around that and convince people by "strategically withholding the beliefs I have which I don't see as believable" strikes me as fighting the current. Or, to switch metaphors, it's like fretting over the excess weight of your toothbrush because lighter cargo is always easier, before fully updating on the fact that there are pickup trucks available and nothing needs to be backpacked in.
Projection onto "shoulds" is always a lossy process and I hesitate to do it at all, but if I were to do a little of it, to make things more concretely actionable at the risk of incurring projection errors, it'd come out something like...
Things like "Acknowledge that I sound crazy when I sound crazy" and "Explore my mom's perspective when I realize I don't understand her perspective well enough" don't need to be micromanaged, as they come naturally when we attend to the legitimacy of objections and the insufficiency of our own understanding -- and I have no doubt that you do them already in the situations that you recognize as calling for them. That's why I wouldn't "should" at that level.