My "c'mon guys" here is not "c'mon the empirical evidence here is overwhelming." It's more like "look, which world do you actually expect to result in you making better decisions faster: the one where you spend >0 days on testing and reflecting on your thinking in areas where there is real feedback, or the one where you just spend all your time on 'object level work' that doesn't really have the ability to tell you you were wrong?".
(and a host of similar questions, with the meta question being "do you really expect the optimal thing here to be zero effort on metacognition practice of some kind?")
I mostly agree in general and I feel ya on the "c'mon guys" thing, yet I don't do my own separate "rationality practice".
For me, it's basically the same reason why I don't spend much time in a weight room anymore; I prefer to keep my strength by doing things that require and use strength. I'm not against weight lifting in principle, and I've done a decent amount of it. It's just that when I have a choice between "exercise muscles for the sake of exercising muscles" and "exercise muscles in the process of doing something else I want to do anyway", the latter is a pure win if the exercise is anywhere near equivalent. Not only is it "two birds with one stone", it also streamlines the process of making sure you're training the right muscles for the uses you actually have, and streamlines the process of maintaining motivation with proof that it is concretely useful.
The option isn't always available, obviously. If your object level work doesn't have good feedback, or you're not strong enough to do your job, then specific training absolutely makes sense. Personally though, I find more than enough opportunities to work on metacognition as applied to actual things I am doing for object level reasons.
The thing that seems more important to me isn't whether you're doing a separate practice for the sake of learning, but whether you're reflecting on your thinking in areas where there's real feedback, and you're noticing that feedback. I do think there's a place for working on artificial problems, but I also think there's an under-recognized place for picking the right real world problems for your current ability level with an expectation of learning to level up. And there's an underappreciated skill in finding feedback on less legible problems.
1) Yes and no, depending on what you mean by "real thing".
The Oxford Handbook of Hypnosis is a giant tome of scientific knowledge on "hypnosis"; none of which suggests that it's not real. Hypnotists really can do seemingly wild shit that most people cannot do. Most hypnotherapists like to say "It's not mind control like Hollywood depicts", but even that is only partially true. The lawyer Michael Fine used hypnosis to sexually assault his clients and give them amnesia for it, and he is in prison now only because he was dumb enough about it that his victims were able to notice that they were missing memory and that things didn't add up. I've talked to victims of more clever hypnotists who haven't gotten caught.
At the same time, there's a reason hypnotists tend to write books with subtitles like "there's no such thing as hypnosis". I'd argue that it's more accurate to say "there's no such thing as not-hypnosis", but neither really conveys an accurate understanding. The bottom line is that hypnosis isn't what it appears to be, because "not hypnosis" isn't what it appears to be, and once you get familiar with how to do hypnosis and see all the gray area between the black and white, the term kinda loses meaning. Competent hypnotists have a strong tendency to drop all the formalisms, and the most competent have a tendency to stop seeing what they do as "hypnosis" -- at least, in my judgment of competence as someone who also doesn't see what I do as "hypnosis".
2) Yes and no. I've gotten some interesting results using "self hypnosis". One that stands out is using self hypnosis to "be comfortable" when I was seasick on a rocking boat one night. It worked, and I got comfortable -- only to feel myself about to vomit anyway. Careful what you wish for.
The hard part isn't "Can hypnosis be used to get my brain to believe X"; it's: What's true? What's worth doing? Are things going well, relative to the relevant expectations? The more you try to bullshit yourself, the more you'll a) have unintended effects if you succeed, or b) foresee this and find it hard to get yourself to do self hypnosis. The more you see clearly what the right answer is, the less you'll need hypnosis in the first place. The real value in learning hypnosis is as a proof of concept that allows you to see when you're BSing yourself so you can stop that.
Due to the counter-intuitiveness and subtleties here, it's hard to give a less cryptic short answer. I'm actually finishing up a ~20 post, 65k word sequence on essentially this exact question. It's about what I've learned about psychology and rationality as a result of picking up hypnosis in 2010. It will give concrete and actionable answers to what you're looking for here, as well as the underlying justification. I have a draft done and am basically waiting for a proofreader to make it through; then I'll start posting.
There are several complications in the example you give, and I'm not sure which are intentional.
Let's start with a simpler example. You somehow end up needing to take a 400 meter shot down a tunnel with an experimental rifle/ammo you've been working on. You know the rifle/ammo inside and out due to your work and there is no wind, but the rifle/ammo combination has very high normal dispersion, and all that is exposed of the terrorist is his head, with the kid hostage right beside him.
In this case, where you center your probability distribution depends on the value of the kid's life. If the terrorist is about to nuke the whole earth, you center it on the bad guy and ignore the kid. If the terrorist will at most kill that one kid if you don't kill him now, then you maximize expected value by biasing your distribution so that hitting the kid requires you to be further down the tail, and the ratio of terrorist hits to hostage hits goes up as the overall chance of a hit goes down. If the kid certainly dies if you miss, also dies if you hit him, and is only spared if you hit the terrorist, then you're back to ignoring the kid and centering the distribution on the terrorist -- even if you're more likely to hit the kid than the terrorist.
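To make the "bias the distribution" point concrete, here's a minimal sketch of that expected-value tradeoff in Python, under assumptions I'm making up purely for illustration (and assuming scipy is available): a one-dimensional aim axis, normal dispersion with a deliberately large standard deviation, 10 cm head widths, and arbitrary payoff numbers. The names SIGMA, TERRORIST, HOSTAGE, p_hit, and best_aim are all mine, not anything from the scenario; the sketch just sweeps candidate aim points and keeps the one with the highest expected value:

```python
# Minimal sketch of the aim-point tradeoff, with made-up numbers.
from scipy.stats import norm

SIGMA = 15.0             # shot dispersion (standard deviation), in cm -- deliberately large
TERRORIST = (0.0, 10.0)  # interval the terrorist's exposed head occupies along the aim axis
HOSTAGE = (-10.0, 0.0)   # interval the kid's head occupies, right next to it

def p_hit(interval, aim):
    """Probability the shot lands inside `interval` when aimed at `aim`."""
    lo, hi = interval
    return norm.cdf(hi, loc=aim, scale=SIGMA) - norm.cdf(lo, loc=aim, scale=SIGMA)

def best_aim(value_hit_terrorist, cost_hit_kid):
    """Aim point (in cm) that maximizes expected value under these made-up payoffs."""
    candidates = [x / 10.0 for x in range(-100, 301)]  # sweep -10 cm .. +30 cm
    return max(candidates,
               key=lambda aim: value_hit_terrorist * p_hit(TERRORIST, aim)
                               - cost_hit_kid * p_hit(HOSTAGE, aim))

# Terrorist about to nuke the earth: the kid barely moves the optimal aim point.
print(best_aim(value_hit_terrorist=1e9, cost_hit_kid=1.0))  # ~5.0, center of his head
# Terrorist can at most kill this one kid: the optimal aim is biased well away from the kid.
print(best_aim(value_hit_terrorist=1.0, cost_hit_kid=1.0))
```

Under these made-up numbers the second case puts the optimal aim point out past the far side of the terrorist's head: you accept a lower chance of hitting him in exchange for a much lower chance of hitting the kid, which is exactly the ratio tradeoff described above.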
In the scenario you describe, you don't actually have the situation so well characterized. You'd be forced to lob bullets at a twenty degree inclination, without being able to use sights or see your target -- among many other large uncertainties. In that case it's not that you have a well known distribution and an unknown result of the next draw. You don't know what the distribution is. You don't know what meta-distribution the distribution is being drawn from.
Statements like "more likely" are all relative to a model which you presuppose has some validity in context. What's the model, and where do you think you're getting the validity to say it? Even if the simulation God paused the game and spoke to you saying "I'll run the experiment a billion times, and we'll see if the kid gets hit more often", you'd have no idea how to set up that experiment because you don't know what you're controlling for.
I'd guess that you're asking about something in between, but I'm not sure which unknowns are intentional.
No, that does not sound like a fair characterization. My claims cover a lot more than "it doesn't always happen" and yours sure don't seem limited to "it doesn't never happen".
Here's the motivating question for this whole essay:
You asked why people who "believe in" avoiding nonmarital sex so frequently engage in and report badly regretting it
and here's part of your conclusion:
At this point the behavior you describe should no longer be perplexing.
You're talking about this as if it needs falsification of preferences to explain, and my stance is that no, this is the default. Any time people have to face anything as complex as sexuality, even if people are doing their best to pro-socially guide people, this is necessarily what's going to happen. Perversions can sneak in too, and I don't deny that they exist, but postulating perversions is absolutely not needed in order to explain the data you're seeking to explain.
To narrow things down a bit, we can return to the original comment:
Sometimes people profess or try to reveal a preference for X, as a response to coercive pressures that are specifically motivated by prior underlying preferences for anti-X. This is what I'm calling preference inversion.
I don't disagree with this.
My intuition is that generally, upon reflection, people would prefer to satisfy their and others' preferences as calculated prior to such influences. I don't know whether there are other sorts of analogous distorting factors nearly all reasonable people would not like to satisfy upon reflection, but in general, I'm using the term "intrinsic preferences" to refer to whatever's left over after all such generally appealing adjustments.
It's this second part I was taking issue with.
Here, you're talking about what generally happens, not what "sometimes" happens, and I don't think "intrinsic preferences" is defined well enough to do what you want it to do here. I don't think it can be, unless you introduce more concepts, because I don't think "external vs intrinsic" can do justice to this multidimensional space no matter how you cleave it.
Part of this is because what counts as "external" cannot be well defined. If daddy yells at me to not drink, that sounds external, and my revealed preferences are likely to revert when he's not looking. But maybe being a reasonable person, upon reflection I'd agree with him. Does that make it "not a preference inversion"? If my boss threatens to fire me if I show up drunk, that sounds external too. But that's not very different than my boss reminding me that he can only afford to hire productive people -- and that's starting to sound like "just reality". Certainly if a doctor tells me that my liver is failing, that sounds like "just reality" and "internal". But it's external to my brain, and maybe if someone offers me an artificial liver I'd revert to my "intrinsic preferences"?
Our preferences necessarily depend on the reality we find ourselves embedded in, and cannot exist in isolation except perhaps in the highest abstraction (e.g. "I prefer to continue existing" or something), so the concept of "intrinsic preferences" for concrete things necessarily falls apart. What doesn't fall apart is the structure of incoherence in our own preferences.
We're constantly trying to shape and reshape the reality that others live in such that their revealed preferences given this reality satisfy our own. Part of this is making laws forbidding theft, how we indoctrinate in church, our hiring and firing decisions, how we inform our friends, etc. Some of these actions are purely cooperative, others are pure defections, and many are somewhere in between. Often we have fairly superficial pressure applied which results in fairly superficial changes in revealed preferences which easily revert, but that superficiality is fundamentally a property of the person containing the preferences, not the person applying the pressures. There is indeed skill in facilitating deeper shifts in preferences to better match reality, and this is indeed a good thing to pursue, but the "intrinsic vs external" binary obscures the interplay between shifting reality, shifting perceptions of reality, and shifting preferences -- and therefore most of what is going on.
To use your example, the positive value of marital intimacy is inherently intertwined with the power of sexuality, the importance of getting sexuality right, and therefore the badness of sexuality done inappropriately. There is all sorts of room for this guidance to be given skillfully or clumsily, purely or corruptly, for it to be received coherently or superficially, in concordance with reality or not, and everywhere in between. Like you've noticed, there isn't always a legible distinction between the conventional conservative Christians who pull this off well and those who do it more poorly.
My own perception is that almost none of our preferences can be cleanly described as "intrinsic" or "externally pressured", or as "valid" or "invalid". There's just differing degrees of coherence and differing degrees of fit to reality. The average case of conventional conservative Christians pushing against non-marital sex, and the average case of the person "believing in" and regretting not living by their "beliefs", is in between the picture Christianity portrays and the one you portray of falsified preferences. Because the ground truth is in between "nonmarital sex is always bad" and "nonmarital sex is always as good as it seems".
Generally, when I interact with people on the topic of sexuality, I see people who don't know what their preferences resolve to with regards to non-marital sex -- and whose genuine preferences would resolve in different ways depending on the culture they're embedded within and the opportunities they have. I could sell either picture, and make it look "intrinsic", if I'm willing to sweep the right things under the rug in order to do so. Most people's belief structures surrounding sex (and most things) are shoddily built. I could argue for their destruction, and destroy them. I could argue for their utility, and preserve them. The optimal solution necessarily involves seeing both the utility and imperfections, both a degree of destruction and of reconstruction.
Like you said, this isn't just theoretical. This is a thing I've actually done when it has come up. I can give examples if it'd help.
Agreed in full
The problem there isn't the Econ-101, it's the fool in the armchair.
You can't just say "I have a simple armchair argument that no one could ever demand sexual favors", because that's not even a valid prediction of Econ-101. Maybe the person does want to provide sexual favors. Maybe they even want to provide sexual favors and then also claim purity and victimhood status to gullible people. That's entirely consistent with Econ-101.
Or maybe they aren't productive enough to earn their wage otherwise, and their job is better conceptualized as half prostitute. That's also entirely consistent with Econ-101.
If we have situations that look like "This person didn't want it" + "this person is productive enough to earn their wage", then if we also have Econ-101 we notice a contradiction. We can't just assume the bottom line that Econ-101 is somehow wrong, without finding an identifiable error, and still be justified in our assumption. Neither can we assume people will necessarily do what's in their best interest and assume "This person wanted it, actually", without finding an identifiable error in the perception that they didn't.
There's an actual puzzle to be solved here, and we can't write the bottom line first and also get to the right answer on anything but chance.
I agree that there is a meaningful difference, but I disagree that they're so cleanly separable that we can say that it is one or the other.
I don't teach my kid that sugar is evil and I give her the chance to learn how much sugar she wants for herself. I try to minimize coercion because it impairs learning, and I want my kid to actually integrate the information so that she can make coherent rather than fractured decisions.
At the same time, I want to protect her from things that are beyond her capability to handle and learn from. We don't want our children to grow up with sexual shame that continues into marriage, but if the kindergarten teacher starts teaching kids about how great sex is and offering to show them, then do you take a stance of "well, I don't want my five year old to think sex is bad..." or do you say "Absolutely not."?
Information sharing and force are both useful tools, and while it's better to lean on the former as much as possible it is important to be able to fall back on the latter. People just don't have a good idea of how to do the former (and are kinda 'sinful' themselves) so they over-rely on the latter.
Using force (including social shame) is a symmetric weapon so it is more easily (even unintentionally) corrupted into serving less pure motivations, but it also serves pure motivations when necessary.
The question of "Does the pressure help people better achieve their other goals, or create persistent internal conflicts?" is important, but messy.
Which people? Which pressure? If I know two people who grew up in Christian households, and one of them grew up in a strict household, married a virgin and is happy and without sexual shame, and another grew up in a less strict household and had premarital sex but felt bad about it, then how do we judge Christianity's "anti-sex" norms here?
I'd say we can notice which are more effective at bringing about good outcomes, and which have more pure intent and a higher information-to-pressure ratio. But we cannot separate them. I know some people who absolutely reject the pressure -- and then come to learn on their own the value it was pointing at -- and other people who are handled delicately with pure information and then shame themselves for not learning to like sweets in the optimal way instantly.
It's kinda a mess.
You're arguing that attempts to decrease candy consumption are coercive rather than informative, and are in some ways counterproductive. I agree with this. You take this to mean it's not a "good faith attempt", but as a general rule people don't know how to do any better than this.
It's true that people can appeal to "sinfully delicious" to sell you their dessert, but why don't broccoli salesmen do the same? Why not toothbrush salesmen? If "Sinful" means "good", actually, and it originates with salesmen, then why isn't everything "sinful"?
The answer is that it didn't originate with salesmen. Dessert salesmen are leaning on the preexisting "Anything that feels this good must be a sin", so the question is where that came from. One obvious explanation is that things that feel that good tend to be pursued a lot, and there are contexts in which those pursuits are less desirable than it may seem.
I do withhold sweets (and television) when I have the intuition that he's asking for them for the wrong reasons, in a confused way, and won't either get what he wants from them or learn efficiently from the experience.
Even you notice that he will ask for sweets for the wrong reasons and that you don't always expect him to learn efficiently from experience. That's where the pressure to coerce your kid into eating less sweets comes from.
You're smarter and wiser than most, and so you're able to teach your kid these things more effectively and with less transmission of neuroses, and that's great. I try at that as well, and have noticed some of the same things (though not all; I'll have to play with the 'appetizer' bit).
I'm not arguing that the things you're pointing at don't exist, just pointing at the fact that people don't know how to do any better. We can flip the sign on this and look at how people handle teaching their kids about getting their shots at the doctor. People want their kids to be okay with it because it's an "anti-sin", in that in reality it is better than it feels. That's why they try to tell kids "It's okay! It just feels like a little pinch!"
And these attempts are equally counterproductive, because as a general rule people don't know how to avoid teaching their own neuroses. I told my two year old that shots are bad and scary and that I was too scared so she needed to go first. She had fun showing me how to be brave, and only cried when she couldn't get a second shot.
The next year, she watched some cartoon made by incompetent but well meaning people that was aimed at showing kids that shots are okay, and relearned a fear of needles. Because all these people know how to teach is their own perspective, and that perspective is "Needles are terrifying but we mustn't admit it because we need to get our shots". So I had to start over.
As a society we notice things. We just suck at teaching them, and even our most good faith attempts are still counterproductively coercive and lacking in actionable information.
Continuing the example with sweets, I estimate my terminal goals to include both "not be ill e.g. with diabetes" and "eat tasty things".
That sounds basically right to me, which is why I put effort into learning (and teaching) to enjoy the right things. I'm pretty proud of the fact that both my little girls like "liver treats".
Technology and other progress has two general directions: a) more power for those who are able to wield it; b) increasing forgiveness, distance to failure. For some reason, I thought that b) was a given at least on average.
I think that's right, but also "more distance to failure" doesn't help so much if you use your newfangled automobile to cover that distance more quickly. It's easier to avoid failure, but also easier to fail. A gun makes it easier to defend yourself, and also requires you to grow up until you can make those calls correctly one hundred percent of the time. With great power comes great responsibility, and all that.
I'll take the car, and the gun, and the society that trusts people with cars and guns and other technologically enabled freedoms. But only because I think we can aspire to such responsibilities, and notice when they're not met. All the enabling with none of the sobering fear of recklessness isn't a combination I'm a fan of.
With respect to the "why do you believe this" question on my previous comment about promiscuity being statistically linked with marital dissatisfaction, I'm not very good at keeping citations on hand so I can't tell you which studies I've seen, but here's what ChatGPT found for me when I asked for studies on the correlation.
https://www.jstor.org/stable/3600089
https://unews.utah.edu/u-researcher-more-sex-partners-before-marriage-doesnt-necessarily-lead-to-divorce/
https://ifstudies.org/blog/testing-common-theories-on-the-relationship-between-premarital-sex-and-marital-stability
https://www.proquest.com/openview/46b66af73b830380aca0e6fbc3b597e3/1
I don't actually lean that hard on the empirical regularity though, because such things are complicated and messy (e.g. the example I gave of a man with a relatively high partner count succeeding because he took an anti-promiscuous stance). The main reason I believe that pills don't remove all the costs of promiscuity is that I can see some of the causal factors at work and have experience actually working with them to help women land happy stable relationships.
"Can do some impressive things, but struggles with basic arithmetic and likes to make stuff up" is such a fitting description of humans that I was quite surprised when it turned out to be true of LLMs too.
Whenever I see someone claim that this means LLMs can't "understand" something, I find it quite amusing that they're almost demonstrating their own point; just not in the way they think they are.