jimmy

jimmy20

Just as explicit games have rules, normal conversation has all kinds of implicit expectations.

If someone asks me a question, I should answer.

 

No rules = no rule saying that you have to answer.

In fact, if someone says that they are curious about my reaction to something, it’s totally fine for me to just say “okay” and then change the topic to something else that feels more interesting to me.

That said, it is also okay for the other to get annoyed by that and say it, which they might or might not.

So then is circling just the voicing of the ever-present fact that you're free to violate social expectations if you're willing to annoy people? 

I understand and agree with the stuff about "when you don't take social expectations as binding that's simultaneously freeing and difficult", but that's already the choice you have. If circling doesn't include any rules against trying to enforce social expectations in the usual way, then it seems like circling can't change anything. Is it just the effects of making this fact common knowledge?

jimmy60

Here the two definitions of rationality diverge: believing the truth is now at odds with doing what works. It will obviously work better to believe what your friends and neighbors believe, so you won't be in arguments with them and they'll support you more when you need it.

 

This is only true if you can't figure out how to handle disagreements.

It will often be better to have wrong beliefs if it keeps you from acting on the even wronger belief that you must argue with everyone who disagrees. It's better yet to believe the truth on both fronts, and simply prioritize getting along when it is more important to get along.

 

If we had infinite cognitive capacity, we could just believe the truth while claiming to believe whatever works. And we could keep track of all of the evidence instead of picking and choosing which to attend to.

 

It's more fundamental than that.  The way you pick up a glass of water is by predicting that you will pick up a glass of water, and acting so as to minimize that prediction error. Motivated cognition is how we make things true, and we can't get rid of it except by ceasing to act on the environment -- and therefore ceasing to exist.

Motivated cognition causes no epistemic problem so long as we can realize our predictions. The tricky part comes when we struggle to fit the world to our beliefs. In these cases, there's an apparent tension between "believing the truth" and "working towards what we want". This is where all that sports stuff of "you have to believe you can win!" comes from, and the tendency to lose motivation once we realize we're not going to succeed.

If we try to predict that we will win the contest despite being down 6-0 and clearly less competent, we will either have to engage in willful delusion, pretending we're not less competent or distorting other facts (which makes it harder to navigate reality, because we're using a false map and can't act so as to minimize the consequences of our flaws), or else we will fail to predict success altogether and be unable to even try.

If instead, we don't predict anything about whether we will win or lose, and instead predict that we will play to the absolute best of our abilities, then we can find out whether we win or lose, and give ourselves room to be pleasantly surprised.

The solution isn't to "believe the truth" because the truth has not been set yet. The solution is to pay attention to our anticipated prediction errors, and shift to finer grain modeling when the expected error justifies the cost of thinking harder.
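That last decision rule can be sketched as a toy calculation. This is my own formalization, not anything from the post, and the function name and numbers are made up purely for illustration:

```python
# Toy sketch of "shift to finer-grain modeling when the anticipated prediction
# error justifies the cost of thinking harder". Names and numbers are illustrative.

def should_refine(expected_error_coarse: float,
                  expected_error_fine: float,
                  cost_of_refining: float) -> bool:
    """Refine the model iff the expected error reduction outweighs the thinking cost."""
    return (expected_error_coarse - expected_error_fine) > cost_of_refining

print(should_refine(10.0, 2.0, 5.0))  # True: cutting 8 units of error is worth a cost of 5
print(should_refine(10.0, 9.0, 5.0))  # False: cutting 1 unit is not
```

The point of the sketch is just that neither "always model finely" nor "never bother" is the rule; the anticipated error and the cost both enter the decision.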

The only remedy I know of is to cultivate enjoying being wrong. This involves giving up a good bit of one's self-concept as a highly intelligent individual. This gets easier if you remember that everyone else is also doing their thinking with a monkey brain that can barely chin itself on rationality.

If you stop predicting "I am a highly intelligent individual, so I'm not wrong!", then you get to find out if you're a highly intelligent individual, as well as all of the things that may provide evidence in that direction (i.e. being wrong about things). This much is a subset of the solution I offer.

The next part is a bit trickier because of the question of what "cultivate enjoying being wrong" means, and how exactly you go about making sure you enjoy a fundamentally bad and unpleasant thing (not saying this is impossible, my two little girls are excited to get their flu shots today).

One way to attempt this is to predict "I am the kind of person who enjoys being wrong, because that means I get to learn [which puts me above the monkeys that can't even do this]", which is an improvement. If you do that, then you get to learn more things you're wrong about.... except when you're wrong about how much you enjoy being wrong -- which is certainly going to become a thing, when it matters to you most.

On top of that, the fact that it feels like "giving up" something and that it gets easier when you remember the grading curve suggests more vulnerabilities to motivated thinking, because there's still a potential truth being avoided ("I'm dumb on the scale that matters") and because switching to a model which yields strictly better results feels like losing something.

jimmy256

So far as I can tell, the common line that bear spray is more effective than firearms is based on an atrociously bad reading of the (limited) science, one disavowed by the author of the studies. In short, successfully spraying a bear is more effective at driving off curious bears than merely having a firearm is at stopping charging bears, but when you compare apples to apples, firearms are much more effective.
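The statistical mistake is worth making concrete. Here is a minimal sketch with entirely made-up numbers, chosen only to illustrate the apples-to-oranges problem and not taken from any study: if spray incidents are mostly curious bears (easy cases) and firearm incidents are mostly charges (hard cases), the pooled success rates can favor spray even when firearms do better in both scenarios.

```python
# Hypothetical counts, made up purely to illustrate the comparison problem;
# not from any actual bear study.

incidents = {
    # (method, scenario): (successes, attempts)
    ("spray",   "curious"):  (90, 100),
    ("spray",   "charging"): (4, 10),
    ("firearm", "curious"):  (10, 10),
    ("firearm", "charging"): (60, 100),
}

def success_rate(method, scenarios):
    """Pooled success rate for a method over the given scenarios."""
    wins = sum(incidents[(method, s)][0] for s in scenarios)
    tries = sum(incidents[(method, s)][1] for s in scenarios)
    return wins / tries

# Apples to apples, firearms win both comparisons...
assert success_rate("firearm", ["curious"]) > success_rate("spray", ["curious"])    # 1.00 > 0.90
assert success_rate("firearm", ["charging"]) > success_rate("spray", ["charging"])  # 0.60 > 0.40
# ...but pooled across scenarios, spray looks better overall:
print(round(success_rate("spray", ["curious", "charging"]), 3))    # 0.855
print(round(success_rate("firearm", ["curious", "charging"]), 3))  # 0.636
```

This is just Simpson's paradox: pooling over scenarios with different difficulty and different method usage flips the comparison.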

Here's a pretty good overview: https://www.outsideonline.com/2401248/does-bear-spray-work.  I haven't put a ton of work into verifying what he's claiming here, but it does match with the other data I've seen and I haven't seen anyone be nearly as careful and reach the opposite conclusion.

jimmy160

I'm the person JenniferRM mentioned. I'm also a physics guy, and got into studying/practicing hypnosis around 2010/2011. I kinda moved on from "hypnosis" and drifted up the abstraction ladder, but I'm still working on similar things and on tying them together.

Anyway, here are my thoughts.

Suppose I really want her to be spinning clockwise in my mind. What might I do?

What worked for me is to focus on the foot alone and ignore the broader context so that I had a "clean slate" without "confirmatory experience" blocking my desired conclusion. When looking at the foot alone I experience it as oscillating rather than rotating (which I guess it technically is), and from there I can "release" it into whichever spin I intend by just kinda imagining that this is what's going on.

On the one hand, shifting intuitive models is surprisingly hard! You can’t necessarily just want to have a particular intuitive model, and voluntarily make that happen.

I actually disagree with this. It certainly seems hard, but the difficulty is largely illusory and pretty much disappears once you stop trying to walk through the wall and notice the front door.

The problem is that "wanting to have a particular model" isn't the thing that matters. You can want to have a particular model all you want, and you can even think the model is true all you want, but you're still talking about the statement itself, not about the reality to which the statement refers. Even if you convince someone that their fear is irrational and they'd be better off not being scared, you've still only convinced them that their fear is irrational and they'd be better off not being scared. If you want to convince them that they are safe -- and therefore change their fear response itself -- then you need to convince them that they're safe.

It's the difference between looking at yourself from the third person and judging whether your beliefs are correct or not, vs looking at the world from the first person and seeing what is there. If you want to change the third person perspective, then you can look at which models are desirable and why. If you want to change the first person models themselves, you have to look to the world and see what's there.

This doesn't really work with the spinning dancer because "Which way is the dancer spinning?" doesn't have an answer, but this is an artificial issue which doesn't exist in the real world. You still have to figure out "Is this safe enough to be worth doing?" and that's not always trivial, but the problem of "How do I change this irrational fear?" (for example) is. The answer is "By attending to the question of whether it is actually safe".

I don't deny that there's "skill" to it, but most of the skill IME is a meta skill of knowing what to even aim for rather than aiming well. Once you start attending to "Is it safe enough?", then when the answer is actually obvious the intuitive models just change. I can give a whole bunch of examples of this if you want, where people were stuck unable to change their responses and the problem just melts away with this redirection. Even stuff that you'd think would be resistant to change like physical pain can change essentially instantly. I've had it take as little as a single word.

 

Again we see that the subject is made to feel that his body is out of control, and becomes subject to a high-status person. Some hypnotists sit you down, ask you to stare upwards into their eyes and suggest that your eyelids are wanting to close—which works because looking upwards is tiring, and because staring up into a high-status person’s eyes makes you feel inferior.

This isn't exactly wrong, but I want to push back on the implication that this is the central or most important thing here.

The central thing, IMO, is a willingness to try on another person's worldview even though it clashes with your own. It doesn't require "inferiority"/"high status"/"control" except in the extremely minimal sense that they might know something important that you don't, and that seeing it for yourself might change your behavior. That alone will get you inhibition of all the normal stuff and an automatic (albeit tentative) acceptance of worldview-dissonant perspectives (e.g. name amnesia). It helps if the person has reason to respect and trust you, which is kinda like "high status", but not really, because it can just as easily happen between people on equal social standing in neutral contexts.

Similarly, hypnosis has very little to do with sleep, and eye fatigue/closure is not the important part of eye contact. The important part of eye contact is that it's incredibly communicative. You can convey with eye contact things which you can't convey with words. "I see you". "Seeing you doesn't cause conflict in me". "I see you seeing me see you", and so on, to name a few. All the things you need to communicate to show someone that your perspective is safe and worthy of experiencing are best communicated with the eyes. And perhaps equally important, it is a bid for attention, made by holding your own.

So far, this isn’t a trance; I’m just describing a common social dynamic. Specifically, if I’m not in a hypnotic trance, the sequence of thoughts in the above might look like a three-step process:

[...]
i.e., in my intuitive model, first, the hypnotist exercises his free will with the intention of me standing; second, I (my homunculus) exercise my own free will with the intention of standing; and third, I actually stand. In this conceptualization, it’s my own free will / vitalistic force / wanting (§3.3.4) that causes me to stand. So this is not a trance.

It's important to note that while this self-reflective narrative is indeed different in the way you describe, the underlying truth often is not. In the hypnosis literature this is known as "cold control theory", because it's the same control without the usual Higher Order Thoughts (HOT).

In "common social dynamics" we explain it as "I chose to", but what is actually happening a lot of the time is that the speaker is exercising their free will through your body, and you're not objecting because it matches your narrative. The steps aren't actually in series, and you didn't choose to do it so much as you chose not to decline to do it.

These "higher order thoughts" do change some things, but turn out to be relatively unimportant and the better hypnotists usually don't bother too much with them and instead just address the object level. This is also why you get hypnotists writing books subtitled "there's no such thing as hypnosis" and stuff like that.

The short version is: If I have a tune in my head, then I’m very unlikely to simultaneously recall a memory of a different tune. Likewise, if I’m angry right now, then I’m less likely to recall past memories where I felt happy and forgiving, and vice-versa.

As far as I can tell, there are several different things going on with amnesia. I agree that this is one of them, and I'm not sure if I've seen anyone else notice this, so it's cool to see someone point it out.

The "null hypothesis", though, any time it comes to hypnosis is that it's all just response to suggestion. You "know" that being hypnotized involves amnesia, and you believe you're hypnotized, so you experience what you expect. There's an academic hypnosis researcher I talk to sometimes who doesn't even believe "hypnotic trance" is real in any fundamental sense and thinks that all the signs of trance are the result of suggestion.

I don't believe suggestion is all that's going on, but it really is sufficient for amnesia. The answer to Yudkowsky's old question of "Do we believe everything we're told?" is indeed "Yes" -- if we don't preemptively push it away or actively remember to unbelieve later. Back when I was working this stuff out I did a fun experiment where I'd come up with an excuse to get people to not preemptively reject what I was about to say, then I'd suggest amnesia for the conversation and that they'd laugh when I scratched my nose, and then I'd distract them so that the suggestion could take effect before they had a chance to unbelieve it. The excuse was something like "I know this is ridiculous so I don't expect you to believe it, but hear me out and let me know if you understand" -- which is tricky, because they think the fact that we "agreed" that they won't believe it means they actually aren't believing it when they say "I understand", even though the full statement is "I understand [that I will laugh when you scratch your nose and have no idea why]". They still had awareness that this belief was wrong and would therefore act to stop themselves from acting on it, which is why the unexpected distraction was necessary to get their mind off of it long enough for it to work.
 

jimmy21

If someone's only option for dealing with a hostile telepath is self-deception, and then you come in and punish them for using it, thou art a dick.

Like, do you think it helps the abused mothers I named if you punish them somehow for not acknowledging their partners' abuse? Does it even help the social circle around them?

 

If that's their only option, and the hostility in your telepathy is antisocial, then yes. In some cases, though, people do have other options and their self-deception is offensive, so hostile telepathy is prosocial.

For example, it would probably help those mothers if the men knew to anticipate punishment for not acknowledging their abuse of their partners. I bet at least one of those abusive husbands/boyfriends will give his side of the story that's a bit more favorable than "I'm a bad guy, lol", and that it will start to fall apart when pressed. In those cases, he'll have to choose between admitting wrongdoing or playing dumb, and people often do their best to play really dumb. The self-deception there is a ploy to steal someone else's second box, so fuck that guy.

I think the right response is to ignore the "self" part of the deception and treat it like any other deception. If it's okay to lie to the Nazis about hiding Jews, then it's okay to deceive yourself into believing it too. If we're going to make it against the law to lie under oath, then making it legal so long as they lie to themselves too is only going to increase the antisocial deception.

jimmy40

The reason I trust research in physics in general is that it doesn't end with publishing a paper. It often ends with building machines that depend on that research being right.

We don't just "trust the science" that light is a wave; we use microwave ovens at home.

 

Well said. I'm gonna have to steal that.

 

Therefore, in a world where we all do power poses all the time, and if you forget to do them, you will predictably fail the exam...

...well, actually that could just be a placebo effect.

Yeah, "Can I fail my exam?" is a bad test, because when the test is "can I fail", it's easy for the theory to be "wrong in the right way". GPS is a good test of GR because you just can't do it without a better understanding of spacetime, so it has to at least get something right even if it's not the full picture. When you actually use the resulting technology in your day-to-day life and get results you couldn't have gotten before, then it almost doesn't matter what the scientific literature says, because "I would feel sorry for the good Lord. The theory is correct."

There are psychological equivalents of this, which rest on doing things that are simply beyond the abilities of people who lack this understanding. The "NLP fast phobia cure" is a perfect example of this, and I can provide citations if anyone is interested. I really get a kick out of the predictable arguments between those who "trust the science" but don't understand it, and those who actually do it on a regular basis.

(Something like seeing a black cat on your way to the exam, freaking out about it, and failing to pay full attention to the exam.) Damn!

This reminds me of an amusing anecdote.

I had a weird experience once where I got my ankle sprained pretty bad and found myself simultaneously indignantly deciding that my ankle wasn't going to swell and also thinking I was crazy for feeling like swelling was a thing I could control -- and it didn't swell. I told my friend about this experience, and while she was skeptical and thought it sounded crazy, she tried it anyway and her next several injuries didn't swell.

Eventually she casually mentioned to someone "Nah, my broken thumb isn't going to swell because I decided not to", and the person she was talking to responded as if she had said something else because his brain just couldn't register what she actually said as a real possibility. She then got all self conscious about it and was kinda unintentionally gaslighted into feeling like she was crazy for thinking she could do that, and her thumb swelled up. 

I had to call her and remind her "No, you don't give up and expect it to swell because it 'sounds crazy', you intend for it to not swell anyway and find out whether it is something you can control or not". The swelling went back down most of the way after that, though not to the same degree as in the previous cases where the injury never swelled in the first place.

jimmy115

Can you come up with a better way of doing Psychology research?

 

Yes. More emphasis on concrete useful results, less emphasis on trying to find simple correlations in complex situations.

For example, "Do power poses work?". They did studies like this one where they tell people to hold a pose for five minutes while preparing for a fake job interview, and then found that the pretend employers pretended to hire them more often in the "power pose" condition. Even assuming there's a real effect where those students from that university actually impress those judges more when they pose powerfully ahead of time... does that really imply that power posing will help other people get real jobs and keep them past the first day? 

That's like studying "Are car brakes really necessary?" by setting up a short track and seeing if the people who run the "red light" progress towards their destination quicker. Contrast that with studying the cars and driving behaviors that win races, coming up with your theories, and testing them by trying to actually win races. You'll find out very quickly if your "brakes aren't needed" hypothesis is a scientific breakthrough or foolishly naive.

Instead of studying "Does CBT work?", study the results of individual therapists, see if you can figure out what the more successful ones are doing differently than the less successful ones, and see if you can use what you learn to increase the effectiveness of your own therapy or the therapy of your students. If the answer turns out to be "The successful therapists all power pose pre-session, then perform textbook CBT" and that allows you to make better therapists, great. If it's something else, then you get to focus on the things that actually show up in the data.

The results should speak for themselves. If they don't, and you aren't keeping in very close contact with real world results, then it's super easy to go astray with internal feedback loops because the loop that matters isn't closed.

jimmy96

Claim: memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers. [...] Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.

 

This assumes the median researchers can't recognize who the competent researchers are, or otherwise don't look to them as thought leaders.

I'm not arguing that this isn't often the case, just that it isn't always the case. In engineering, if you're more competent than everyone else, you can make cooler shit. If you're a median engineer trying to figure out which memes to take on and spread, you're going to be drawn to the work of the more competent engineers because it is visibly and obviously better.

In fields where distinguishing bad research from good research requires knowing how to do good research, rather than asking "does it fly or does it crash?", the problem you describe is much harder to avoid. I argue that the difference between the fields which replicate and those which don't is as much about the legibility of the end product as it is about the quality of the median researcher.

jimmy60

There's no norm saying you can't be ignorant of stats and read, or even post about things not requiring an understanding of stats, but there's still a critical mass of people who understand the topic well enough to enforce norms against actively contributing with that illiteracy. (E.g. how do you expect it to go over if someone makes a post claiming that p=0.05 means there's a 95% chance that the hypothesis is true?)
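For what it's worth, the arithmetic behind why that claim is wrong is easy to sketch. With hypothetical numbers (a 10% base rate of true hypotheses, 80% power; neither figure from any real field), the fraction of "significant" results that are actually true is nowhere near 95%:

```python
# Illustration with hypothetical numbers: base rate of true hypotheses 10%,
# significance threshold alpha = 0.05, statistical power 80%.

def fraction_true_given_significant(base_rate, alpha, power):
    """P(hypothesis true | p < alpha), via Bayes' rule on expected counts."""
    true_positives = base_rate * power          # true hypotheses that reach significance
    false_positives = (1 - base_rate) * alpha   # nulls that cross the threshold anyway
    return true_positives / (true_positives + false_positives)

frac = fraction_true_given_significant(base_rate=0.10, alpha=0.05, power=0.8)
print(f"{frac:.0%}")  # 64% -- nowhere near the naive 95%
```

The p-value fixes only the false-positive rate among nulls; the posterior probability also depends on the base rate and power, which is exactly what the naive reading ignores.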

Taking it a step further, I'd say my household "has norms which basically require everyone to speak English", but that doesn't mean the little one is quite there yet or that we're gonna boot her for not already meeting the bar. It just means that she has to work hard to learn how to talk if she wants to be part of what's going on.

Lesswrong feels like that to me, in that I would feel comfortable posting about things which require statistical literacy to understand, knowing that engagement which fails to meet that bar will be downvoted, rather than my post getting downvoted for expecting to find a statistically literate audience here.

jimmy20

I think this is correct as a conditional statement, but I don't think one can deduce the unconditional implication that attempting to price some externalities in domains where many externalities are difficult to price is generally bad.

 

It's not "attempting to price some externalities where many are difficult to price is generally bad"; it's "attempting to price some externalities when the difficult-to-price externalities are on the other side is bad". Sometimes the difficulty of pricing them means it's hard to know which side they primarily lie on, but not necessarily.

The direction of legible/illegible externalities might be uncorrelated on average, but that doesn't mean that ignoring the bigger piece of the pie isn't costly. If I offer "I'll pay you twenty dollars, and then make up some rumors about you which may or may not be true and may greatly help or greatly harm your social standing", you don't think "Well, the difficult part to price is a wash, but twenty dollars is twenty dollars."

you can just directly pay the person who stops the shooting,

You still need a body.

Sure, you can give people like Elisjsha Dicken a bunch of money, but that's because he actually blasted someone. If we want to pay him $1M per life he saved, though, how much do we pay him? We can't simply go to the morgue and count how many people aren't there. We have to start making assumptions, modeling the system, and paying out based on our best guesses of what might have happened in what we think to be the relevant hypothetical. Which could totally work here, to be clear, but it's still a potentially imperfect attempt to price the illegible, and it's not a coincidence that this was left out of the initial analysis I'm responding to.

But what about the guy who stopped a shooting before it began, simply by walking around looking like the kind of guy who would stop a spree killer before he accomplished much? What about the good role models in the potential shooter's life who led him onto the right track and stopped a shooting before it was ever planned? This could be ten times as important and you wouldn't even know without a lot of very careful analysis. And even then you could be mistaken, and good luck creating enough of a consensus on your program to pay out what you believe to be the appropriate amount to the right people, who have no concrete evidence to stand on. It's just not gonna work.

I don't agree that most of the benefits of AI are likely to be illegible. I expect plenty of them to take the form of new consumer products that were not available before, for example.

Sure, there'll be a lot of new consumer products and other legible stuff, but how are you estimating the amount of illegible stuff and determining it to be smaller? That's the stuff that by definition is harder to recognize, so you can't just say "all of the stuff I recognize is legible, therefore legible >> illegible".

For example, what's the probability that AI changes the outcome of future elections and political trajectory, is it a good or bad change, and what is the dollar value of that compared to the dollar value of ChatGPT?
