jimmy

jimmy20

I'm the person JenniferRM mentioned. I'm also a physics guy, and got into studying/practicing hypnosis in ~2010/2011. I kinda moved on from "hypnosis" and drifted up the abstraction ladder, but still working on similar things and working on tying them together.

Anyway, here are my thoughts.

Suppose I really want her to be spinning clockwise in my mind. What might I do?

What worked for me is to focus on the foot alone and ignore the broader context so that I had a "clean slate" without "confirmatory experience" blocking my desired conclusion. When looking at the foot alone I experience it as oscillating rather than rotating (which I guess it technically is), and from there I can "release" it into whichever spin I intend by just kinda imagining that this is what's going on.

On the one hand, shifting intuitive models is surprisingly hard! You can’t necessarily just want to have a particular intuitive model, and voluntarily make that happen.

I actually disagree with this. It certainly seems hard, but the difficulty is largely illusory and pretty much disappears once you stop trying to walk through the wall and notice the front door.

The problem is that "wanting to have a particular model" isn't the thing that matters. You can want to have a particular model all you want, and you can even think the model is true all you want, but you're still talking about the statement itself not about the reality to which the statement refers. Even if you convince someone that their fear is irrational and they'd be better off not being scared, you've still only convinced them that their fear is irrational and they'd be better off not being scared. If you want to convince them that they are safe -- and therefore change their fear response itself -- then you need to convince them that they're safe. It's the difference between looking at yourself from the third person and judging whether your beliefs are correct or not, vs looking at the world from the first person and seeing what is there. If you want to change the third person perspective, then you can look at which models are desirable and why. If you want to change the first person models themselves, you have to look to the world and see what's there.

This doesn't really work with the spinning dancer because "Which way is the dancer spinning?" doesn't have an answer, but this is an artificial issue which doesn't exist in the real world. You still have to figure out "Is this safe enough to be worth doing?" and that's not always trivial, but the problem of "How do I change this irrational fear?" (for example) is. The answer is "By attending to the question of whether it is actually safe".

I don't deny that there's "skill" to it, but most of the skill IME is a meta skill of knowing what to even aim for rather than aiming well. Once you start attending to "Is it safe enough?", then when the answer is actually obvious the intuitive models just change. I can give a whole bunch of examples of this if you want, where people were stuck unable to change their responses and the problem just melts away with this redirection. Even stuff that you'd think would be resistant to change like physical pain can change essentially instantly. I've had it take as little as a single word.

 

Again we see that the subject is made to feel that his body is out of control, and becomes subject to a high-status person. Some hypnotists sit you down, ask you to stare upwards into their eyes and suggest that your eyelids are wanting to close—which works because looking upwards is tiring, and because staring up into a high-status person’s eyes makes you feel inferior.

This isn't exactly wrong, but I want to push back on the implication that this is the central or most important thing here.

The central thing, IMO, is a willingness to try on another person's worldview even though it clashes with your own. It doesn't require "inferiority"/"high status"/"control" except in the extremely minimal sense that they might know something important that you don't, and that seeing it for yourself might change your behavior. That alone will get you inhibition of all the normal stuff and an automatic (albeit tentative) acceptance of worldview-dissonant perspectives (e.g. name amnesia). It helps if the person has reason to respect and trust you, which is kinda like "high status", but not really, because it can just as easily happen with people on equal social standing in neutral contexts.

Similarly, hypnosis has very little to do with sleep and eye fatigue/closure is not the important part of eye contact. The important part of eye contact is that it's incredibly communicative. You can convey with eye contact things which you can't convey with words. "I see you". "Seeing you doesn't cause conflict in me". "I see you seeing me see you" and so on, to name a few. All the things you need to communicate to show someone that your perspective is safe and worthy of experiencing are best communicated with the eyes. And perhaps equally important it is a bid for attention, by holding your own.

So far, this isn’t a trance; I’m just describing a common social dynamic. Specifically, if I’m not in a hypnotic trance, the sequence of thoughts in the above might look like a three-step process:

[...]
i.e., in my intuitive model, first, the hypnotist exercises his free will with the intention of me standing; second, I (my homunculus) exercise my own free will with the intention of standing; and third, I actually stand. In this conceptualization, it’s my own free will / vitalistic force / wanting (§3.3.4) that causes me to stand. So this is not a trance.

It's important to note that while this self reflective narrative is indeed different in the way you describe, the underlying truth often is not. In the hypnosis literature this is known as "cold control theory", because it's the same control without the usual Higher Order Thoughts (HOT).

In "common social dynamics" we explain it as "I chose to", but what is actually happening a lot of the time is the speaker is exercising their free will through your body, and you're not objecting because it matches your narrative. The steps aren't actually in series, and you didn't choose to do it so much as you chose to not decline to do it.

These "higher order thoughts" do change some things, but turn out to be relatively unimportant and the better hypnotists usually don't bother too much with them and instead just address the object level. This is also why you get hypnotists writing books subtitled "there's no such thing as hypnosis" and stuff like that.

The short version is: If I have a tune in my head, then I’m very unlikely to simultaneously recall a memory of a different tune. Likewise, if I’m angry right now, then I’m less likely to recall past memories where I felt happy and forgiving, and vice-versa.

As far as I can tell, there are several different things going on with amnesia. I agree that this is one of them, and I'm not sure if I've seen anyone else notice this, so it's cool to see someone point it out.

The "null hypothesis", though, any time it comes to hypnosis is that it's all just response to suggestion. You "know" that being hypnotized involves amnesia, and you believe you're hypnotized, so you experience what you expect. There's an academic hypnosis researcher I talk to sometimes who doesn't even believe "hypnotic trance" is real in any fundamental sense and thinks that all the signs of trance are the result of suggestion.

I don't believe suggestion is all that's going on, but it really is sufficient for amnesia. The answer to Yudkowsky's old question of "Do we believe everything we're told?" is indeed "Yes" -- if we don't preemptively push it away or actively remember to unbelieve later. Back when I was working this stuff out I did a fun experiment where I'd come up with an excuse to get people to not preemptively reject what I was about to say, then I'd suggest amnesia for this conversation and that they'd laugh when I scratch my nose, and then I'd distract them so that the suggestion could take effect before they had a chance to unbelieve it. The excuse was something like "I know this is ridiculous so I don't expect you to believe it, but hear me out and let me know if you understand" -- which is tricky because they think the fact that we "agreed" that they won't believe it means they actually aren't believing it when they say "I understand", even though the full statement is "I understand [that I will laugh when you scratch your nose and have no idea why]". They still had awareness that this belief was wrong and would therefore act to stop themselves from acting on it, which is why the unexpected distraction was necessary in order to get their mind off of it long enough for it to work.
 

jimmy20

If someone's only option for dealing with a hostile telepath is self-deception, and then you come in and punish them for using it, thou art a dick.

Like, do you think it helps the abused mothers I named if you punish them somehow for not acknowledging their partners' abuse? Does it even help the social circle around them?

 

If that's their only option, and the hostility in your telepathy is antisocial, then yes. In some cases though, people do have other options and their self-deception is offensive, so hostile telepathy is pro-social. 

For example, it would probably help those mothers if the men knew to anticipate punishment for not acknowledging their abuse of their partners. I bet at least one of those abusive husbands/boyfriends will give his side of the story that's a bit more favorable than "I'm a bad guy, lol", and that it will start to fall apart when pressed. In those cases, he'll have to choose between admitting wrongdoing or playing dumb, and people often do their best to play really dumb. The self-deception there is a ploy to steal someone else's second box, so fuck that guy.

I think the right response is to ignore the "self" part of the deception and treat it like any other deception. If it's okay to lie to the Nazis about hiding Jews, then it's okay to deceive yourself into believing it too. If we're going to make it against the law to lie under oath, then making it legal so long as they lie to themselves too is only going to increase the antisocial deception.

jimmy40

The reason I trust research in physics in general is that it doesn't end with publishing a paper. It often ends with building machines that depend on that research being right.

We don't just "trust the science" that light is a wave; we use microwave ovens at home.

 

Well said. I'm gonna have to steal that.

 

Therefore, in a world where we all do power poses all the time, and if you forget to do them, you will predictably fail the exam...

...well, actually that could just be a placebo effect.

Yeah, "Can I fail my exam" is a bad test, because when the test is "can I fail" then it's easy for the theory to be "wrong in the right way". GPS is a good test of GR because you just can't do it without a better understanding of spacetime so it has to at least get something right even if it's not the full picture. When you actually use the resulting technology in your day to day life and get results you couldn't have gotten before, then it almost doesn't matter what the scientific literature says, because "I would feel sorry for the good Lord. The theory is correct.". 

There are psychological equivalents of this, which rest on doing things that are simply beyond the abilities of people who lack this understanding. The "NLP fast phobia cure" is a perfect example of this, and I can provide citations if anyone is interested. I really get a kick out of the predictable arguments between those who "trust the science" but don't understand it, and those who actually do it on a regular basis.

(Something like seeing a black cat on your way to exam, freaking out about it, and failing to pay full attention to the exam.) Damn!

This reminds me of an amusing anecdote.

I had a weird experience once where I got my ankle sprained pretty bad and found myself simultaneously indignantly deciding that my ankle wasn't going to swell and also thinking I was crazy for feeling like swelling was a thing I could control -- and it didn't swell. I told my friend about this experience, and while she was skeptical and thought it sounded crazy, she tried it anyway and her next several injuries didn't swell.

Eventually she casually mentioned to someone "Nah, my broken thumb isn't going to swell because I decided not to", and the person she was talking to responded as if she had said something else because his brain just couldn't register what she actually said as a real possibility. She then got all self conscious about it and was kinda unintentionally gaslighted into feeling like she was crazy for thinking she could do that, and her thumb swelled up. 

I had to call her and remind her "No, you don't give up and expect it to swell because it 'sounds crazy', you intend for it to not swell anyway and find out whether it is something you can control or not". The swelling went back down most of the way after that, though not to the same degree as in the previous cases where the injury never swelled in the first place.

jimmy115

Can you come up with a better way of doing Psychology research?

 

Yes. More emphasis on concrete useful results, less emphasis on trying to find simple correlations in complex situations.

For example, "Do power poses work?". They did studies like this one where they tell people to hold a pose for five minutes while preparing for a fake job interview, and then found that the pretend employers pretended to hire them more often in the "power pose" condition. Even assuming there's a real effect where those students from that university actually impress those judges more when they pose powerfully ahead of time... does that really imply that power posing will help other people get real jobs and keep them past the first day? 

That's like studying "Are car brakes really necessary?" by setting up a short track and seeing if the people who run the "red light" progress towards their destination quicker. Contrast that with studying the cars and driving behaviors that win races, coming up with your theories, and testing them by trying to actually win races. You'll find out very quickly if your "brakes aren't needed" hypothesis is a scientific breakthrough or foolishly naive.

Instead of studying "Does CBT work?", study the results of individual therapists, see if you can figure out what the more successful ones are doing differently than the less successful ones, and see if you can use what you learn to increase the effectiveness of your own therapy or the therapy of your students. If the answer turns out to be "The successful therapists all power pose pre-session, then perform textbook CBT" and that allows you to make better therapists, great. If it's something else, then you get to focus on the things that actually show up in the data.

The results should speak for themselves. If they don't, and you aren't keeping in very close contact with real world results, then it's super easy to go astray with internal feedback loops because the loop that matters isn't closed.

jimmy96

Claim: memeticity in a scientific field is mostly determined, not by the most competent researchers in the field, but instead by roughly-median researchers. [...] Sure, the most competent people in the field may recognize the problems, but the median researchers don’t, and in aggregate it’s mostly the median researchers who spread the memes.

 

This assumes the median researchers can't recognize who the competent researchers are, or otherwise don't look to them as thought leaders.

I'm not arguing that this isn't often the case, just that it isn't always the case. In engineering, if you're more competent than everyone else, you can make cooler shit. If you're a median engineer trying to figure out which memes to take on and spread, you're going to be drawn to the work of the more competent engineers because it is visibly and obviously better.

In fields where distinguishing between bad research and good research has to be done by knowing how to do good research, rather than "does it fly or does it crash", then the problem you describe is much more difficult to avoid. I argue that the difference between the fields which replicate and those which don't is as much about the legibility of the end product as it is about the quality of the median researcher.

jimmy60

There's no norm saying you can't be ignorant of stats and read, or even post about things not requiring an understanding of stats, but there's still a critical mass of people who do understand the topic well enough to enforce norms against actively contributing with that illiteracy. (E.g. how do you expect it to go over if someone makes a post claiming that p=0.05 means that there's a 95% chance that the hypothesis is true?)
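(To spell out why that claim fails, here's a minimal sketch with made-up prior and power numbers; nothing in it is specific to any particular study, it just shows that the posterior depends on more than the significance threshold.)

```python
# Minimal sketch (hypothetical numbers): why p < 0.05 does not mean
# "95% chance the hypothesis is true". The posterior depends on the
# prior and on the test's power, not just on alpha.
prior_h1 = 0.10   # assumed prior probability that the effect is real
power    = 0.80   # assumed P(significant result | effect is real)
alpha    = 0.05   # P(significant result | no effect)

p_significant = power * prior_h1 + alpha * (1 - prior_h1)
posterior_h1  = power * prior_h1 / p_significant
print(f"P(effect is real | significant) ~ {posterior_h1:.2f}")  # ~0.64, not 0.95
```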

Taking it a step further, I'd say my household "has norms which basically require everyone to speak English", but that doesn't mean the little one is quite there yet or that we're gonna boot her for not already meeting the bar. It just means that she has to work hard to learn how to talk if she wants to be part of what's going on.

LessWrong feels like that to me, in that I would feel comfortable posting about things which require statistical literacy to understand, knowing that engagement which fails to meet that bar will be downvoted, rather than my post getting downvoted for expecting to find a statistically literate audience here.

jimmy20

I think this is correct as a conditional statement, but I don't think one can deduce the unconditional implication that attempting to price some externalities in domains where many externalities are difficult to price is generally bad.

 

It's not "attempting to price some externalities where many are difficult to price is generally bad", it's "attempting to price some externalities where the difficult to price externalities on the other side is bad". Sometimes the difficulty of pricing them means it's hard to know which side they primarily lie on, but not necessarily.

The direction of legible/illegible externalities might be uncorrelated on average, but that doesn't mean that ignoring the bigger piece of the pie isn't costly. If I offer "I'll pay you twenty dollars, and then make up some rumors about you which may or may not be true and may greatly help or greatly harm your social standing", you don't think "Well, the difficult part to price is a wash, but twenty dollars is twenty dollars".

you can just directly pay the person who stops the shooting,

You still need a body.

Sure, you can give people like Elisjsha Dicken a bunch of money, but that's because he actually blasted someone. If we want to pay him $1M per life he saved though, how much do we pay him? We can't simply go to the morgue and count how many people aren't there. We have to start making assumptions, modeling the system, and paying out based on our best guesses of what might have happened in what we think to be the relevant hypothetical. Which could totally work here, to be clear, but it's still a potentially imperfect attempt to price the illegible, and it's not a coincidence that this was left out of the initial analysis that I'm responding to.

But what about the guy who stopped a shooting before it began, simply by walking around looking like the kind of guy that would stop a spree killer before he accomplished much? What about the good role models in the potential shooter's life that led him onto the right track and stopped a shooting before it was ever planned? This could be ten times as important and you wouldn't even know without a lot of very careful analysis. And even then you could be mistaken, and good luck creating enough of a consensus on your program to pay out what you believe to be the appropriate amount to the right people who have no concrete evidence to stand on. It's just not gonna work.

I don't agree that most of the benefits of AI are likely to be illegible. I expect plenty of them to take the form of new consumer products that were not available before, for example.

Sure, there'll be a lot of new consumer products and other legible stuff, but how are you estimating the amount of illegible stuff and determining it to be smaller? That's the stuff that by definition is going to be harder to recognize, so you can't just say "all of the stuff I recognize is legible, therefore legible >> illegible".

For example, what's the probability that AI changes the outcome of future elections and political trajectory, is it a good or bad change, and what is the dollar value of that compared to the dollar value of ChatGPT?

jimmy40

I think my main point would be that Coase's theorem is great for profitable actions with externalities, but doesn't really work for punishment/elimination of non-monetary-incented actions where the cost is very hard to calculate. 

 

This brings up another important point, which is that a lot of externalities are impossible to calculate, and therefore such approaches end up fixating on the part that seems calculable without accounting for (or even noticing) the incalculable part. If the calculable externalities happen to be opposed to larger incalculable externalities, then you can end up worse off than if you had never tried.

As applied to the gun externality question, you could theoretically offer a huge payday to the gun shop that sold the firearm used to stop a spree shooting in progress, but you still need a body to count before paying out. It's really hard to measure the number of murders which didn't happen because the guns you sold deterred the attacks. And if we accept the pro 2A arguments that the real advantage of an armed populace is that it prevents tyranny, that's even harder to put a real number on.

I think this applies well to AI, because absent a scenario where gray goo rearranges everyone into paperclips (in which case everyone pays with their life anyway), a lot of the benefits and harms are likely to be illegible. If AI chatbots end up swaying the next election, what is the dollar value we need to stick on someone? How do we know if it's even positive or negative, or if it even happened? If we latch onto the one measurable thing, that might not help.

jimmy3621

The frustrating thing about the discussion about the origins is that people seldom show recognition of the priorities here, and all get lost in the weeds.

You can get n layers deep into the details, and if the bottom is at n+1 you're fucked. To give an example I see people talking about with this debate, "The lab was working on doing gain of function to coronaviruses just like this!" sounds pretty damning but "actually the grant was denied, do you think they'd be working on it in secret after they were denied funding?" completely reverses it. Then after the debate, "Actually, labs frequently write grant proposals for work they've already done, and frequently are years behind in publishing" reverses it again. Even if there's an odd number of remaining counters, the debate doesn't demonstrate it. If you're not really really careful about this stuff, it's very easy to get lost and not realize where you've overextended on shaky ground.

Scott talks about how Saar is much more careful about these "out of model" possibilities and feels ripped off because his opponent wasn't, but at least judging from Scott's summary it doesn't appear he really hammered on what the issue is here and how to address it.

Elsewhere in the comments here Saar is criticized for failing to fact check the dead cat thing, and I think that's a good example of the issue here. It's not that any individual thing is too difficult to fact check, it's that when all the evidence is pointing in one direction (so far as you can tell) then you don't really have a reason to fact check every little thing that makes total sense so of course you're likely to not do it. If someone argues that clay bricks weigh less than an ounce, you're going to weigh the first brick you see to prove them wrong, and you're not going to break it open to confirm that it's not secretly filled with something other than clay. And if it turns out it is, that doesn't actually matter because your belief didn't hinge on this particular brick being clay in the first place.

If it turns out that a lot of your predictions turn out to be based on false presuppositions, this might be an issue. If it turns out the trend you based your perspective on just isn't there, then yeah that's a problem. But if that's not actually the evidence that formed your beliefs, and they're just tentative predictions that aren't required by your belief under question, then it means much less. Doubly so if we're at "there exists a seemingly compelling counterargument" and not "we've gotten to the bottom of this, and there are no more seemingly compelling counter-counterarguments".

So Saar didn't check if the grant was actually approved. And Peter didn't check if labs sometimes do the work before writing grant proposals. Or they did, and it didn't come through in the debate. And Saar missed the cat thing. Peter did better on this game of "whack-a-mole" of arguments than Saar did, and more than I expected, but what is it worth? Truth certainly makes this easier, but so does preparation and debate skill, so I'm not really sure how much to update here.


What I want to see, more than "who can paint an excessively detailed story that doesn't really matter and have it stand up to surface level scrutiny better", is people focusing on the actual cruxes underlying their views. Forget the myriad of implications n steps down the road which we don't have the ability to fully map out and verify; what are the first few things we can actually know, and what can we learn from this by itself? If we're talking about a controversial "relationship guru", postpone discussions of whether clips were "taken out of context" and what context might be necessary until we settle whether this person is on their first marriage or fifth. If we're wondering if a suspect is guilty of murder, don't even bother looking into the credibility of the witness until you've settled the question of whether the DNA matches.

If there appears to be a novel coronavirus outbreak right outside a lab studying novel coronaviruses, is that actually the case? Do we even need to look at anything else, and can looking at anything else even change the answer?

To exaggerate the point to highlight the issue, if there were unambiguously a million wet markets that are all equivalent, and one lab, and the outbreak were to happen right between the lab and the nearest wet market, you're done. It doesn't matter how much you think the virus "doesn't look engineered" because you can't get to a million to one that way. Even if you somehow manage to make what you think is a 1000:1 case, a) even if your analysis is sound it still came from the lab, b) either your analysis there or the million to one starting premise is flawed. And if we're looking for a flaw in our analyses, it's going to be a lot easier to find flaws in something relatively concrete like "there are a million wet markets just like this one" than whatever is going into arguing that it "looks natural".
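To make the arithmetic explicit, here's a toy odds calculation using only the hypothetical numbers from the exaggerated scenario above (the million-to-one location premise and a supposed 1000:1 "looks natural" argument); it's a sketch of the Bayesian point, not an estimate of the real-world odds.

```python
# Toy odds calculation for the exaggerated scenario above, using the
# hypothetical numbers from this comment (not real-world estimates):
# a million equivalent wet markets, one lab, and the outbreak lands
# right next to the lab.
location_odds_lab = 1_000_000          # odds (lab : market) from location alone
genomic_likelihood_ratio = 1 / 1_000   # a supposed 1000:1 "looks natural" argument

posterior_odds_lab = location_odds_lab * genomic_likelihood_ratio
print(posterior_odds_lab)  # 1000.0 -- still 1000:1 in favor of the lab
```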

So I really wish they'd sit down and hammer out the most significant and easiest to verify bits first. How many equally risky wet markets are there? How many labs? What is the quantitative strength of the 30,000 foot view "It looks like an outbreak of chocolatey goodness in Hershey Pennsylvania"? What does it actually take to have arguments that contain leaks to this degree, and can we realistically demonstrate that here?
 

jimmy20

The difference between what I strive for (and would advocate) and "epistemic learned helplessness" is that it's not helpless. I do trust myself to figure out the answers to these kinds of things when I need to -- or at least, to be able to come to a perspective that is worth contending with.

The solution I'm pointing at is simply humility. If you pretend that you know things you don't know, you're setting yourself up for failure. If you don't wanna say "I dunno, maybe" and can't say "Definitely not, and here's why" (or "That's irrelevant and here's why" or "Probably not, and here's why I suspect this despite not having dived into the details"), then you were committing arrogance by getting into a "debate" in the first place.

Easier said than done, of course.
