LESSWRONG
Dweomite

Comments
Obligated to Respond
Dweomite · 12h · 119

Thoughts that occurred...

  1. There is a cognitive cost associated with tracking echoes, which increases the more you track
    1. Expectations about how many echoes you track are at least partly a negotiation over how labor should be distributed; e.g. am I responsible for mitigating the emotional damage that I take from your opinions, or are you?
  2. Skills (and related issues) make this cost higher for some people and lower for others
    1. People may have misunderstandings about what costs others are actually paying, or being expected to pay
  3. The ability to predict echoes can be used in a friendly way (e.g. to make the conversation more comfortable for the other person) but can also be used in an unfriendly way (e.g. to manipulate them, such as in the example of asking parents if a friend can stay, in front of that friend)
Obligated to Respond
Dweomite · 13h · 72

This reminds me of Social status part 1/2: negotiations over object-level preferences, particularly because of your comment that Japan might develop a standard of greater subtlety because they can predict each other better.

Among other points in the essay, they have a model of "pushiness" where people can be more direct/forceful in a negotiation (e.g. discussing where to eat) to try to take more control over the outcome, or more subtle/indirect to take less control.

They suggest that if two people are both trying to get more control, they can end up escalating until they're shouting at each other. But they claim it's actually more common for two people to both be trying to get less control: the reputational penalty for being too domineering is often bigger than whatever's at stake in the current negotiation, so people try to be a little more accommodating than necessary, to be "on the safe side", and this results in people spiraling into indirection until they can no longer understand each other.

They suggested that more homogenized cultures can spiral farther into indirection because people understand each other better, while more diverse cultures are forced to stop sooner because they have more misunderstandings, and so e.g. the melting-pot USA ends up being more blunt than Japan.

They also suggested that "ask culture" and "guess culture" can be thought of as different expectations about what point on the blunt/subtle scale is "normal". The same words, spoken in ask culture, could be a bid for a small amount of control, but when spoken in guess culture, could be a bid for a large amount of control.

 

I'm quite glad to be reminded of that essay in this context, since it provides a competing explanation of how ask/guess culture can be thought of as different amounts of a single thing, rather than two fundamentally different things. I'll have to do some thinking about how these two models might complement or clash with each other, and how much I ought to believe each of them where they differ.

When Both People Are Interested, How Often Is Flirtatious Escalation Mutual?
Dweomite · 13d · 40

The paper actually includes a second experiment where they had observers watch a video recording of a conversation and say whether they thought the person on the video was flirting. Results are in Table 4, page 15; copied below:

| Observer | Target | Condition    | Accuracy (n) |
|----------|--------|--------------|--------------|
| Female   | Female | Flirting     | 51% (187)    |
| Female   | Female | Non-flirting | 67% (368)    |
| Female   | Male   | Flirting     | 22% (170)    |
| Female   | Male   | Non-flirting | 64% (385)    |
| Male     | Female | Flirting     | 43% (76)     |
| Male     | Female | Non-flirting | 68% (149)    |
| Male     | Male   | Flirting     | 33% (64)     |
| Male     | Male   | Non-flirting | 62% (158)    |

Among third-party observers, females observing females had the highest accuracy, though their perception of flirting is still only 18 percentage points higher when flirting occurs than when it doesn't.

Third-party observers in all categories had a larger bias towards perceiving flirting than the people who were actually in the conversation. This experimental setup also had a larger percentage of people actually flirting, though, so that bias was reasonably well calibrated to the data they were shown.

Though, again, this study looks shoddy and should be taken with a lot of salt.

When Both People Are Interested, How Often Is Flirtatious Escalation Mutual?
Dweomite · 14d · 40

I'm confused by the study you cited. It seems to say that 14 females self-reported as flirting and that "18% (n = 2)" of their partners correctly believed they were flirting, but 2/14 = 14% and 3/14 = 21%. To get 18% of 14 would mean about 2.5 were right. Maybe someone said "I don't know" and that was counted as half-correct? If so, that wasn't mentioned in the procedure section.

It also says that 11 males self-reported as flirting, and lists accuracy as "36% (n = 5)", but 5/11 would be 45%; an accuracy of 36% corresponds to 4/11.
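The arithmetic in the two paragraphs above can be checked exhaustively in a few lines (the counts 14 and 11 and the reported percentages are taken from the comment itself; everything else is just rounding):

```python
# Cross-check the paper's reported percentages against its own sample sizes.
# Counts from the comment above: 14 flirting females, 11 flirting males.
def pct(correct: int, n: int) -> int:
    """Percentage rounded to the nearest whole number."""
    return round(100 * correct / n)

print(pct(2, 14))  # 14 -- not the reported 18%
print(pct(3, 14))  # 21 -- so no integer count out of 14 rounds to 18%
print(pct(5, 11))  # 45 -- not the reported 36%
print(pct(4, 11))  # 36 -- the reported 36% actually corresponds to n = 4
```

No whole number of correct partners out of 14 rounds to 18%, which is what makes the "about 2.5 were right" reading (or a transcription error) the only options.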

I don't think I trust this paper's numbers.

If we were to take the numbers at face value, though, the paper is effectively saying that female flirting is invisible. 18% correctly believed the girls were flirting when they were, but 17% believed they were flirting even when they weren't, and with only 14 girls flirting, 1% is a rounding error. So this is saying that actual female flirting has zero effect on whether her partner perceives her as flirting.

Underdog bias rules everything around me
Dweomite · 23d · 22

Agree that other players having tools, social connections, and intelligence in general all make it much harder to judge when you have the advantage. But I don't see how this answers the question of "why create underdog bias instead of just increasing the threshold required to attack?"

Strong disagree on the ancient world being zero-sum. A lion eating an antelope harms the antelope far more than it helps the lion. Thog murdering Mog to steal Mog's meal harms Mog far more than it helps Thog. I think very little in nature is zero-sum.

Underdog bias rules everything around me
Dweomite · 23d · 53

Seems weird to posit that evolution performed a hack to undermine an instinct that was, itself, evolved. If getting into conflicts that you think you can win is actually bad, why did that instinct evolve in the first place? And if it's not bad, why did evolution need to undermine it in such a general-purpose way?

I can imagine a story along the lines of "it's good to get into conflicts when you have a large advantage but not when you have a small advantage", but is that really so hard to program directly that it's better to deliberately screw up your model of advantage just so that the rule can be simplified to "attack when you have any advantage"? Accurate assessment seems pretty valuable, and evolution seems to have created behaviors much more complicated than "attack when you have a large advantage".

I agree that humans aren't very good at reasoning about how other players will react and how this should affect their own strategy, but I don't think that explains why they would have evolved one strategy that's not that vs another strategy that's not that.

(Also, I don't think Risk is a very good example of this. It's a zero-sum game, so it's mostly showing relative ability, not absolute ability. The game is also far removed from the ancestral environment and sends you a lot of fake signals: the strategies appropriate to the story the game is telling are mostly not appropriate to the abstract rules the game actually runs on. So it seems unsurprising to me that humans would tend to be bad at predicting the behavior of other humans in this context. The rules are simple, but that's not the kind of simplicity that would make me expect humans-without-relevant-experience to make good predictions about how things will play out.)

Debugging for Mid Coders
Dweomite · 1mo · 173

A combination of the ideas in "binary search through spacetime" and "also look at your data":

If you know a previous time when the code worked, rather than starting your binary search at the halfway point between then and now, it is sometimes useful to begin by going ALL the way back to when it previously worked, and verifying that it does, in fact, work at that point.

This tests a couple of things:

  1. Are you correct about when the code previously worked?
  2. Did your attempt to recreate those conditions successfully recreate ALL of the relevant conditions?
    1. Does the bug depend on some external file or resource that you overlooked and haven't rewound back to the correct time?
    2. Does the bug depend on some detail of your test process that you didn't realize was relevant, and the reason it worked before was actually because you were testing it differently?
    3. Does your process for restoring your project to an earlier point even work?

If the bug still happens after you've restored to the "known working point", then you'll want to figure out why that is before continuing your binary search.
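As an illustrative sketch of the workflow above (not tied to any particular version-control tool; `find_first_bad`, `history`, and `is_bad` are all hypothetical names), the key addition to a plain bisect is the endpoint check at the top:

```python
def find_first_bad(history, is_bad, assumed_good=0):
    """Binary search for the first bad version in `history`.

    Before bisecting, verify that the "known working" endpoint actually
    works -- if it doesn't, your assumptions (wrong date, changed external
    resource, broken restore process) need revisiting before the search
    can tell you anything.
    """
    if is_bad(history[assumed_good]):
        raise ValueError("the assumed-good version already has the bug; "
                         "revisit your assumptions before bisecting")
    lo, hi = assumed_good, len(history) - 1  # hi is the known-bad present
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(history[mid]):
            hi = mid  # bug reproduces at mid; first bad version is at or before mid
        else:
            lo = mid  # still works at mid; first bad version is after mid
    return hi
```

(`git bisect` automates the search itself, but this verification step is still on you: `git bisect good <sha>` trusts your claim that `<sha>` was actually good.)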

I don't always do this step. It depends how confident I am about when it worked, how confident I am in my restore process, and how mysterious the bug seems. Sometimes I skip this step initially, but then go back and do it if diagnosing the bug proves harder than expected.

My Empathy Is Rarely Kind
Dweomite · 1mo · 20

Guess we're done, then.

My Empathy Is Rarely Kind
Dweomite · 1mo · 20

> Are you really unable to anticipate that this is very close to what I would have said, if you had asked me why I didn't respond to those things? The only reason that wouldn't be my exact answer is that I'd first point out that I did respond to those things, by pointing out that your arguments were based on a misunderstanding of my model! This doesn't seem like a hard one to get right, if you were extending half the charity to me that you extend yourself, you know? (should I be angry with you for this, by the way?)

You complain that I failed to anticipate that you would give the same response as me, but then immediately give a diametrically opposed response! I agreed that I didn't respond to the example you highlighted, and said this was because I didn't pick up on your implied argument. You claim that you did respond to the examples I highlighted. The accusations are symmetrical, but the defenses are very much not.

I did notice that the accusations were symmetrical, and because of that I very carefully checked (before posting) whether the excuse I was giving myself could also be extended to you, and I concluded definitively that it couldn't. My examples made direct explicit comparisons between my model and (my model of) your model, and pointed out concrete ways that the output of my model was better; it seems hugely implausible you failed to understand that I was claiming to score Bayes points against your model. Your example did not mention my model at all! (It contrasts two background assumptions, where humans are either always nice or not, and examines how your model, and only your model, interacts with each of those assumptions. I note that "humans are always nice" is not a position that anyone in this thread has ever defended, to my knowledge.)

And yes, I did also consider the meta-level possibility that my attempt to distinguish between what was said explicitly and what wasn't is so biased as to make its results useless. I have a small but non-zero probability for that. But even if that's true, that doesn't seem like a reason to continue the argument; it seems like proof that I'm so hopeless that I should just cut my losses.

I considered including a note in my previous reply explaining that I'd checked if you could use my excuse and found you couldn't, but I was concerned that would feel like rubbing it in, and the fact that you can't use my excuse isn't actually important unless you try to use it, and I guessed that you wouldn't try. (Whether that guess was correct is still a bit unclear to me--you offer an explanation that seems directly contradictory to my excuse, but you also assert that you're saying the same thing as me.)

If you are saying that I should have guessed the exact defense you would give, even if it was different from mine, then I don't see how I was supposed to guess that.

If you are saying that I should have guessed you would offer some defense, even if I didn't know the details, then I considered that moderately likely but I don't know what you think I should have done about it.

If I had guessed that you would offer some defense that I would accept then I could have updated to the position I expected to hold in the future, but I did not guess that you'd have a defense I would accept; and, in fact, you don't have one. Which brings us to...

(re-quoted for ease of reference)

> I did respond to those things, by pointing out that your arguments were based on a misunderstanding of my model!

I have carefully re-read the entire reply that you made after the comment containing the two examples I accused you of failing to respond to.

Those two examples are not mentioned anywhere in it. Nor is there a general statement about "my examples" as a group. It has 3 distinct passages, each of which seems to be a narrow reply to a specific thing that I said, and none of which involve these 2 examples.

Nor does it include a claim that I've misapplied your model, either generally or related to those particular examples. It does include a claim that I've misunderstood one specific part of your model that was completely irrelevant to those two examples (you deny my claim that the relevant predictions are coming from a part of the person that can't be interrogated, after flagging that you don't expect me to follow that passage due to inferential distance).

Your later replies did make general claims about me not understanding your model several times. I could make up a story where you ignored these two examples temporarily and then later tried to address them (without referencing them or saying that that was what you were doing), but that story seems neither reasonable nor likely.

Possibly you meant to write something about them, but it got lost in an editing pass?

Or (more worryingly) perhaps you responded to my claim that you had ignored them not by trying to find actions you took specifically in response to those examples, but instead by searching your memory of everything you've said for things that could be interpreted as a reply, and then reported what you found without checking it?

In any case: You did not make the response you claimed that you made, in any way that I can detect.

 

Communication is tricky!

Sometimes both parties do something that could have worked, if the other party had done something different, but they didn't work together, and so the problem can potentially be addressed by either party. Other times, there's one side that could do something to prevent the problem, but the other side basically can't do anything on their own. Sometimes fixing the issue requires a coordinated solution with actions from both parties. And in some sad situations, it's not clear the issue can be fixed at all.

It seems to me that these two incidents both fall clearly into the category of "fixable from your side only". Let's recap:

(1) When you talked about your no-anger fight, you had an argument against my model, but you didn't state it explicitly; you relied on me to infer it. That inference turned out to be intractable, because you had a misunderstanding about my position that I was unaware of. (You hadn't mentioned it, I had no model that had flagged that specific misunderstanding as being especially likely, and searching over all possible misunderstandings is infeasible.)

There's an obvious, simple, easy, direct fix from your side: State your arguments explicitly. Or at least be explicit that you're making an argument, and you expect credit. (I mistook this passage as descriptive, not persuasive.)

I see no good options from my side. I couldn't address it directly because I didn't know what you'd tried to do. Maybe I could have originally explained my position in a way that avoided your misunderstanding, but it's not obvious what strategy would have accomplished that. I could have challenged your general absence of evidence sooner--I was thinking it earlier, but I deferred that option because it risked degrading the conversation, and it's not clear to me that was a bad call. (Even if I had said it immediately, that would presumably just accelerate what actually happened.)

If you have an actionable suggestion for how I could have unilaterally prevented this problem, please share.

(2) In the two examples I complained you didn't respond to, you allege that you did respond, but I didn't notice and still can't find any such response.

My best guess at the solution here is "you need to actually write it, instead of just imagining that you wrote it." The difficulty of implementing that could range from easy to very hard, depending on the actual sequence of events that lead to this outcome. But whatever the difficulty, it's hard to imagine it could be easier to implement from my side than yours--you have a whole lot of relevant access to your writing process that I lack.

Even assuming this is a problem with me not recognizing it rather than it not existing, there are still obvious things you could do on your end to improve the odds (signposting, organization, being more explicit, quoting/linking the response when later discussing it). Conversely, I don't see what strategy I could have used other than "read more carefully," but I already carefully re-read the entire reply specifically looking for it, and still can't find it.

 

I understand it's possible to be in a situation where both sides have equal quality but both perceive themselves as better. But it's also possible to be in a situation where one side is actually better and the other side falsely claims it's symmetrical. If I allowed a mere assertion of symmetry from the other guy to stop me from ever believing the second option, I'd get severely exploited. The only way I have a chance at avoiding both errors is by carefully examining the actual circumstances and weighing the evidence case-by-case.

My best judgment here is that the evidence weighs pretty heavily towards the problems being fixable from your side and not fixable from my side. This seems very asymmetrical to me. I think I've been as careful as I reasonably could have been, and have invested a frankly unreasonable amount of time into triple-checking this.

 

Before I respond to your other points, let me pause and ask if I have convinced you that our situation is actually pretty asymmetrical, at least in regards to these examples? If not, I'm disinclined to invest more time.

My Empathy Is Rarely Kind
Dweomite · 1mo · 20

> I don't think that's fair. For one, your model said you need anger in order to retaliate, and I gave an example of how I didn't need anger in order to retaliate.

I didn't respond to this because I didn't see it as posing any difficulty for my model, and didn't realize that you did.

I don't think you need anger in order to retaliate. I think anger means that the part of you that generates emotions (roughly, Kahneman's system 1) wants to retaliate. Your system 2 can disagree with your system 1 and retaliate when you're not angry.

Also, your story didn't sound to me like you were actually retaliating. It sounded to me like you were defending yourself, i.e. taking actions that reduced the other guy's capability of harming you. Retaliation (on my model) is when you harm someone else in an effort to change their decisions (not their capabilities), or the decisions of observers.

So I'm quite willing to believe the story happened as you described it, but this was 2 steps removed from posing any problem to my model, and you didn't previously explain how you believed it posed a problem.

I also note that you said "for one" (in the quote above) but then there was no number two in your list.

> If you wait to see signs that the person is being forced to choose between changing their own mind or ignoring data, then you have a much more solid base.

I do see a bunch of signs of that, actually:

  • I claimed that your example of your friend being afraid until their harness broke seems to be better explained by my model than yours, because that would be an obvious time for the recommended action to change but a really weird time for his prediction error to disappear. You did not respond to this point.
  • I claimed that my model has an explanation for how different negative emotions are different and why you experience different ones in different situations, and your model seemingly does not, and this makes my model better. You did not respond to this point.
  • I asked you if you had a way of measuring whatever you mean by "prediction error", so that we could check how well the measurements fit your model. You told me to use my own feelings of surprise. When I pointed out that doesn't match your model, you said that you meant something different, but didn't clarify what you meant, and did not provide a new answer to the earlier question about how you measure "prediction error". This looks like you saying whatever deflects the current point without keeping track of how the current point is related to previous points.
    • Note that I don't actually need to understand what you mean in order for the measurement to be interesting. You could hand me a black box and say "this measures the thing I'm talking about" and if the black box produces measurements that correlate with your predictions that would be interesting even if I have no clue how the black box works (as long as I don't see an uninteresting way of deriving your predictions from its inputs). But you haven't done this, either.
  • I gave an example where I made an explicit prediction, and then was angry when it came true. You responded by ignoring my example and substituting your own hypothetical example where I made an explicit prediction and then was angry when it was falsified. This looks like you shying away from examples that are hard for your theory to explain and instead rehearsing examples that are easier.
  • You have claimed that there's evidence in your other writing, but have refused to prioritize it so that I can find your best evidence as quickly as possible. This looks like an attempt to dissuade me from checking your claims by maximizing the burden of effort placed on me. In a cooperative effort of truth-seeking, you ought to be the one performing the prioritization of your writing because you have a massive advantage in doing so.
  • Many of your responses seem like you are using my points to launch off on a tangent, rather than addressing my point head-on.

> So "Yes, I'm talking about our models of how the world should work", and also that is necessarily the same as our models of how the world does work -- even if we also have meta models which identify the predictable errors in our object level models and try to contain them.

This seems like it's just a simple direct contradiction. You're saying that model X and model Y are literally the same thing, but also that we keep track of the differences between them. There couldn't be any differences to track if they were actually the same thing.

I also note that you claimed these are "necessarily" the same, but provided no reasoning or evidence to back that up; it's just a flat assertion.

> At the same time, I'm curious if you've thought about how it looks from my perspective. You've written intelligent and thoughtful responses which I appreciate, but are you under the impression that anything you've written provides counter-evidence? Do you picture me thinking "Yes, that's what I'm saying" before you argue against what you think I'm saying?

There are some parts of your model that I think I probably roughly understand, such as the fact that you think there's some model inside a person making predictions (but it's not the same as the predictions they profess in conversation) and that errors in these predictions are a necessary precondition to feeling negative emotions. I think I can describe these parts in a way you would endorse.

There are some parts of your model that I think I probably don't understand, like where is that model actually located and how does it work.

There are some parts of your model that I think are incoherent bullshit, like where you think "should" and "is" models are the same thing but also we have a meta-model that tracks the differences between them, or where you think telling me to pay attention to my own feelings of surprise makes any sense as a response to my request for measurements.

I don't think I've written anything that directly falsifies your model as a whole--which I think is mostly because you haven't made it legible enough.

But I do think I've pointed out:

  • several ways in which my model wins Bayes points against yours
  • several ways that your model creates more friction than mine with common-sensical beliefs across other domains
  • several ways in which your own explanations of your model are contradictory or otherwise deficient
  • that there is an absence of support on your side of the discussion

I don't think I require a better understanding of your model than I currently have in order for these points to be justified.
