Comment author: Bound_up 08 October 2016 04:57:21PM 0 points [-]

I'm looking for an SSC post.

Scott talks about how a friend says he always seems to know what's what, and Scott says "Not really; I'm the first to admit my error bars are wide and that my theories are speculative, often no better than hand-waving."

They go back and forth, with Scott giving precise reasons why he's not always right, and then he says "...I'm doing it right now, aren't I?"

Something like that. Can anybody point me to it?

Comment author: jimmy 08 October 2016 07:11:22PM 3 points [-]

An excellent post, but not Scott :)

http://mindingourway.com/confidence-all-the-way-up/

Comment author: Gram_Stone 05 July 2016 10:32:26PM 1 point [-]

I get the kinds of things that you're talking about, but we're strictly talking about the argument "If Gram had been a drug addict, then he would know what kind of plan I actually need." Even if we take as an assumption that I have been a drug addict, it does not follow that I am better at making plans that turn addicts into nonaddicts. If anything, I probably get the epistemic advantage from not being wireheaded.

This is not about saying that there are times when someone's feelings don't have instrumental or moral weight. This is about saying that sometimes, people will make you think that an argument that includes knowledge of someone's values as a proposition is itself a value judgment, turning something that should not be off limits into something that is. I can say, "No, I would not be better able to help you if I became a drug addict. That argument can be false even if its premises are assumed true." If I stop talking about logical validity, which is always fair game, and start being someone who blows off other people's feelings for no good reason, then cut my head off.

It's perhaps worth mentioning that this was a short encounter after a long separation, so it was an urgent situation where you cannot allow an addict to argue for credibility from expertise.

Let me know if this doesn't address your concerns in any way.

Comment author: jimmy 07 July 2016 05:56:26PM 1 point [-]

I’m not saying you’re wrong. I’m not saying that you can afford to let logically invalid arguments go unchallenged as if there was nothing wrong with them. Or that emotions ought to be free from criticism or something. Or that you haven’t earned your confidence or that her listening to you wouldn’t be massively beneficial for her. And I certainly don’t see you as someone who blows off other people’s feelings for no reason - in fact, a big reason I wanted to respond to your comment was because I got the exact opposite impression from you. I’m sorry if it came across otherwise.

If you want I can try to explain more carefully what I was getting at, but I certainly don't want to drag you into a conversation like this if it's not something you want to get into here or now. I'm actually in a somewhat similar situation myself so I’m well aware that it’s not always the time for that kind of thing.

Comment author: Gram_Stone 05 July 2016 04:59:05PM 5 points [-]

You have special hardware for simulating others' cognition. Neurologically, imagining how someone feels is a completely different thing from imagining a collection of 35 apples.


I can't tell what context you're getting this from, but I've seen "You don't understand how I feel!" used as bad epistemology.

My sister's a heroin addict, and she'll use the fact that I've never been addicted to heroin or experienced opioid withdrawals as a debate tactic. It goes something like:

  1. Only plans to kill my sister's addiction that account for my sister's feelings will work.

  2. Only my sister can fully account for my sister's feelings.

  3. Therefore, only my sister can invent successful plans to kill her addiction.

  4. As a corollary, anyone else's plans to kill my sister's addiction will fail.

It is known that heroin addicts invent good-looking plans for killing their addictions, but do not invent good plans for killing their addictions. By this argument she can ensure that all plans to kill her addiction will always eventually fail.

The epistemically correct response, even if it's not necessarily persuasive in this form (for otherwise I would have persuaded her), is to say that I don't actually need to experience what she has to come up with good plans for killing addictions. "Not knowing what it's like to be an addict doesn't make me bad at making decisions about addictions," pattern-matches to, "I don't empathize with you," and, if she really wasn't listening, "I claim to know more about your own phenomenal experiences than you."

Sometimes how someone feels really doesn't matter, in really specific cases. That is, sometimes it's not necessary for an argument to follow. If you let people conflate this specific and useful objection with a more general sort of paternalism where you always ignore the relevance of everyone's feelings, then you might flinch from being right or doing right.

Comment author: jimmy 05 July 2016 08:16:31PM 3 points [-]

Eek, I'd be really really careful with arguments like that.

If she doesn't agree that this is one of those cases where what she feels doesn't matter, why doesn't she? Maybe when she sees you as being insufficiently empathetic, it's on this level, not on the object level of how much her feelings about specific plans matter?

If she doesn't give her stamp of approval to your description of her thoughts, how would you know if you had it wrong? How would you notice if you were missing something important?

Comment author: jimmy 04 April 2016 05:00:06PM *  1 point [-]

I would definitely agree with the "I am a person who keeps promises" and "I am a person that's loyal [...]" bits, but neither of those feel the same as "I am a democrat" or even "I am a seeker of accurate world models [...]" type sentences. They're still not identities for me.

"Identities" tend to get treated as things that "have to" be true. A strong identity democrat might be insulted and defensive if you suggest that their stance on some issue is too conservative. Likewise a strong identity "rationalist" might get defensive and cook up some rationalizations if you suggest that they're biased against a certain view and not accurately seeking truth.

It's the "has to be true" part that causes problems. Am I someone that seeks accurate models, whatever they turn out to be? For the most part, yeah. Inaccurate models, especially without accurate metamodels that warn you not to use them, are pretty problematic and it really takes a twisted circumstance to make it worthwhile. But this isn't a fact about my identity, it's a fact about the world that accurate models get me more of what I want so that I generally want accurate models (and I'm willing to sacrifice a lot of "fit in without hiding beliefs" and the like in order to get them).

"I am a person who keeps promises" is likewise just a matter of fact. I am. Not being so would mean people have no reason to trust me and that would be very bad for me - so I make sure I don't give people not to trust me. It's still allowed to be untrue and I'm always allowed to consider breaking promises. It's just that doing so would be dumb, so I don't. Even when I could get away with it, since that weakens the story and my ability to credibly signal that it's true and I probably wouldn't be able to pull off the "forgot my lunch money" version of parfit's hitchhiker anymore.

In short, I have no problems having beliefs about what kind of person I am, but without exception I don't want motivations to believe - even when I have motivations to make it true.

In response to Tonic Judo
Comment author: Dustin 02 April 2016 09:36:05PM *  2 points [-]

I've had similar sort of conversations (with me on your side) for 25 years. I've received feedback many times that I'm a good listener and I've never gotten any feedback that I come across as an asshole.

There's been very little change in the people with whom I've had these conversations except for them to acknowledge that we'd had the conversation in the past and it hadn't changed their emotional reaction to whatever situation.

So, for example, if my past experience is any guide (and I fully acknowledge the tentativeness of this), your friend will have the exact same reaction next time someone takes his comb but with "yes, I remember our conversation from last time" tacked on to the end.

In general, people don't seem to be very good at reasoning themselves out of non-constructive responses.

In response to comment by Dustin on Tonic Judo
Comment author: jimmy 03 April 2016 07:17:36PM *  3 points [-]

Say someone takes the guy's comb again and he has the same emotional reaction with "yes, I remember our conversation from last time" tacked onto the end. How do you think Gram_Stone would respond to that? How would you?

I think it's a big mistake to take it as an example of him "being bad at reasoning himself out of non-constructive responses". To do so frames the problem as external to you and internal to him - that is, something not under your direct control.

If we go back and look at Gram's explanation for why what he did worked, it has to do with giving consideration to the idea that the outburst is warranted and meeting them where they're at so that rational argument has a chance to reach them at an emotional level. Framing them as irredeemably irrational not only writes the problem off as insoluble (and therefore mentally stop-signs you before you can get to the answer) but it does so by failing to do the exact thing that got Gram his results (remember, his friend started off angry and ended up laughing - his arguments did connect on an emotional level, and even if he gets angry again next time his comb is taken, I bet ya he didn't get angry again about that instance of comb stealing!)

Perhaps we're of the belief that it wasn't just this instance of anger that is misguided but rather all instances (and that he will continue to have these types of emotional responses), but this is a very different thing than "he keeps emotionally 'forgetting' what we talked about!". The latter just isn't true. He won't get angry about this offense again. The issue is that you think the arguments should cause him to generalize further than he is generalizing, which is a very very different disagreement than the initial one over whether his current anger was justified. If you track these precisely, you'll find that people never emotionally forget, but they will fail to make connections sometimes and they will disagree with you on things that you thought obviously followed.

On emotional responses like these, it turns out that the issues are more complicated and inherently harder to generalize than you'd naively think. Perhaps it's partly me failing the art of going meta, but in my experience, training someone in empathy (for example) requires many many "and this response works here too" experiences before they all add up to an expectation for empathy to work in a new situation that seems unlike anything they've seen it work in before.

There is an important caveat here which is that if people never actually emotionally change their minds but merely concede that they cannot logically argue their emotions, they'll continue to have their emotions. It's not emotionally forgetting because they never changed their emotions, but it can seem that way if they did start to suppress them once they couldn't justify them. The important thing here is to look for and notice signs of suppression vs signs of shifting. That will tell you whether you've ratcheted in some progress or not (and therefore whether you're being sufficiently empathetic).

If you're constantly getting feedback as a good listener and never feedback that you're an asshole, you're probably falling into this error mode at least sometimes, because often the mental/emotional spaces people need to be pushed into in order to change their emotional mindsets are inherently "assholish" things. However, this isn't a bad thing. In those cases, the feedback should look like this example from Frank Farrelly's book "Provocative Therapy":

"(Sincerely, warmly.): You're the kindest, most understanding man I ever met in my entire life - (Grinning) wrapped up in the biggest son of a bitch I ever met. (T. and C. laugh together.)."

In my opinion, by far the most important part of learning this art is knowing that it exists and that any failures are your own. Once you have that internalized, picking up the rest kinda happens automatically.

Comment author: LessWrong 08 February 2016 12:07:36PM 1 point [-]

Alright. I give up. I'm now convinced my methodology was bad. I should read a book.

Upvoted for updating my beliefs.

Comment author: jimmy 09 February 2016 11:53:08PM *  0 points [-]

I agree with what Christian is saying, but that doesn't make Manson wrong either.

The difference is in the context. There's a lot of nuance to it, but one big piece that hasn't been mentioned yet is that saying things in person allows you use nonverbal communication to signal things you cannot signal in text.

A message like "You're cute. I'd like to get to know you" opens you up to rejection, and a willingness to face this unafraid is attractive because it's a fairly credible way of showing that you must have reason to think you're worthy of her - stuff like that. Online, anyone can shoot off a "You're cute. I'd like to get to know you" without having to be able to back it up. Even if you can't say with a straight face that you're good enough, you can hit ctrl-v and send in the hope that she bites anyway - which is why the line won't have the same oomph behind it as it can in person.

Comment author: jimmy 15 December 2015 11:53:00PM *  9 points [-]

Thank you.

One thing I've found to be helpful when continuously deciding to look into the pain is remembering that I don't have to (or don't have to right now, at least). Staring into the pain is exhausting, and trying to force yourself to do it faster than you're ready for can add to the pile of pain to deal with.

At the same time, the reasons to do it (and do it now) don't go away. It's just easier to do it without distraction or self-deception when you're allowed to take a break if need be.

Comment author: Lumifer 30 November 2015 05:44:36PM 0 points [-]

From the inside it can be very tough to tell, but from the outside they're clearly wrong about them all being low probability.

I don't know about that. That clearly depends on the situation -- and while you probably have something in mind where this is true, I am not sure this is true in the general case. I am also not sure how you would recognize this type of situation without going circular or starting to mumble about Scotsmen.

if you learn you're wrong, if you can't learn how you're wrong and in which direction to update even after thinking about it

What do you mean, can you give some examples? Normally, if people locked themselves in a box of their own making, they can learn that the box is not really there.

The idea of making mistakes, not realizing, and then using that lack of realization as further evidence that I'm not overconfident is a trap I don't want to fall into.

That's a good point -- I agree that if you don't realize what opportunity costs you are incurring, your cost-benefit analysis might be wildly out of whack. But again, the issue is how do you reliably distinguish ex ante where you need to examine things very carefully and where you do not have to do this. I expect this distinguishing to be difficult.

"Actually thinking it through" is all well and good, but it basically boils down to "don't be stupid" and while that's excellent advice, it's not terribly specific. And "can you eat the loss?" is not helping much. For example, let's say one option is me going to China and doing a start-up there. My "internal model" says this is a stupid idea and I will fail badly. But the "loss" is not becoming a multimillionaire -- can I eat that? Well, on the one hand I can, of course, otherwise I wouldn't have a choice. On the other hand, would I be comfortable not becoming a multimillionaire? Um, let's say I would much prefer to become one :-) So should I spend sleepless nights contemplating moving to China?

Comment author: jimmy 02 December 2015 02:45:47AM 0 points [-]

I don't know about that. That clearly depends on the situation -- and while you probably have something in mind where this is true, I am not sure this is true in the general case. I am also not sure how you would recognize this type of situation without going circular or starting to mumble about Scotsmen.

I mean about the whole group of things that any given person decides or would decide is "low probability". I see plenty of "p=0" cases being true, which is plenty to show that the group "p=0" as a whole is overconfident - I'm not trying to narrow it down to a group where they're probably wrong, just overconfident.

What do you mean, can you give some examples? Normally, if people locked themselves in a box of their own making, they can learn that the box is not really there.

It's not that they can't learn that the box isn't really there, it's that even if they know it's not there they don't know how to climb out of it.

There are a lot of things I know I might be wrong about (and care about) that I don't look into further. It's not that I think it's unlikely that there's anything for me to find, but that it's unlikely for me to find it in the next unit of effort. Even if someone is working with an obviously broken model with no attempts to better their model, it doesn't necessarily mean they haven't seriously considered the possibility that they're wrong. It might just mean that they don't know in which direction to update and are stuck working with a shitty model.

Some things are like saying "check your shoelaces". Others are like saying "check your shoelaces" to a kid too young to know how to tie his own shoes.

"Actually thinking it through" is all well and good, but it basically boils down to "don't be stupid" and while that's excellent advice, it's not terribly specific.

Heh. Yes, it is difficult and I expect that just comes with the territory. And yes, it kinda sorta just boils down to "don't be stupid". The funny thing is that when dealing with people who know me (and therefore get the affection and intent behind it) "don't be stupid" is often advice I give, and it gets the intended results. The specificity of "you're doing something stupid right now" is often enough.

And "can you eat the loss?" is not helping much. For example, let's say one option is me going to China and doing a start-up there. My "internal model" says this is a stupid idea and I will fail badly. But the "loss" is not becoming a multimillionaire -- can I eat that? Well, on the one hand I can, of course, otherwise I wouldn't have a choice. On the other hand, would I be comfortable not becoming a multimillionaire? Um, let's say I would much prefer to become one :-) So should I spend sleepless nights contemplating moving to China?

I'd much prefer to be a multimillionaire too, yet I'm comfortable with choosing not to pursue a startup in China because I am sufficiently confident that it is not the best thing for me to pursue right now - and I'm sufficiently confident that I wouldn't change my mind if I looked into it a little further. It's not that I don't care about millions of dollars, it's that when multiplied by the intuitive chance that thinking one step further will lead to me having it, it rounds down to an acceptable loss.

If, on the other hand, when you look at it you hear this little voice that says "Eek! Millions of dollars is a lot! How do I know that I shouldn't be pursuing a china startup!?", then yes, I'd say you should think about it (or how you make those kinds of decisions) until you're comfortable eating that potential loss instead of living your life by pushing it away.
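
As a back-of-the-envelope sketch of the "rounds down to an acceptable loss" reasoning - with entirely made-up numbers, since the thread gives none:

```python
# All numbers here are illustrative assumptions, not figures from the comment.
payoff = 5_000_000          # value to me if the China startup works out
p_flip = 1e-7               # intuitive chance one more hour of thought both
                            # changes my decision AND leads to that payoff
hour_of_thought = 50        # opportunity cost of that hour, in dollars

# Expected value of thinking one step further is the payoff weighted by
# the chance that the extra thought actually secures it.
ev_of_thinking_further = payoff * p_flip   # roughly $0.50
print(ev_of_thinking_further < hour_of_thought)   # True: an acceptable loss
```

Under these assumed numbers the extra hour of deliberation costs a hundred times more than its expected return, which is what licenses eating the loss without further thought.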

You say "don't be stupid" as if it's something that we're beyond as a general rule. I see it as something that takes a whole lot of thought to figure out how not to be stupid this way. Once I started paying attention to these signs of incongruity, I started to recognizing it everywhere. Even in places that used to be or still are outside my "box".

Comment author: jimmy 01 December 2015 06:19:04PM *  4 points [-]

Precise language is great when you have a precise message. Often what we are trying to convey is itself not precise.

If words paint mental pictures, then precise language is like sending a photograph where everything is in full focus. It's great in that there's a lot of information there, but it's not always clear what to do with it and focusing on the wrong bits can get in the way of an important message.

Instead you can turn down the depth of field such that the only thing in any focus is the object you're pointing at - and even then only in sufficient detail to identify it and no more. Then you have no choice but to recognize the actual intended message because you were careful not to dilute your point with extraneous information that often comes with careless use of language.

If I'm pointing out a tiger, not only do I want to make sure you don't get distracted looking at the flowers, I want to make sure you can't argue with me over which kind of tiger it is if I happen to guess wrong.

Comment author: Lumifer 20 November 2015 04:37:09PM *  2 points [-]

Yes, I agree that people sometimes construct a box for themselves and then become terribly fearful of stepping outside this box (="this is impossible"). This does lead to them either not considering at all the out-of-the-box options or assigning, um, unreasonable probabilities to what might happen once you step out.

The problem, I feel, is that there is no generally-useful advice that can be given. Sometimes your box is genuinely constricting and you'd do much better by getting out. But sometimes the box is really the best place (at least at the moment) and getting out just means you become lunch. Or you wander in the desert hoping for a vision but getting a heatstroke instead.

You say

I don't think people take seriously the idea that taking negligible in-model probabilities seriously will pay off on net

but, well, should they? My "in-model probabilities" tell me that I'm not going to become rich by playing the lottery. Should I take the lottery idea seriously? Negligible probabilities are often (but not always) negligible for a good reason.

Given that, I am very hesitant to round p=epsilon down to p=0

Sure. But things have costs. If the costs (in time, effort, money, opportunity) are high enough, you don't care whether it's epsilon or a true zero, the proposal fails the cost-benefit test anyway.

Comment author: jimmy 28 November 2015 08:03:37PM 1 point [-]

but, well, should they?

Yes. From the inside it can be very tough to tell, but from the outside they're clearly wrong about them all being low probability. They don't check for potential problems with the model before trusting it without reservation, and that causes them to be wrong a lot. Even if your "might as well be 100%" is actually 97% - which is extremely generous - you'll be wrong about these things on a regular basis. It's a separate question of what - if anything - to do about it, but I'm not going to declare that I know there's nothing for me to do about it until I'm equally sure of that.
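
To put rough numbers on "wrong on a regular basis" - the figures below are illustrative assumptions, not anything from this thread:

```python
# Illustrative arithmetic (my numbers, not the thread's): even a generous
# 97% hit rate on claims that feel certain piles up errors quickly.
claims_per_day = 10          # assumed: routine confident judgments per day
true_accuracy = 0.97         # the "extremely generous" figure above

# Expected number of "might as well be 100%" claims that turn out false
# over a month of ordinary judgment-making.
expected_errors_per_month = claims_per_day * 30 * (1 - true_accuracy)
print(round(expected_errors_per_month))   # 9 "certain" claims wrong a month
```

Even granting 97% accuracy, anyone making a handful of confident calls a day is visibly wrong several times a month - which is the sense in which the group "p=0" is overconfident as a whole.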

I think one of the real big things that makes the answer feel like "no" is that even if you learn you're wrong, if you can't learn how you're wrong and in which direction to update even after thinking about it, then there's no real point in thinking about it. If you can't figure it out (or figure out that you can trust that you've figured it out) even when it's pointed out to you, then there's less point in listening. I think "I don't see what I can do here that would be helpful" often gets conflated with "it can't happen", and that's a mistake. The proper way to handle those doesn't involve actively calling them "zero". It involves calling them "not worth thinking about" and the like. There is nothing to be gained by writing false confidences in mental stone and much to be lost.

My "in-model probabilities" tell me that I'm not going to become rich by playing the lottery. Should I take the lottery idea seriously? Negligible probabilities are often (but not always) negligible for a good reason.

Right. With the lottery, you have more than a vague intuitive "very low odds" of winning. You have a model that precisely describes the probability of winning and you have a vague intuitive but well backed "practically certain" odds of your model being correct. If I were to ask "how do you know that your odds are negligible?" you'd have an answer because you've already been there. If I were to ask you "well how do you know that your model of how the lottery works is right?" you could answer that too because you've been there too. You know how you know how the lottery works. Winning the lottery may be a very big win, but the expected value of thinking about it further is still very low because you have detailed models and metamodels that put firm bounds on things.

At the end of the day, I'm completely comfortable saying "it is possible that it would be a very costly mistake to not think harder about whether winning the lottery might be doable or how I'd go about doing it if it were AND I'm not going to think about it harder because I have better things to do".

If I were gifted a lotto ticket and traded it for a burrito, I'd feel like it was a good trade. Even if the lottery ticket ended up winning the jackpot, I could stand there and say "I was right to trade that winning lotto ticket for a burrito" and not feel bad about it. It'd be a bit of a shock and I'd have to go back and make sure that I didn't err, but ultimately I wouldn't have any regrets.
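
A quick sketch of why that trade is right ex ante - all numbers here are hypothetical, since the thread gives none:

```python
# Hypothetical numbers for the gifted-ticket trade; none come from the thread.
jackpot = 100_000_000            # assumed jackpot size, in dollars
p_win = 1 / 292_000_000          # roughly Powerball-sized odds, as an assumption
burrito = 8                      # assumed dollar value of the burrito

# Expected value of the ticket is tiny compared to the burrito, so the
# trade is good even if this particular ticket happens to win.
ev_ticket = jackpot * p_win      # roughly $0.34
print(ev_ticket < burrito)       # the burrito is the better end of the trade

# The mob-boss version below: if the "lucky ticket" is rigged with even a
# 10% chance of winning, the same trade becomes a catastrophic mistake.
ev_rigged = jackpot * 0.10
print(ev_rigged > burrito)
```

The decision flips entirely on the probability model, not the payoff - which is why the mob-boss scenario warrants questioning the "probably" while the ordinary ticket doesn't.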

If, say, it was given to me as a "lucky ticket" with a wink and a nod by some mob boss whose life I'd recently saved... and I traded it for a freaking burrito because "it's probably 1 in 100 million, and 1 in 100 million isn't worth taking seriously"... I'd be kicking myself real hard for not taking a moment to question the "probably" when I learned that I traded a winning ticket for a burrito.

And all those times the ticket from the mob boss didn't win (or I didn't realize it won because I traded it for a burrito) would still be tremendous mistakes. Just invisible mistakes if I don't stop to think and it doesn't happen to whack me in the head. The idea of making mistakes, not realizing, and then using that lack of realization as further evidence that I'm not overconfident is a trap I don't want to fall into.

My brief attempt at "general advice" is to make sure you actually think it through and would be not just willing to but comfortable eating the loss if you're wrong. If you're not, there's your little hint that maybe you're ignoring something important.

When I point people to these considerations ("you say you're sure, so you'd be comfortable eating that loss if it turns out not to be the case?"), the vast majority of the time, when they stop deflecting and give a firm "yes" or "no", the answer is "no" - and they rethink things. There are all sorts of caveats here, but the main point stands - when it's important, most people conclude they're sure without actually checking to their own standards.

That's just not making bad decisions relative to your own best models/metamodels - you can still make bad decisions by more objective standards. This can't save you from that, but what it can do is make sure your errors stand out and don't get dismissed prematurely. In the process of coming to say "yes, and I can eat the loss if I'm wrong" you end up figuring out what kinds of things you don't expect to see and committing to the fact that your model predicts they shouldn't happen. This makes it a lot easier to notice that your model is wrong and harder to let yourself get away with pretending it isn't.
