flowerfeatherfocus


Can you say what position you recommend instead? Is it just opining publicly about everything, with no regard to how taboo it is?

Is there another strategy you prefer? Afaict the options are:

1) Have public taboo beliefs.

2) Have private beliefs that you lie about. 

3) Remain deliberately agnostic about taboo topics that aren't important enough to justify the risk of investigating.

4) Get forever lucky, such that every taboo topic you investigate results in you honestly arriving at an allowed belief.

Whether 1) is at all compatible with having other career goals is a fact of the territory, and I expect that in the US in 2024 there are topics where holding taboo beliefs could outright end your career, for many values of "career". (Leaving open whether any such beliefs are true; per the topic of this post, though, that's not something you can learn without taking risks.)

2) seems even more prone to the effect you describe than 3).

My guess is you're making a bid for 1), but I feel like a case for that should weigh the costs of believing X against the costs of agnosticism about X, rather than rest on a sweeping heuristic argument. (Where maybe the cost of agnosticism about X includes adjacent topics Y that you'll either have to fold into your agnosticism or else eat the cost of ~X connotations for, though I'm skeptical of how often this will come up, and per my comment here I expect the Y->~X taboos will often be much smaller than the ~X taboo.)

I expect this effect to be weaker than you're suggesting, especially if Y is something you in fact independently care about, and not an otherwise unimportant proximal detail that could reasonably be interpreted as a "just asking questions" means of arguing for ~X. I'm struggling to think of a particularly illustrative X and Y, but consider X="COVID was not a lab leak", which seemed lightly taboo to disagree with in 2020. Here's a pair of tweets you could have sent in 2020:
1. "I think COVID was probably a lab leak."
2. "I don't know whether COVID was a lab leak. (In fact for now I'm intentionally not looking into it, because it doesn't seem important enough to outweigh the risk of arriving at taboo beliefs.) But gain-of-function research in general is unacceptably risky, in a way that makes global pandemic lab leaks a very real possibility, and we should have much stronger regulations to prevent that."

I expect the second one would receive notably less pushback, even though it defends Y="gain-of-function research is unacceptably risky" and suggests that Y provides evidence for ~X.

I also frequently find myself in this situation. Maybe "shallow clarity"?

A bit related, "knowing where the 'sorry's are" from this Buck post has stuck with me as a useful way of thinking about increasingly granular model-building.

Maybe a productive goal to have when I notice shallow clarity in myself is to look for the specific assumptions I'm making that the other person isn't, and either
a) try to grok the other person's more granular understanding if that's feasible, or

b) try to update the domain of validity of my simplified model / notice where its predictions break down, or

c) at least flag it as a simplification that's maybe missing something important.

It seems this isn't true, excepting only the title and the concluding question. FWIW this wasn't at all obvious to me either.

Separate from the specific claims, it seems really unhelpful to say something like this in such a deliberately confusing, tongue-in-cheek way. Being this unclear is surely a strategic mistake, and it also just seems mean-spirited to blur the line between sarcasm and sincerity in such a bleak and extremely confident write-up, given that lots of readers regard you as an authority and take your thoughts on this subject seriously.

I’ve heard from three people who have lost the better part of a day or more trying to mentally disengage from this ~shitpost. Whatever you were aiming for, it's hard for me to imagine how this hasn't missed the mark.

Yeah, agreed :) I mentioned $\omega - 1$ existing as a surreal in the original comment, though more in passing than epsilon. I guess the name Norklet more than anything made me think to mention epsilon--it has a kinda infinitesimal ring to it. But agreed that $\omega - 1$ is a way better analog.

This is great! It reminds me a bit of ordinal arithmetic, in which addition is non-commutative. The ordinal numbers begin with all infinitely many natural numbers, followed by the first infinite ordinal, $\omega$. The next ordinal is $\omega + 1$, which is greater than $\omega$. But $1 + \omega$ is just $\omega$.
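To spell out why (this little derivation is my addition, using the standard order-type picture of ordinal addition, where $\alpha + \beta$ means "a copy of $\alpha$ followed by a copy of $\beta$"):

$$
\begin{aligned}
\omega + 1 &:\quad 0 < 1 < 2 < \cdots < \star && \text{(a copy of $\omega$ with a new greatest element: a genuinely new order type)} \\
1 + \omega &:\quad \star < 0 < 1 < 2 < \cdots && \text{(relabel $\star \mapsto 0$ and $n \mapsto n + 1$, and this is just $\omega$ again)}
\end{aligned}
$$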

Subtraction isn't canonically defined for the ordinals, so $\omega - 1$ isn't a thing, but there's an extension of the ordinal numbers called the surreal numbers where it does exist. Sadly addition is defined differently on the surreals, and there it is commutative. $\omega - 1$ does exist though, and as with Norahats, $(\omega - 1) + 1$ does equal $\omega$.
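In Conway's $\{\,\text{left options} \mid \text{right options}\,\}$ notation (my gloss, not something from the post; it's the standard presentation in On Numbers and Games), these are

$$
\omega = \{\, 0, 1, 2, 3, \ldots \mid \,\}, \qquad \omega - 1 = \{\, 0, 1, 2, 3, \ldots \mid \omega \,\},
$$

so $\omega - 1$ is greater than every natural number but less than $\omega$, and surreal addition gives $(\omega - 1) + 1 = \omega$.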

The surreals also contain the infinitesimal number $\varepsilon$, which is greater than zero but less than any positive real number. It's defined as the number between $0$ on the left and all members of the infinite sequence $1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \ldots$ on the right. Not exactly Norklet, but not too far away :)
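In the same bracket notation (again my addition):

$$
\varepsilon = \{\, 0 \mid 1, \tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{8}, \ldots \,\},
$$

and, if I've got Conway's construction right, surreal multiplication gives $\varepsilon \cdot \omega = 1$, so $\varepsilon$ is exactly $1/\omega$.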

(h/t Alex_Altair, whose recent venture into this area caused me to have any information whatsoever about it in my head)