  • Why do I not always have conscious access to my inner parts? Why, when speaking with authority figures, might I have a sudden sense of blankness?
  • Recently I've been thinking about this reaction in the frame of 'legibility', à la Seeing Like a State. States would impose organizational structures on societies that were easy to see and control - they made the society more legible to the actors who ran the state - but these organizational structures were often bad for the people in the society.
    • For example, census data, standardized weights and measures, and uniform languages make it easier to tax and control the population. [Wikipedia]
  • I'm toying with applying this concept across the stack.
    • If you have an existing model of people being made up of parts [Kaj's articles], I think there's a similar thing happening. I notice I'm angry but can't quite tell why or get a conceptual handle on it - if it were fully legible and accessible to the conscious mind, it would be much easier to apply pressure and control that 'part', regardless of whether the control I'm exerting is good. So instead, it remains illegible.
    • A level up, in a small group conversation, I notice I feel missed, like I'm not being heard in fullness, but someone else directly asks me about my model and I draw a blank, like I can't access this model or share it. If my model were legible, someone else would get more access to it and be able to control it/point out its flaws. That might be good or it might be bad, but if it's illegible it can't be "coerced"/"mistaken" by others.
    • One more level up: I initially went down this track of thinking for a few reasons, one of which was wondering why prediction/forecasting systems are so hard to adopt within organizations. Operationalization of terms is difficult and it's hard to get a precise enough question that everyone can agree on, but it's also very 'unfun' to deal with uncertain terms (people are much more likely to not predict at all than to predict with huge uncertainty). I think the legibility concept comes into play - I am reluctant to put out a term that is part of my model of the world and attach real points/weight to it, because now there's this "legible leverage point" on me.
      • I hold this pretty loosely, but there's something here that rings true and is similar to an observation Robin Hanson made around why people seem to trust human decision makers more than hard standards.
  • This concept of personal legibility seems associated with the concept of bucket errors, in that theoretically sharing a model and acting on that model are distinct actions - except I expect legibility concerns are often highly warranted (things might be out to get you).

Related: Reason as memetic immune disorder

I like the idea that having some parts of you protected from yourself makes them indirectly protected from people or memes who have power over you (and want to optimize you for their benefit, not yours). Being irrational is better than being transparently rational when someone is holding a gun to your head. If you could do something, you would be forced to do it (against your interests), so it's better for you if you can't.

But, what now? It seems like rationality and introspection are a bit like defusing a bomb -- great if you can do it perfectly, but it kills you when you do it halfway.

It reminds me of a fantasy book which had a system of magic where wizards could achieve 4 levels of power. Being known as a 3rd level wizard was a very bad thing, because all 4th level wizards were trying to magically enslave you -- to get rid of a potential competitor, and to gain a powerful slave (I suppose the magical cost of enslaving someone didn't grow proportionally to the victim's level).

To use an analogy, being biologically incapable of reaching 3rd level of magic might be an evolutionary advantage. But at the same time, it would prevent you from reaching the 4th level, ever.

Thanks for including that link - seems right, and reminded me of Scott's old post Epistemic Learned Helplessness

The only difference between their presentation and mine is that I’m saying that for 99% of people, 99% of the time, taking ideas seriously is the wrong strategy

I kinda think this is true, and it's not clear to me from the outset whether you should "go down the path" of getting access to level 3 magic given the negatives.

Probably good heuristics are proceeding with caution when encountering new/out-there ideas, remembering you always have the right to say no, finding trustworthy guides, etc.

  • Yes And is an improv technique where you keep the energy in a scene alive by going w/ the other person's suggestion and adding more to it. "A: Wow is that your pet monkey? B: Yes and he's also my doctor!"
  • Yes And is generative (creates a lot of output), as opposed to Hmm No which is critical (distills output)
  • A lot of the Sequences is Hmm No
  • It's not that Hmm No is wrong, it's that it cuts off future paths down the Yes And thought-stream.
  • If there's a critical error at the beginning of a thought that will undermine everything else then it makes sense to Hmm No (we don't want to spend a bunch of energy on something that will be fundamentally unsound). But if the later parts of the thought stream are not closely dependent on the beginning, or if it's only part of the stream that gets cut off, then you've lost a lot of potential value that could've been generated by the Yes And.
  • In conversation Yes And is much more fun, which might be why the Sequences are important as a corrective (yeah, look, it's not fun to remember about biases, but they exist and you should model/include them)
  • Write drunk, edit sober. Yes And drunk, Hmm No in the morning.

Neat, hadn't seen that - thanks

I have a cold, which reminded me that I want fashionable face masks to catch on so that I can wear them all the time in cold-and-flu season without accruing weirdness points.

Looks like the Monkey's Paw curled a finger here ...

... my god...

  • Cumulative Y2K readiness spending was approximately $100 billion, or about $365 per U.S. resident.
  • Y2K spending started as early as 1995, and appears to have peaked in 1998 and 1999 at about $30 billion per year.

https://www.commerce.gov/sites/default/files/migrated/reports/y2k_1.pdf
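
As a quick arithmetic sanity check that those two figures are consistent (a sketch; the only number below not from the report is the 2000 U.S. census count of roughly 281 million, cited from memory):

```python
# Check that the reported total and per-resident figures line up.
total_spending = 100e9  # cumulative Y2K readiness spending, USD (from the report)
per_resident = 365      # reported spending per U.S. resident, USD (from the report)

implied_population = total_spending / per_resident
print(f"Implied U.S. population: {implied_population:,.0f}")
# ~274,000,000 -- close to the ~281M counted in the 2000 census.
```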

Depression as a concept doesn't make sense to me. Why on earth would it be fitness-enhancing to have a state of withdrawal, retreat, collapse where a lack of energy prevents you from trying new things? I've brainstormed a number of explanations:

    • depression as chemical imbalance: a hardware-level failure has occurred, maybe randomly, maybe because of an "overload" of sensation
    • depression as signaling: withdrawal and retreat from the world indicates a credible signal that I need help
    • depression as retreat: the environment has become dangerous and bad and I should withdraw from it until it changes.

I'm partial to the explanation offered by the Predictive Processing Model, that depression is an extreme form of low confidence. As SSC writes:

imagine the world’s most unsuccessful entrepreneur. Every company they make flounders and dies. Every stock they pick crashes the next day. Their vacations always get rained-out, their dates always end up with the other person leaving halfway through and sticking them with the bill.
What if your job is advising this guy? If they’re thinking of starting a new company, your advice is “Be really careful – you should know it’ll probably go badly”.
if sadness were a way of saying "Things are going pretty badly, maybe be less confident and don't start any new projects", that would be useful...
Depression isn’t normal sadness. But if normal sadness lowers neural confidence a little, maybe depression is the pathological result of biological processes that lower neural confidence.
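
A toy way to see what that framing implies for behavior (my own sketch, not from the post; the `should_start` rule and all the numbers are illustrative assumptions):

```python
def should_start(est_success_prob, payoff, cost, confidence=1.0):
    """Start a project iff its expected value is positive.
    `confidence` globally deflates the agent's success estimates,
    mimicking the 'lowered neural confidence' framing."""
    return confidence * est_success_prob * payoff - cost > 0

# (success probability, payoff, cost) -- all made-up numbers
projects = [(0.6, 10, 4), (0.3, 30, 5), (0.8, 5, 3)]

for conf in (1.0, 0.4):
    started = [p for p in projects if should_start(*p, confidence=conf)]
    print(f"confidence={conf}: start {len(started)} of {len(projects)} projects")
# confidence=1.0 starts all three; confidence=0.4 starts none --
# a globally deflated confidence parameter looks like "never begin anything".
```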

But I still don't understand why the behaviors we often see with depression - isolation, lack of energy - are 'longterm adaptive'. If a particular policy isn't working, I'd expect to see more energy going into experimentation.

[TK. Unfinished because I accidentally clicked submit and haven't finished editing the full comment]

I think you're asking too much of evolutionary theory here. Human bodies do lots of things that aren't longterm adaptive -- for example, if you stab them hard enough, all the blood falls out and they die. One could interpret the subsequent shock, anemia, etc. as having some fitness-enhancing purpose, but really the whole thing is a hard-to-fix bug in body design: if there were mutant humans whose blood more reliably stayed inside them, their mutation would quickly reach fixation in the early ancestral environment.

We understand blood and wound healing well enough to know that no such mutation can exist: there aren't any small, incrementally-beneficial changes which can produce that result. In the same way, it shouldn't be confusing that depression is maladaptive; you should only be confused if it's both maladaptive and easy to improve on. Intuitively it feels like it should be -- just pick different policies -- but that intuition isn't rooted in fine-grained understanding of the brain and you shouldn't let it affect your beliefs.

On a group selection level it might make lots more sense to have certain people get into states where they're very unlikely to procreate.

One of the findings of data-driven models of evolution over the last few decades is that group selection is mostly not strong enough to create such effects.

Hmm, which models?

My views come more from listening to experts than from looking at specifics. When I studied bioinformatics, that's basically what we were told about the results of researching genetics with computer models. Afterwards, when talking to experts, I also heard the same sentiment that most claims of group selection shouldn't be trusted.

I too have heard that group selection is not well believed; it just seems so out of sync with my understanding of systems theory that I'm skeptical about taking people's word on it.

Since we can sequence genomes, we know how many changes are needed to account for the differences between organisms. We know that genetic drift destroys features for which there isn't selection pressure to keep them, like our ability to make our own Vitamin C.

It seems to me like the moving pieces needed for computer models are there, so I would trust expert opinion on the topic more strongly than would have been warranted 30 years ago, when opinions were mostly based on intellectual arguments.
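
For intuition, here is a minimal sketch of the kind of model being gestured at: a neutral Wright-Fisher simulation, where an allele under no selection pressure (think the broken Vitamin C pathway) wanders to loss or fixation by chance alone. Population size, starting frequency, and the generation cap are all illustrative assumptions:

```python
import random

def wright_fisher(pop_size=500, start_freq=0.9, max_gens=10_000):
    """Neutral Wright-Fisher drift: each generation, pop_size alleles
    are resampled from the previous generation's frequency. With no
    selection, the allele eventually fixes or is lost purely by chance."""
    freq = start_freq
    for gen in range(max_gens):
        # Binomial resampling of the next generation's allele count.
        count = sum(random.random() < freq for _ in range(pop_size))
        freq = count / pop_size
        if freq in (0.0, 1.0):  # absorbed: lost or fixed
            return gen, freq
    return max_gens, freq

random.seed(0)
gen, freq = wright_fisher()
if freq in (0.0, 1.0):
    print(f"Neutral allele {'fixed' if freq == 1.0 else 'lost'} after {gen} generations")
else:
    print(f"Still segregating at frequency {freq:.2f} after {gen} generations")
```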

Is the clearest "win" of a LW meme the rise of the term "virtue signaling"? On the one hand I'm impressed w/ how dominant it has become in the discourse, on the other... maybe our comparative advantage is creating really sharp symmetric weapons...

Do I understand it correctly that you believe the words "virtue signaling", or at least their frequent use, originates on LW? What is your evidence for this? (Do you have a link to what appears to be the first use?)

In my opinion, Robin Hanson is a more likely suspect, because he talks about signaling all the time. But I would not be surprised to hear that someone else used that idiom first, maybe decades ago.

In other words, is there anything more than "I heard about 'virtue signaling' first on LW"?

https://twitter.com/esyudkowsky/status/910941417928777728

I remember seeing other claims/analysis of this but don't remember where

When EY says "our community" he means more than just LW; he means the whole rationalist diaspora as well, among which Robin Hanson can be counted.