Self10

There may have been other, unmentioned optimization targets that also need eloquence

Predictions:

  • (75%) Groups who successfully[1] adopt trust technology will economically and politically outcompete the rest of their respective societies rather quickly (less than 10 years).
  • The efficiency gains feasibly up for grabs in the first 15 years, relative to the status quo, are over 100% (75%) or over 400% (50%).
  • (66%) Society-wide adoption of trust-building tech is a practical path, perhaps the only practical path, toward sane politics in general and sane AI politics in particular.

The whole gestalt of why this is a huge affordance seems self-evident to me; it's a cognitive weakness of mine that I often don't know which parts of my thinking need to be spelled out in more words to be legible.

But one intuition is: Regular "natural" human cultures are accidental products sampled from environments where deception-heavy strategies are dominant, and this imposes large deadweight costs on all pursuits of value, including economic value, happiness, friendship, and morality. Explicitly: Most of our cognition goes into deceiving others, and the density of useful acts could be multiple times higher.

  1. ^

    i.e. build mutual understandings at least to, but ideally surpassing, the point of family-like intimacy / feeling the others as extensions of oneself

Self10

I'm not eloquent enough to express how important I think this is.

Self21

I feel like such intuitions could be developed. I'm just more uncertain where I would use this skill.

Though given how OOD it is, there could be significant alpha up for grabs.

(Q: Where would X-ray vision for cluster structures in 5-dimensional space be extraordinarily useful?)

Self10

Hmm. Yeah. It gets difficult to display points that share XY coordinates but differ in RGB coordinates.

Self60

With colors you can, in principle, display 5-dimensional data on a 2D medium without flattening.

Bottlenecks (cognitive):
- intuitively knowing the RGB values of colors you're seeing
- intuitively perceiving color differences as 3-dimensional distances

Feasible? Useful?
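A minimal sketch of the display side of this idea, assuming matplotlib and synthetic data (the column layout x, y, r, g, b and the filename are my invention):

```python
# Plot 5-D points on a 2-D scatter: position encodes dims 1-2,
# per-point RGB color encodes dims 3-5.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.random((300, 5))  # columns: x, y, r, g, b -- all in [0, 1]

fig, ax = plt.subplots()
# scatter() accepts an N x 3 array of RGB values via the `c` argument.
ax.scatter(data[:, 0], data[:, 1], c=data[:, 2:5], s=25)
ax.set_xlabel("dim 1 (x)")
ax.set_ylabel("dim 2 (y)")
fig.savefig("scatter_5d.png")
```

Note that points sharing an XY position but differing in color simply overdraw one another, which is the display difficulty mentioned above; alpha blending or jitter are common workarounds. The cognitive bottlenecks remain: the reader still has to decode the RGB channel as three honest distance dimensions.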

Self1-1

Latest in Shit Claude Says:

Credibility Enhancing Displays (CREDs)
Ideas spread not through their inherent quality but through costly displays of commitment by believers. Words are cheap; actions that would be irrational if the belief were false are persuasive.

Predictive angle: The spread of beliefs correlates more strongly with observable sacrifices made by believers than with evidence or argument quality.

Novel implication: Rationalists often fail to spread ideas despite strong arguments because they don't engage in sufficient credibility enhancing displays. Effective belief transmission requires demonstration through personal cost[1].

The easiest way for rats to do this more may be to "retain nonchalant confidence when talking about things you're certain are true, even in the face of audience skepticism".

  1. ^

    I think the "personal cost" angle is mistaken. Costly signaling only requires that the act would be costly if you didn't possess the trait.

Self*30

Aspies certainly seem to do this less!

You mean, like him as a blogger? Or as a person in real life?

The latter? Like, I subconsciously parse his blogging voice as if it were a person in my tribal surroundings, and I like/admire/relate to that virtual person; I think this is what causes some aspect of the persuasion.

I mean yes it's embarrassing, but it's what I see in myself and what seems to be most consistent with what everyone else is doing, certainly more consistent than what they claim they're doing. 

E.g. it seems rare for someone who actively dis-appreciates the Sequences not to also dislike Eliezer, for what seem like vibes-based reasons more than content-based ones.

But then again, all models are false!

If I peer into my own past, where arguably I was more autistic than today, I can see that my standards for admiration seem to have been much stricter. I basically wouldn't ever copy role models because there were no role models to copy. This may be the shape of an important caveat

Self30

They do, but the explanation proposed here matches everything I know most exactly and simply.

E.g. it became immediately clear that the Sequences wouldn't work nearly as well for me if I didn't like Eliezer.

Or the way fashion models are of course not selected for attractiveness but for more mimetic-copying-inducing, high-status traits like height/confidence/presence/authenticity.

and others

And yeah, not all of the Claude examples are good; I hadn't cherry-picked.

Self*10

More thoughts that may or may not be directly relevant

  • What's missing from my definition is that deception happens solely via "stepping in front of the camera", i.e. via the regular sensory channels of the deceived optimizer; brainwashing or directly modifying memory is not deception.
  • From this it follows that to deceive is either to cause a false pattern recognition or to prevent a correct one, and for this you indeed need familiarity with the victim's perceptual categories.

I'd like to say more re: hostile telepaths or other deception frameworks but am unsure what your working models are

Self45

I'd say weirdness is about not being predictable

Perhaps along some generalized conformity axis: being perceived as a potential risk to the social order.
