With colors you can in principle display data in 5-dimensional space on a 2D medium without flattening.
Bottlenecks (cognitive):
- intuitively knowing the RGB values of colors you're seeing
- intuitively perceiving color differences as 3-dimensional distances
Feasible? Useful?
I think it's possible (and something like this is already being used in visualizations of complex functions). Not sure if you could display e.g. a 5D hypercube this way (for the same reason there's no function whose graph looks like a square)
Hmm. Yeah. It gets difficult to display points with the same XY coordinates and different RGB coordinates
The coordinates x, y, R, G, B are independent, so it should be possible. I think the problem is just our intuition, which isn't optimized for perceiving color like three distances in space, or even like three separate values at all.
I feel like such intuitions could be developed; I'm more uncertain where I would use this skill.
Though given how OOD it is there could be significant alpha up for grabs
(Q: Where would X-Ray vision for cluster structures in 5-dimensional space be extraordinarily useful?)
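For concreteness, here's a minimal sketch of the x, y, R, G, B encoding: a pure-Python mapping from a 5D point in the unit hypercube to a 2D position plus a color (the function name and rounding convention are my own; something like matplotlib's `scatter(..., c=colors)` could then render it).

```python
def project_5d(point):
    """Map a 5D point with coordinates in [0, 1] to a 2D position
    plus an RGB color, encoding dims 3-5 as color channels."""
    x, y, r, g, b = point
    position = (x, y)
    color = (round(r * 255), round(g * 255), round(b * 255))
    return position, color

# Two points sharing (x, y) stay distinguishable only via color --
# the limitation raised above.
pos_a, col_a = project_5d((0.4, 0.4, 1.0, 0.0, 0.0))
pos_b, col_b = project_5d((0.4, 0.4, 0.0, 0.0, 1.0))
print(pos_a == pos_b, col_a, col_b)  # True (255, 0, 0) (0, 0, 255)
```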
Had a minor braincoom discovering Mimetic Theory
Best model/compression I took away is a mental image evoked by "Desire is triangular, not linear" depicting how desires are created via copying
Claude 3.7 explains some basics:
Desire is triangular, not linear - We don't want things directly; we want what others want. Every desire has a hidden "model" we're unconsciously imitating.
Conversion happens through the model - We convert to a new worldview by imitating someone we admire, not through intellectual persuasion. Reason follows mimetic conversion.
The interdividual self - Girard rejects the autonomous individual entirely. The "self" is actually a collection of desires borrowed from others. What we call "personality" is just the unique pattern of our imitations.
Common Examples:
- Kids fighting over the same toy while ignoring identical ones
- Fashion trends spreading through social groups
- Career paths chosen because respected peers chose them
- Romantic triangles where someone becomes attractive once they're dating someone else
- Consumer frenzies (iPhones, limited editions) driven by visible queues and scarcity
- Gentrification patterns where neighborhoods become desirable because the "right people" moved there
- Academic research clusters forming around suddenly "hot" topics
Subtler Manifestations:
- The desire for "authenticity" itself (ironic since it's mimetically transmitted)
- Self-improvement goals based on what's celebrated in your social circle
- Political opinions adopted from respected figures in your group
- Food preferences that align with your aspirational identity group
- Hobbies pursued because they signal belonging to certain communities
- Creative outputs that unconsciously mirror admired creators
- Parenting styles that copy other parents you respect
Some of these examples have alternative explanations.
neighborhoods become desirable because the "right people" moved there
Even if you imagine a hypothetical person 100% resistant to copying desire, the value of a neighborhood does depend on the kind of people who live there.
They do, but the explanation proposed here matches everything I know most exactly and simply.
E.g. it became immediately clear that the sequences wouldn't work nearly as well for me if I didn't like Eliezer.
Or the way fashion models are of course not selected for attractiveness but for more mimetic-copying-inducing high-status traits like height/confidence/presence/authenticity
and others
And yeah not all of the Claude examples are good, I hadn't cherrypicked
it became immediately clear that the sequences wouldn't work nearly as well for me if I didn't like Eliezer
You mean, like him as a blogger? Or as a person in real life?
If the former, isn't causality the other way round? I mean, I like Eliezer as a blogger because he wrote the Sequences. So it would sound weird to me to say: "I admire Eliezer as a blogger a lot because he wrote some amazing articles on rationality... and Girard's theory predicts that therefore I will like his articles... which is true!"
(We could nitpick that some things that I like about Eliezer's style are orthogonal to whether his points about rationality are true, but that already has a name: halo effect.)
I am not trying to contradict your experience, but it seems to me that my experience (with the Sequences) does not match this model at all. Or other things that I think about.
My friends used to play Magic: The Gathering; it never appealed to me. I liked sci-fi, but I was reading sci-fi books long before I met another person who did. I learned Esperanto from a textbook long before I met another Esperanto speaker. My wife loves skiing and opera; that has no effect on me. Seems like I am quite resistant to copying others. (Is that a part of being on the autistic spectrum? Maybe I should file Girard's theory under "this is what normies do"; no offense meant.)
Aspies certainly seem to do this less!
You mean, like him as a blogger? Or as a person in real life?
The latter? Like, I subconsciously parse his blogging voice as if it were a person in my tribal surroundings, and I like/admire/relate to that virtual person, and I think this is what causes some aspect of persuasion
I mean yes it's embarrassing, but it's what I see in myself and what seems to be most consistent with what everyone else is doing, certainly more consistent than what they claim they're doing.
E.g. it seems rare for someone who actively dislikes the Sequences to not also dislike Eliezer, for what seem like vibes-based reasons more than content-based reasons
But then again, all models are false!
If I peer into my own past, where arguably I was more autistic than today, I can see that my standards for admiration seem to have been much stricter. I basically wouldn't ever copy role models because there were no role models to copy. This may be the shape of an important caveat
There's a subjective 15% chance the mindstate switch was instead placebo-induced
Given the above, will antiandrogens make me more introverted? And if so, are there cognitive benefits to introversion? (I think so)
Two days ago I started taking Reishi + Chasteberry + Spearmint, OTC supplements that are supposedly mild but statistically significant antiandrogens
I'll be amused if that before long ends my "frequent public posting" streak
You mean this substance? https://en.wikipedia.org/wiki/Mesembrine
Do you have a recommended brand, or places to read more about it?
Yes. The product I bought identifies itself as "Sceletium tortuosum".
I've only tried 1 brand/product, and haven't seen any outstanding sources on it either, so I can't offer much guidance there.
I can anecdotally note that the effects seem quite strong for a legal substance at 0.5g, and that it has short-term effects plus potentially also weaker long-term effects (made me more relaxed? hard to say) (probs comparable to MDMA as used in trauma therapy)
Insightful: https://takingchildrenseriously.com/the-evolution-of-culture/
Interesting, this implies a good deceiver has the power to determine another agent's model and signal in a way that is aligned with the other's model. I previously read an article on hostile telepaths https://www.lesswrong.com/posts/5FAnfAStc7birapMx/the-hostile-telepaths-problem which may be pertinent.
More thoughts that may or may not be directly relevant
I'd like to say more re: hostile telepaths or other deception frameworks but am unsure what your working models are
Latest in Shit Claude Says:
Credibility Enhancing Displays (CREDs)
Ideas spread not through their inherent quality but through costly displays of commitment by believers. Words are cheap; actions that would be irrational if the belief were false are persuasive.
Predictive angle: The spread of beliefs correlates more strongly with observable sacrifices made by believers than with evidence or argument quality.
Novel implication: Rationalists often fail to spread ideas despite strong arguments because they don't engage in sufficient credibility enhancing displays. Effective belief transmission requires demonstration through personal cost[1].
The easiest way for rats to do this more may be "retain nonchalant confidence when talking about things you're certain are true, even in the face of audience skepticism"
I think the "personal cost" angle is mistaken. Costly signaling only requires that the act would be costly if you didn't possess the trait.
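A toy numeric version of that point (the payoff numbers are made up for illustration): the signal separates honest from dishonest senders as long as faking it is more expensive than the benefit, even though the genuine possessor pays very little.

```python
BENEFIT = 10           # payoff for being believed to have the trait
COST_WITH_TRAIT = 2    # cheap to display if you genuinely have it
COST_WITHOUT = 15      # expensive to fake if you don't

def sends_signal(has_trait):
    """An agent signals iff their benefit exceeds their cost of signaling."""
    cost = COST_WITH_TRAIT if has_trait else COST_WITHOUT
    return BENEFIT > cost

# Only possessors find signaling worthwhile, so the signal is
# informative despite the sender's own cost (2) being small.
print(sends_signal(True), sends_signal(False))  # True False
```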
(Vague musing)
There's a type of theory I'd call a "Highlevel Index" into an information body, for example, Predictive Processing is a highlevel index for Neurology, or Natural Selection is a highlevel index for Psychology, or Game Theory and Signaling Theory are highlevel indexes for all kinds of things.
They're tools for delving into information bodies. They give you good taste for lower-level theories, a better feel for which pieces of knowledge are and aren't predictive. If you're like me, and you're trying to study Law or Material Science, but you've got no highlevel indexes for these domains, you're left standing there, lost, without evaluability, in front of a vast sea of lower-level, more detailed knowledge. You could probs make iterative bottom-up progress by absorbing detail-info layer by layer and synthesizing or discovering higher-level theories from what you've seen, but that's an unknown and unknowable-feeling amount of work. Standing at the foot of the mountain, you're not feeling it. There's no affordance waiting to be grasped.
One correct framing here is that I'm whining because not all learning is easy.
But also: I do believe the solutionspace ceiling here is much higher than we notice, and that marginal exploration is worth some opportunity cost.
So!
Besides what's common knowledge in rat culture, what are your fave highlevel indexes?
What non-redundant authors besides Eliezer & co talk a lot in highlevel indexes?
Are there established or better verbal pointers to highlevel indexes?
For this I could write an app that performs a gradual translation to Chinese on the .epub file of a fiction I'm currently addicted to
An overly optimistic ballpark estimate is "800k words of text are enough to learn to recognize 4k Chinese characters"
Evidence in favour:
Evidence against:
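A minimal sketch of the gradual-substitution idea (the glossary and the `gradualize` helper are hypothetical; a real version would parse the .epub's chapters and ramp `fraction` up as the book progresses):

```python
import random

def gradualize(text, glossary, fraction, seed=0):
    """Replace roughly `fraction` of glossary words with their Chinese
    equivalents; raising `fraction` chapter by chapter yields a gradual
    translation of the book."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        key = word.lower().strip(".,!?;:")
        if key in glossary and rng.random() < fraction:
            # keep any trailing punctuation the original word carried
            trail = word[len(key):] if word.lower().startswith(key) else ""
            out.append(glossary[key] + trail)
        else:
            out.append(word)
    return " ".join(out)

glossary = {"fire": "火", "water": "水", "mountain": "山"}
print(gradualize("The fire met the water under the mountain.", glossary, 1.0))
# fraction=1.0 swaps every glossary word: "The 火 met the 水 under the 山."
```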
Where does the value of knowledge come from? Why is compressing that knowledge adding to that value? Are you referring to knowledge in general or thinking about knowledge within a specific domain?
In my personal experience, finding an application for knowledge always outstrips the value of new knowledge.
For example, I may learn the name of every single skipper of an America's Cup yacht over the entire history of the event: but that would not be very valuable to me as there is no opportunity to exploit it. I may even 'compress' it for easy recall by means of a humorous mnemonic, like Bart Simpson's mnemonic for Canada's Governors General[1], or Robert Downey Jr.'s technique of turning the first letter of every one of his lines in a scene into an acrostic. However, unless called upon to recite a list of America's Cup skippers, Canada's first Governors General, or the dialogue in a Robert Downey Jr. film - when does this compression add any value?
Indeed, finding new applications for knowledge we already have always has an opportunity-cost advantage over acquiring new knowledge. For example, every time an app or a website changes its UI, there is a lag as I need to reorient or even learn a new procedure for accomplishing the same task.
"Clowns Love Hair-Cuts, so Should Lee Marvin's Valet" - Charles, Lisgar, Hamilton, Campbell, Landsdowne, Stanley (Should-ley), Murray-Kynynmound, and 'valet' rhymes with "Earl Grey" is my best guess.
The way I put that may have been overly obscure
But I've come to refer in my mind to the way the brain does chunking of information and noticing patterns and parallels in it for easier recall and use as just Compression.
Compression is what happens when you notice that 2 things share the same structure, and your brain just kinda fuses the shared aspects of the mental objects together into a single thing. Compression = Abstraction = Analogy = Metaphor. Compression = Eureka moments. And the amazing thing is the brain performs cognition on compressed data just as fast as on original data, effectively increasing your cognitive speed.
For example, I think there's large value in merging as much of your everyday observational data of humans as feasible into abstracted psychology concepts, and I wanna understand models like the Big Five (as far as they're correct) much better on intuitive levels.
What about this one:
"Hivemind" is best characterized as a state of zero adversarial behavior.