Perhaps music is another way to get rationalist ideas out into the main-ish stream.
A couple years ago Spotify started recommending lofi songs that included Alan Watts clips, like this: https://open.spotify.com/track/3D0gUUumDPAiy0BAK1RxbO?si=50bac2701cc14850
I had never heard of Watts (a bit surprising in retrospect), and these clips hooked my interest.
An appeal of this approach (spoken word + lofi) is that it is easier to understand and places greater emphasis on semantic meaning than on musical sound.
--
PS. I love the chibi shoggoth
I have weak intuitions for these problems, and on net they make me feel like my brain doesn't work very well. With that disclaimer on my taste, FWIW I think your posts are some of the most interesting content on modern-day LW.
It'd be fun to hear you debate anthropic reasoning with Robin Hanson esp. since you invoke grabby aliens. Maybe you could invite yourself on to Robin & Agnes' podcast.
If System X is of sufficient complexity / high dimensionality, it's fair to say that there are many possible dimensional reductions, right? And not just globally better or worse options; instead, reductions that are more or less useful for a given context.
However, a shoggoth's theory-of-human-mind context would probably be a lot like our context, so it'd make sense that the representations would be similar.
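To make the "many possible reductions" point concrete, here is a toy sketch (entirely my own construction, not from the original comment): the same data admits two different one-dimensional reductions, and which one is "better" depends on what you need it for.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "System X": 3-D points where dim 0 carries most of the variance
# and dim 2 carries a class-relevant signal with very little variance.
n = 1000
x0 = rng.normal(0, 10.0, n)          # high-variance, class-irrelevant
x1 = rng.normal(0, 1.0, n)           # noise
labels = rng.integers(0, 2, n)
x2 = labels + rng.normal(0, 0.1, n)  # low-variance, class-relevant
X = np.column_stack([x0, x1, x2])

# Reduction A: keep the top principal component (maximizes variance).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]

# Reduction B: keep the low-variance, class-relevant axis instead.
cls = Xc[:, 2]

def separation(z, y):
    """Gap between class means, in units of overall std."""
    return abs(z[y == 0].mean() - z[y == 1].mean()) / z.std()

# PC1 captures the most variance but separates the classes poorly;
# the "boring" low-variance axis separates them well.
print(separation(pc1, labels))  # near 0
print(separation(cls, labels))  # large
```

Neither reduction is globally better; one is better for compressing the system, the other for a context (classification) that cares about a particular signal.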
That's interesting re: LLMs as having "conceptual interpretability" by their very nature. I guess that makes sense, since some degree of conceptual interpretability naturally emerges given 1) a sufficiently large and diverse training set, and 2) sparsity constraints. LLMs have both: #1 clearly, and #2 via regularization and practical upper bounds on the total number of parameters. And then there is your point - that LLMs are literally trained to create output we can interpret.
I wonder about representations formed by a shoggoth. For the most efficient predi...
I've thought about this and your sequences a bit; it's fascinating to consider given its 1000- or 10000-year monk nature.
A few thoughts that I forward humbly, since I have incomplete knowledge of alignment and only read 2-3 articles in your sequence:
LW and Astral Codex Ten are the best places on the internet. Lately LW tops the charts for me, perhaps because I've made it through Scott's canon but not LW's. As a result, my experience on LW is more about the content than the meta and community. Just coming here, I don't stumble across much evidence of conflict within this community - I only learned about it after friending various rationalists on FB such as Duncan (btw, I really like having rationalists in my FB feed; it gives me a sense of community and belongingness... perhaps the...
+1
I internalized the value to apologize proactively, sincerely, specifically and without any "but". While I recommend it from a virtue ethics perspective, I'd urge starry-eyed green rationalists to be cautious. Here are some potential pitfalls:
- People may be confused by this type of apology and conclude that you are neurotic or insincere. Both can signal low status if you lack unambiguous status markers or aren't otherwise effectively conveying high status.
- If someone is an adversary (whether or not you know it), apologies can be weaponized. As a conscie...
I'll be attending, probably with a +1.
Not an answer but a related question: is habituation perhaps a fundamental dynamic in an intelligent mind? Or did the various mechanisms mediating habituation in the human mind (e.g. downregulation of dopamine receptors) arise from evolutionary pressures?
I'm reading this for the first time today. It'd be great if more biases were covered this way. The "illusion of transparency" one is eerily close to what I've thought so many times. Relatedly, sometimes I do succeed at communicating, but people don't signal that they understand (or not in a way I recognize). Thus sometimes I only realize I've been understood after someone (politely) asks that I stop repeating myself, mirroring back to me what I had communicated. This is a little embarrassing, but also a relief - once I know I've been understood, I can finally let go.
I think kindness is a good rule for rationalists, because unkindness is rhetorically OP yet so easily rationalized ("i'm just telling it like it is, y'all" while benefitting – again, rhetorically – from playing the offensive).
Your implication that Aella is not speaking, writing or behaving sanely is, frankly, hard to fathom. You may disagree with her; you may consider her ideas and perspectives incomplete; but to say she has not met the standards of sanity?
She speaks about an incredibly painful and personal issue with remarkable sanity and analytical dista...
I think kindness is a good rule for rationalists, because unkindness is rhetorically OP yet so easily rationalized (“i’m just telling it like it is, y’all” while benefitting – again, rhetorically – from playing the offensive).
Accusations of unkindness are also, as you say, “rhetorically OP”… best not to get into litigating how “kind” anyone is being.
"Honestly, this is a terrible post. It describes a made-up concept that, as far as I can tell, does not actually map to any real phenomenon [...]" - if I am not mistaken, LessWrong contains many posts on "made-up concepts" - often newly minted concepts of interest to the pursuit of rationality. Don't the rationalist all-stars like Scott Alexander and Yudkowsky do this often?
As a rationalist type who has also experienced abuse, I value Aella's attempt to characterize the phenomenon.
Years of abuse actually drove my interest in rationality and epistemology. M...
I’m sorry to hear about the things that happened to you.
However, neither that, nor Aella’s experiences, change anything about what I wrote…
I don’t know if you’ll find this persuasive in the slightest. But if you do, even a tiny bit, maybe you could chill out on the “this is a terrible post” commentary. To invoke SSC (though I know those aren’t the rules here), that comment isn’t true, kind OR necessary.
Thankfully, that rule does not apply here, because it’s a really bad rule.
(This aside from the fact that my comment is of course true, or at least I cla...
Already many good answers, but I want to reinforce some and add others.
1. Beware of multiplicity - does the experiment include a large number of hypotheses, explicitly or implicitly? Implicit hypotheses include "Does the intervention have an effect on subjects with attributes A, B or C?" (subgroups) and "Does the intervention have an effect that is shown by measuring X, Y or Z?" (multiple endpoints). If multiple hypotheses were tested, were the results for each diligently reported? Note that multiplicity can be sneaky and you're of...
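A toy simulation (my own illustration, with made-up numbers, not from the original comment) of why untracked multiplicity is dangerous: test enough null hypotheses and a "significant" result becomes the default outcome, while a simple Bonferroni adjustment restores the intended error rate.

```python
import random

random.seed(0)

def fake_pvalue():
    """Simulate a test of a true null hypothesis: p ~ Uniform(0, 1)."""
    return random.random()

# 20 implicit hypotheses (subgroups x endpoints), all with no real effect.
m = 20
trials = 2000
alpha = 0.05

naive_hits = 0       # experiments with >=1 "significant" result, uncorrected
corrected_hits = 0   # same, with a Bonferroni-adjusted threshold

for _ in range(trials):
    ps = [fake_pvalue() for _ in range(m)]
    if min(ps) < alpha:
        naive_hits += 1
    if min(ps) < alpha / m:  # Bonferroni: divide alpha by the number of tests
        corrected_hits += 1

# With 20 null tests, roughly 1 - 0.95**20 ~ 64% of experiments show a
# spurious "finding" at p < 0.05; Bonferroni brings the family-wise
# error rate back near 5%.
print(naive_hits / trials)
print(corrected_hits / trials)
```

Bonferroni is the bluntest correction; the point is just that the number of hypotheses tested (including implicit ones) has to enter the analysis somewhere.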
Do you incorporate koans into your practice? Any favorites?
As a kid, I thought koans were cool and mysterious. As an adult in great need of the benefits of meditation, I felt like they were kinda silly. But then I did Henry Shukman's guided koan practice on the Waking Up app, during which I had the most profound experience of my meditative career. I was running outside and saw a woman playing fetch with her dog. In an instant, I had the realization that her love for her dog was identical to my love for my cat, which was in turn identical to her loving me ...