All of Kristin Lindquist's Comments + Replies

Do you incorporate koans into your practice? Any favorites?

As a kid, I thought koans were cool and mysterious. As an adult in great need of the benefits of meditation, I felt like they were kinda silly. But then I did Henry Shukman's guided koan practice on the Waking Up app, during which I had the most profound experience of my meditative career. I was running outside and saw a woman playing fetch with her dog. In an instant, I had the realization that her love for her dog was identical to my love for my cat, which was in turn identical to her loving me ... (read more)

lsusr
I've never used a koan intentionally. I've used exactly one, and that was by accident. Non-Buddhist Eliezer Yudkowsky called me a fake frequentist on Twitter. That acted on me as a koan, and it contributed to the train wreck that was my second insight cycle. Too much insight too quickly.

That said, my local Zendo is Rinzai, and they do use koans sometimes.

You are correct that the specific insight you're pointing at isn't mushin. Personally, I'd call it "interbeing". "Oneness" or "non-duality" might work too.

I'm glad you got something out of my YouTube channel. I like how a camera makes it easier to communicate certain kinds of attitudes than text does. I have stuff I can improve too. Just last week, I had an insight into how I could be doing compassion better.

Perhaps music is another way to get rationalist ideas out into the main-ish stream.

A couple years ago Spotify started recommending lofi songs that included Alan Watts clips, like this: https://open.spotify.com/track/3D0gUUumDPAiy0BAK1RxbO?si=50bac2701cc14850

I had never heard of Watts (a bit surprising in retrospect), and these clips hooked my interest.

An appeal of this approach (spoken word + lofi) is that it's easier to understand, and it puts greater emphasis on the semantic meaning than on the musical sound.

--

PS. I love the chibi shoggoth

Answer by Kristin Lindquist

I have weak intuitions for these problems, and on net they make me feel like my brain doesn't work very well. With that to disclaim my taste, FWIW I think your posts are some of the most interesting content on modern-day LW.

It'd be fun to hear you debate anthropic reasoning with Robin Hanson, especially since you invoke grabby aliens. Maybe you could invite yourself onto Robin & Agnes' podcast.

Ape in the coat
Thank you for such high praise! It was unexpected and quite flattering.

If System X is of sufficient complexity / high dimensionality, it's fair to say that there are many possible dimensional reductions, right? And not just globally better or worse options; instead, reductions that are more or less useful for a given context.
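
To make that concrete, here's a minimal sketch (my own illustration with made-up data, not from this thread): the same high-dimensional system admits two different one-dimensional reductions, and which one is "better" depends entirely on what you want it for.

```python
# Sketch: two valid 1-D reductions of the same 50-D data, each useful
# in a different context. Hypothetical example, not from the discussion.
import numpy as np

rng = np.random.default_rng(0)

# 500 points in 50 dimensions. Dimension 0 carries most of the variance;
# dimension 1 carries a subtle signal separating two classes.
X = rng.normal(size=(500, 50))
X[:, 0] *= 10.0
y = rng.integers(0, 2, size=500)
X[:, 1] += 2.0 * y

# Reduction A: first principal component (top variance direction),
# useful if the context is compact reconstruction of the data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]                          # ends up aligned with dimension 0

# Reduction B: difference of class means (signal direction),
# useful if the context is predicting the label.
w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
w /= np.linalg.norm(w)               # ends up aligned with dimension 1

print("PC1 weights on dims 0, 1:      ", np.abs(pc1[:2]).round(2))
print("mean-diff weights on dims 0, 1:", np.abs(w[:2]).round(2))
# Neither reduction is globally better; each preserves what matters
# for its own context.
```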

However, a shoggoth's theory-of-human-mind context would probably be a lot like our context, so it'd make sense that the representations would be similar.

That's interesting re: LLMs as having "conceptual interpretability" by their very nature. I guess that makes sense, since some degree of conceptual interpretability naturally emerges given 1) a sufficiently large and diverse training set, and 2) sparsity constraints. LLMs have both: #1 definitely, and #2 given regularization and some practical upper bounds on the total number of parameters. And then there is your point: that LLMs are literally trained to create output we can interpret.
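
As a toy illustration of the sparsity point (my own sketch, with made-up data): an L1 penalty drives most coefficients to exactly zero, so the features that survive are the ones that actually carry signal, which is a crude analogue of regularization nudging a model toward representations you can read off.

```python
# Sketch: an L1 (sparsity) constraint yields a representation that is
# easier to interpret. Hypothetical data, not from the comment above.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))        # 100 candidate features
true_coef = np.zeros(100)
true_coef[:3] = [4.0, -2.0, 3.0]       # only 3 features actually matter
y = X @ true_coef + 0.1 * rng.normal(size=200)

ols = LinearRegression().fit(X, y)     # no sparsity constraint
lasso = Lasso(alpha=0.1).fit(X, y)     # L1 sparsity constraint

print("OLS nonzero coefficients:  ", int(np.sum(np.abs(ols.coef_) > 1e-6)))
print("Lasso nonzero coefficients:", int(np.sum(np.abs(lasso.coef_) > 1e-6)))
# OLS spreads weight across essentially all 100 features; Lasso keeps
# only a handful, which is what makes its learned "concepts" legible.
```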

I wonder about representations formed by a shoggoth. For the most efficient predi... (read more)

Seth Herd
I tend to think that more-or-less how we interpret the world is the simplest way to interpret it (at least for the mesa-scale of people and technologies). I doubt there's a dramatically different parsing that makes more sense. The world really seems to be composed of things made of things, that do things to things for reasons based on beliefs and goals. But this is an intuition. Clever compressions of complex systems, and better representations of things outside of our evolved expertise, like particle physics, sociology, and economics, seem quite possible.

Good citation; I meant to mention it. There's a nice post on it.

I've thought about this and your sequences a bit; it's fascinating to consider given its 1000- or 10,000-year-monk nature.

A few thoughts that I forward humbly, since I have incomplete knowledge of alignment and have only read 2-3 articles in your sequence:

  • I appreciate your eschewing of idealism (as in, not letting "morally faultless" be the enemy of "morally optimized"), and relatedly, found some of your conclusions disturbing. But that's to be expected, I think!
  • While "one vote per original human" makes sense given your arguments, its moral imperfection makes m
... (read more)
RogerDearnaley
The issue here is that we need to keep it from being cheap/easy to create new voters/moral patients, to avoid things like ballot stuffing or easily changing the balance/outcome of utility optimization processes. However, the specific proposal I came up with for avoiding this (one vote per original biological human) may not be the best solution (or at least, not all of it). Depending on the specifics of the society, technologies, and so forth, there may be other better solutions I haven't thought of. For example, if you make two uploads of the same human and they each have 1000 years of different subjective time, so become really quite different, and if the processing cost of doing this isn't cheap/easy enough that such copies can be mass-produced, then at some point it would make sense to give them separate moral weight.

I should probably update that post a little to be clearer that what I'm suggesting is just one possible solution to one specific moral issue, and depends on the balance of different concerns.

In some sense it's more a prediction than a necessity. If an AI is fully, accurately aligned, so that it only cares about what the humans want/the moral weight of the humans, and has no separate agenda of its own, then (by definition) it won't want any moral weight applied to itself. To be (fully) aligned, an AI needs to be selfless, i.e. to view its own interests only as instrumental goals to help it keep doing good things for the humans it cares about. If so, then it should actively campaign not to be given any moral weight by others.

However, particularly if the AI is not one of the most powerful ones in the society (and especially if there are ones significantly more powerful than it doing something resembling law enforcement), then we may not need it to be fully, accurately aligned. For example, if the AI has only around human capacity, then even if it isn't very well aligned (as long as it isn't problematically taking advantage of the various advantages of bei
Answer by Kristin Lindquist

LW and Astral Codex Ten are the best places on the internet. Lately LW tops the charts for me, perhaps because I've made it through Scott's canon but not LW's. As a result, my experience on LW is more about the content than the meta and community. Just coming here, I don't stumble across much evidence of conflict within this community; I only learned about it after friending various rationalists on FB such as Duncan (btw, I really like having rationalists in my FB feed, which gives me a sense of community and belongingness... perhaps the... (read more)

+1

I internalized the value of apologizing proactively, sincerely, specifically, and without any "but". While I recommend it from a virtue ethics perspective, I'd urge starry-eyed green rationalists to be cautious. Here are some potential pitfalls:

- People may be confused by this type of apology and conclude that you are neurotic or insincere. Both can signal low status if you lack unambiguous status markers or aren't otherwise effectively conveying high status.
- If someone is an adversary (whether or not you know it), apologies can be weaponized. As a conscie... (read more)

I'll be attending, probably with a +1.

Not an answer but a related question: is habituation perhaps a fundamental dynamic in an intelligent mind? Or did the various mediators of human mind habituation (e.g. downregulation of dopamine receptors) arise from evolutionary pressures?

I'm reading this for the first time today. It'd be great if more biases were covered this way. The "illusion of transparency" one is eerily close to what I've thought so many times. Relatedly, sometimes I do succeed at communicating, but people don't signal that they understand (or not in a way I recognize). Thus sometimes I only realize I've been understood after someone (politely) asks that I stop repeating myself, mirroring back to me what I had communicated. This is a little embarrassing, but also a relief - once I know I've been understood, I can finally let go.

I think kindness is a good rule for rationalists, because unkindness is rhetorically OP yet so easily rationalized ("i'm just telling it like it is, y'all" while benefitting – again, rhetorically – from playing the offensive).

Your implication that Aella is not speaking, writing or behaving sanely is, frankly, hard to fathom. You may disagree with her; you may consider her ideas and perspectives incomplete; but to say she has not met the standards of sanity?

She speaks about an incredibly painful and personal issue with remarkable sanity and analytical dista... (read more)

I think kindness is a good rule for rationalists, because unkindness is rhetorically OP yet so easily rationalized (“i’m just telling it like it is, y’all” while benefitting – again, rhetorically – from playing the offensive).

Accusations of unkindness are also, as you say, “rhetorically OP”… best not to get into litigating how “kind” anyone is being.

Your implication that Aella is not speaking, writing or behaving sanely is, frankly, hard to fathom. You may disagree with her; you may consider her ideas and perspectives incomplete; but to say she has no

... (read more)

"Honestly, this is a terrible post. It describes a made-up concept that, as far as I can tell, does not actually map to any real phenomenon [...]" - if I am not mistaken, LessWrong contains many posts on "made-up concepts" - often newly minted concepts of interest to the pursuit of rationality. Don't the rationalist all-stars like Scott Alexander and Yudkowsky do this often?

As a rationalist type who has also experienced abuse, I value Aella's attempt to characterize the phenomenon.

Years of abuse actually drove my interest in rationality and epistemology. M... (read more)

I’m sorry to hear about the things that happened to you.

However, neither that, nor Aella’s experiences, change anything about what I wrote…

I don’t know if you’ll find this persuasive in the slightest. But if you do, even a tiny bit, maybe you could chill out on the “this is a terrible post” commentary. To invoke SSC (though I know those aren’t the rules here), that comment isn’t true, kind, OR necessary.

Thankfully, that rule does not apply here, because it’s a really bad rule.

(This aside from the fact that my comment is of course true, or at least I cla... (read more)

Answer by Kristin Lindquist

Already many good answers, but I want to reinforce some and add others.

1. Beware of multiplicity - does the experiment include a large number of hypotheses, explicitly or implicitly? Implicit hypotheses include "Does the intervention have an effect on subjects with attributes A, B or C?" (subgroups) and "Does the intervention have an effect that is shown by measuring X, Y or Z?" (multiple endpoints). If multiple hypotheses were tested, were the results for each diligently reported? Note that multiplicity can be sneaky and you're of... (read more)
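
A quick simulation (mine, not part of the original answer) shows why multiplicity bites: if you test 20 independent null hypotheses at p < 0.05, the chance of at least one spurious "significant" result is about 1 - 0.95^20 ≈ 64%, not 5%.

```python
# Sketch: family-wise false-positive rate under multiplicity.
# Simulated data; the setup is illustrative, not from the answer above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_hypotheses, alpha = 2000, 20, 0.05

hits = 0
for _ in range(n_trials):
    # 20 subgroup/endpoint tests where the intervention truly does nothing
    pvals = [stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
             for _ in range(n_hypotheses)]
    if min(pvals) < alpha:
        hits += 1

print("P(at least one spurious 'effect'):", hits / n_trials)
# Prints roughly 0.64 versus the nominal 0.05. A Bonferroni threshold of
# alpha / n_hypotheses pulls the family-wise rate back toward 0.05.
```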

waveman
On bias, see here https://www.bmj.com/content/335/7631/1202 and references. There is a lot of research about this.

Note also that you do not even need to bias a particular researcher: just fund the researchers producing the answers you like, or pursuing the avenues you are interested in, e.g. Coke's sponsorship of exercise research, which produces papers suggesting that perhaps exercise is the answer. One should not simply dismiss a study because of sponsorship, but be aware of what might be going on behind the scenes.

Also be aware that people are oblivious to the effect that sponsorship has on them. One study of primary care doctors found a large effect on prescribing from free courses, dinners, etc., but the doctors adamantly denied any impact.

The suggestions of things to look for are valid and useful, but often you just don't know what actually happened.