Similar to other people's shortform feeds, short stuff that people on LW might be interested in, but which doesn't feel like it's worth a separate post. (Will probably be mostly cross-posted from my Facebook wall.)
I think the term "AGI" is a bit of a historical artifact, it was coined before the deep learning era when previous AI winters had made everyone in the field reluctant to think they could make any progress toward general intelligence. Instead, all AI had to be very extensively hand-crafted to the application in question. And then some people felt like they still wanted to do research on what the original ambition of AI had been, and wanted a term that'd distinguish them from all the other people who said they were doing "AI".
So it was a useful term to distinguish yourself from the very-narrow AI research back then, but now that AI systems are already increasingly general, it doesn't seem like a very useful concept anymore and it'd be better to talk in terms of more specific cognitive capabilities that a system has or doesn't have.
Every now and then in discussions of animal welfare, I see the idea that the "amount" of their subjective experience should be weighted by something like their total amount of neurons. Is there a writeup somewhere of what the reasoning behind that intuition is? Because it doesn't seem intuitive to me at all.
From something like a functionalist perspective, where pleasure and pain exist because they have particular functions in the brain, I would not expect pleasure and pain to become more intense merely because the brain happens to have more neurons. Rather, I would expect that having more neurons may 1) give the capability to experience anything like pleasure and pain at all, and 2) make a broader scale of pleasure and pain possible, if that happens to be useful for evolutionary purposes.
For a comparison, consider the sharpness of our senses. Humans have pretty big brains (though our brains are not the biggest), but that doesn't mean that all of our senses are better than those of all the animals with smaller brains. Eagles have sharper vision, bats have better hearing, dogs have better smell, etc.
Humans would rank quite well if you took the average of all of our senses - we're el...
To me, the core of the neuron-counting intuition is that all living beings seem to have a depth to their reactions that scales with the size of their mind. There's a richness to a human mind in its reactions to the world which other animals don't have, just as dogs have a deeper interaction with everything than insects do. This is pretty strongly correlated with our emotions for why/when we care about creatures, how much we 'recognize' their depth. This is why people are most often interested when learning that certain animals have more depth than we might intuitively think.
As for whether there is an article, I don't know of any that I like, but I'll lay out some thoughts. This will be somewhat rambly, in part to try to give some stronger reasons, but also to mention related ideas that aren't spelled out enough.
One important consideration I often have to keep in mind in these sorts of discussions is that when we evaluate moral worth, we do not just care about instantaneous pleasure/pain, but rather about an intricate weighting of hundreds of different considerations. This very well may mean that we care about weighting by richness of mind, even if we determine that a scale would say that two be...
I doubt that anyone even remembers this, but I feel compelled to say it: there was some conversation about AI maybe 10 years ago, possibly on LessWrong, where I offered the view that abstract math might take AI a particularly long time to master compared to other things.
I don't think I ever had a particularly good reason for that belief other than a vague sense of "math is hard for humans so maybe it's hard for machines too". But I'm formally considering that prediction falsified now.
Relative to 10 (or whatever) years ago? Sure I've made quite a few of those already. By this point it'd be hard to remember my past beliefs well enough to make a list of differences.
Due to o3 specifically? I'm not sure, I have difficulty telling how significant things like ARC-AGI are in practice, but the general result of "improvements in programming and math continue" doesn't seem like a huge surprise by itself. It's certainly an update in favor of the current paradigm continuing to scale and pay back the funding put into it, though.
Occasionally I find myself nostalgic for the old, optimistic transhumanism of which e.g. this 2006 article is a good example. After some people argued that radical life extension would increase our population too much, the author countered that oh, that's not an issue, here are some calculations showing that our planet could support a population of 100 billion with ease!
In those days, the ethos seemed to be something like... first, let's apply a straightforward engineering approach to eliminating aging, so that nobody who's alive needs to worry about dying from old age. Then let's get nanotechnology and molecular manufacturing to eliminate scarcity and environmental problems. Then let's re-engineer the biosphere and human psychology for maximum well-being, such as by using genetic engineering to eliminate suffering and/or making it a violation of the laws of physics to try to harm or coerce someone.
So something like "let's fix the most urgent pressing problems and stabilize the world, then let's turn into a utopia". X-risk was on the radar, but the prevailing mindset seemed to be something like "oh, x-risk? yeah, we need to get to t...
I just recently ran into someone posting this on Twitter and it blew my mind:
An intriguing feature of twin studies: anything a parent does to individualize for a child is non-shared-environment (NSE) rather than shared environment (SE, viz. "parenting"). The more a parent optimizes for individual agency, the less "parenting" will be attributed.
Claude at least basically confirmed this interpretation (it says it is "slightly overstated" but then gives a "clarification" that doesn't change it). My reaction was "wait WHAT" - doesn't that completely invalidate the whole "parenting doesn't significantly matter for future life outcomes" claim?
Because that claim is based on equating "parenting" with "shared environment". But if you equate "parenting" with just "what are the ways in which parents treat each child identically" then it seems that of course that will only have a small effect.
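To make that classification concrete, here's a minimal simulation sketch (my own illustration with made-up effect sizes, not anything from the twin literature), using standard Falconer-style estimates: a parenting effect that responds to each child's own quirks gets counted as "non-shared environment", while only the part of parenting applied identically to both twins shows up as "shared environment".

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 200_000

def twin_correlation(r_genetic):
    # Additive genetic values for the two twins in each pair, correlated
    # r_genetic (1.0 for identical twins, 0.5 for fraternal twins).
    cov = [[1.0, r_genetic], [r_genetic, 1.0]]
    g = rng.multivariate_normal([0.0, 0.0], cov, size=n_pairs)
    # "Shared" parenting: whatever the parents do identically to both twins.
    shared = rng.normal(size=(n_pairs, 1))
    # Individualized parenting: the parent responds to each child's own
    # (here, non-genetic) quirks, so it differs between twins in a family.
    individualized = 0.8 * rng.normal(size=(n_pairs, 2))
    noise = rng.normal(size=(n_pairs, 2))
    outcome = 0.7 * g + 0.5 * shared + individualized + 0.3 * noise
    return np.corrcoef(outcome[:, 0], outcome[:, 1])[0, 1]

r_mz = twin_correlation(1.0)   # identical twins
r_dz = twin_correlation(0.5)   # fraternal twins

# Falconer's classic estimates from the two twin correlations
h2 = 2 * (r_mz - r_dz)   # attributed to genes
c2 = 2 * r_dz - r_mz     # attributed to "shared environment" (= "parenting")
e2 = 1 - r_mz            # attributed to "non-shared environment"
print(f"A ~ {h2:.2f}, C ~ {c2:.2f}, E ~ {e2:.2f}")
# The individualized-parenting term lands entirely in E, not in C,
# even though it is very much something the parents did.
```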
I for one know that I interact very differently with children with different personalities! (Or, for that matter, with adults with different personalities.) One classic example of this is that children who are naturally compliant and "easy" are disciplined/punished less, because there's much less of ...
Like I always say, the context in which you’re bringing up heritability matters. It seems that the context here is something like:
Some people say shared environment effects are ≈0 in twin & adoption studies, therefore we should believe “the bio-determinist child-rearing rule-of-thumb”. But in fact, parenting often involves treating different kids differently, so ‘shared environment effects are ≈0’ is irrelevant, and therefore we should reject “the bio-determinist child-rearing rule-of-thumb” after all.
If that’s the context, then I basically disagree. Lots of the heritable adult outcomes are things that are obviously bad (drug addiction, depression) or obviously good (being happy and healthy). Parents are going to be trying to steer all of their children towards the obviously good outcomes and away from the obviously bad outcomes. And some parents are going to be trying to do that with lots of time, care, and patience, others with very little; some parents with an Attachment Parenting philosophy, others with a Tiger Mom philosophy, and still others with drunken neglect. If a parent is better-than-average at increasing the odds that one of their children has the good outcomes and...
I've been doing emotional coaching for a few years now and haven't advertised it very much, since I already got a lot of clients with minimal advertising, but right now I'm having fewer of them, so I figured I might as well mention it again.
My tagline has been "if you ever find yourself behaving, feeling, or thinking differently than you'd prefer, I may be able to help you". Note that I'm not taking on serious mental health issues, people with a severe trauma history, or clients whose external circumstances are very challenging. That said, things like mild to moderate depression, motivational problems, or social anxieties do fall under the umbrella of things I may be able to help with.
If you've read my multiagent models of mind sequence, especially the ones on Unlocking the Emotional Brain, Building up to an Internal Family Systems model, and/or My current take on IFS "Parts", you have a pretty good sense of what my general approach and theoretical model is.
In my experience, clients are most likely to find me useful if they've tried something like Focusing or IFS a little bit before and found it promising, or at least feel like they have some kind of intuitive access to their emo...
Something I think about a lot when I see hypotheses based on statistical trends of somewhat obscure variables: I've heard it claimed that at one point in Finland, it was really hard to get a disability pension because of depression or other mental health problems, even though it was obvious to many doctors that their patients were too depressed to work. So then some doctors would diagnose those people with back pain instead, since it sounded more like a "real" condition while also being impossible to disprove before ultrasound scans got more common.
I don't know how big that effect was in practice. But I could imagine a world where it was significant and where someone noticed a trend of back pain diagnoses getting less common while depression diagnoses got more common, and postulated some completely different explanation for the relationship.
More generally, quite a few statistics are probably reporting something different from what they seem to be about. And unless you have deep knowledge about the domain in question, it'll be impossible to know when that's the case.
Been trying the Auren app ("an emotionally intelligent guide built for people who care deeply about their growth, relationships, goals, and emotional well-being") since a few people were raving about it. At first I was unimpressed: "eh, this is just Claude with a slightly custom prompt, Claude is certainly great but I don't need a new app to talk to it" (it had some very obvious Claude tells about three messages into our first conversation). I was also a little annoyed that it only works on your phone, because typing on a phone keyboard is a pain.
But it does offer a voice mode, though normally I wouldn't have used one, since I find it easier to organize my thoughts by writing than by speaking. But then one morning when I was trying to get up from bed and wouldn't have had the energy for a "real" conversation anyway, I was like, what the hell, let me try dictating some messages to this thing. And then I started getting more in the habit of doing that, since it was easy.
And since then I've started noticing a clear benefit in having a companion app that forces you to interact with it in the form of brief texts or dictated messages. The kind of conversations where I would...
Here's a mistake which I've sometimes committed and gotten defensive as a result, and which I've seen make other people defensive when they've committed the same mistake.
Take some vaguely defined, multidimensional thing that people could do or not do. In my case it was something like "trying to understand other people".
Now there are different ways in which you can try to understand other people. For me, if someone opened up and told me of their experiences, I would put a lot of effort into really trying to understand their perspective: how they thought and why they felt that way.
At the same time, I thought that everyone was so unique that there wasn't much point in trying to understand them by any *other* way than hearing them explain their experience. So I wouldn't really, for example, try to make guesses about people based on what they seemed to have in common with other people I knew.
Now someone comes and happens to mention that I "don't seem to try to understand other people".
I get upset and defensive because I totally do, this person hasn't understood me at all!
And in one sense, I'm right - i...
The essay "Don't Fight Your Default Mode Network" is probably the most useful piece of productivity advice that I've read in a while.
Basically, "procrastination" during intellectual work is actually often not wasted time, but rather your mind taking the time to process the next step. For example, if I'm writing an essay, I might glance at a different browser tab while I'm in the middle of writing a particular sentence. But often this is actually *not* procrastination; rather it's my mind stopping to think about the best way to continue that sentence. And this turns out to be a *better* way to work than trying to keep my focus completely on the essay!
Realizing this has changed my attention management from "try to eliminate distractions" to "try to find the kinds of distractions which don't hijack your train of thought". If I glance at a browser tab and get sucked into a two-hour argument, then that still damages my workflow. The key is to try to shift your pattern towards distractions like "staring into the distance for a moment", so that you can take a brief pause without getting pulled into anything di...
Could you please clarify what parts of the making of the above comment were done by a human being, and what parts by an AI?
I only now made the connection that Sauron lost because he fell prey to the Typical Mind Fallacy (assuming that everyone's mind works the way your own does). Gandalf in the book version of The Two Towers:
...The Enemy, of course, has long known that the Ring is abroad, and that it is borne by a hobbit. He knows now the number of our Company that set out from Rivendell, and the kind of each of us. But he does not yet perceive our purpose clearly. He supposes that we were all going to Minas Tirith; for that is what he would himself have done in our place. And ac
I was thinking of a friend and recalled some pleasant memories with them, and it occurred to me that I have quite a few good memories about them, but I don't really recall them very systematically. I just sometimes remember them at random. So I thought, what if I wrote down all the pleasant memories of my friend that I could recall?
Not only could I then occasionally re-read that list to get a nice set of pleasant memories, that would also reinforce associations between them, making it more likely that recalling one - or just being reminded of my frie...
For a few weeks or so, I've been feeling somewhat amazed at how much less suffering there seems to be associated with different kinds of pain (emotional, physical, etc.), seemingly as a consequence of doing meditation and related practices. The strength of pain, as measured by something like the intensity of it as an attention signal, seems to be roughly the same as before, but despite being equally strong, it feels much less aversive.
To clarify, this is not during some specific weird meditative state, but feels like a general ongoing adjustment even ...
I dreamt that you could donate LessWrong karma to other LW users. LW was also an airport, and a new user had requested donations because, in order to build a new gate at the airport, your post needed to have at least 60 karma, and he had a plan to construct a series of them. Some posts had exactly 60 karma, with titles like "Gate 36 done, let's move on to the next one - upvote the Gate 37 post!".
(If you're wondering what the karma donation mechanism was needed for if users could just upvote the posts normally - I don't know.)
Apparently the process of constructing gat...
This paper (Keno Juechems & Christopher Summerfield: Where does value come from? Trends in Cognitive Sciences, 2019) seems interesting from an "understanding human values" perspective.
Abstract: The computational framework of reinforcement learning (RL) has allowed us to both understand biological brains and build successful artificial agents. However, in this opinion, we highlight open challenges for RL as a model of animal behaviour in natural environments. We ask how the external reward function is designed for biological systems, and how w...
Recent papers relevant to earlier posts in my multiagent sequence:
Understanding the Higher-Order Approach to Consciousness. Richard Brown, Hakwan Lau, Joseph E. LeDoux. Trends in Cognitive Sciences, Volume 23, Issue 9, September 2019, Pages 754-768.
Reviews higher-order theories (HOT) of consciousness and their relation to global workspace theories (GWT) of consciousness, suggesting that HOT and GWT are complementary. Consciousness and the Brain, of course, is a GWT theory; whereas HOT theories suggest that some higher-order representation is (also) necessar...
Hypothesis: basically anyone can attract a cult following online, provided that they
1) are a decent writer or speaker
2) are writing/speaking about something which may or may not be particularly original, but does provide at least some value to people who haven't heard of this kind of stuff before
3) devote a substantial part of their message to confidently talking about how their version of things is the true and correct one, and how everyone who says otherwise is deluded/lying/clueless
There's a lot of demand for the experience of feeling like y...
Some time back, Julia Wise published the results of a survey asking parents what they had expected parenthood to be like and to what extent their experience matched those expectations. I found those results really interesting and have often referred to them in conversation, and they were also useful to me when I was thinking about whether I wanted to have children myself.
However, that survey was based on only 12 people's responses, so I thought it would be valuable to get more data. So I'm replicating Julia's survey, with a few optional quantitative questi...
So I was doing insight meditation and noticing inconsistencies between my experience and my mental models of what things in my experience meant (stuff like "this feeling means that I'm actively and consciously spending effort... but wait, I don't really feel like it's under my control, so that can't be right"), and feeling like parts of my brain were getting confused as a result...
And then I noticed that if I thought of a cognitive science/psychology-influenced theory of what was going on instead, those confused parts of my mi...
Didn't expect to see alignment papers getting cited this way in mainstream psychology papers now.
https://www.sciencedirect.com/science/article/abs/pii/S001002772500071X
Loopholes: A window into value alignment and the communication of meaning. Cognition, Volume 261, August 2025, 106131.
Abstract. Intentional misunderstandings take advantage of the ambiguity of language to do what someone said, instead of what they actually wanted. These purposeful misconstruals or loopholes are a familiar facet of fable, law, and everyday life. Engaging with loopholes requires a n
What could plausibly take us from now to AGI within 10 years?
A friend shared the following question on Facebook:
...So, I've seen multiple articles recently by people who seem well-informed that claim that AGI (artificial general intelligence, aka software that can actually think and is creative) in less than 10 years, and I find that baffling, and am wondering if there's anything I'm missing. Sure, modern AI like ChatGPT are impressive - they can do utterly amazing search engine-like things, but they aren't creative at all.
The clearest example of
A morning habit I've had for several weeks now is to put some songs on, then spend 5-10 minutes letting the music move my body as it wishes. (Typically this turns into some form of dancing.)
It's a pretty effective way to get my energy / mood levels up quickly, can recommend.
It's also easy to effectively timebox it if you're busy, "I will dance for exactly two songs" serves as its own timer and is often all I have the energy for before I've had breakfast. (Today Spotify randomized Nightwish's Moondance as the third song and boy I did NOT have the blood suga...
Janina Fisher's book "Healing the Fragmented Selves of Trauma Survivors" has an interesting take on Internal Family Systems. She conceptualizes trauma-related parts (subagents) as being primarily associated with the defensive systems of Fight/Flight/Freeze/Submit/Attach.
Here's how she briefly characterizes the various systems and related behaviors:
I gave this comment a "good facilitation" react but that feels like a slightly noncentral use of it (I associate "good facilitation" more with someone coming in when two other people are already having a conversation). It makes me think that every now and then I've seen comments that help clearly distill some central point in a post, in the way that this comment did, and it might be nice to have a separate react for those.
Huh. I woke up feeling like meditation has caused me to no longer have any painful or traumatic memories: or rather all the same memories are still around, but my mind no longer flinches away from them if something happens to make me recall them.
Currently trying to poke around my mind to see whether I could find any memory that would feel strongly aversive, but at most I can find ones that feel a little bit unpleasant.
Obviously can't yet tell whether some will return to being aversive. But given that this seems to be a result of giving my mind the cha...
And then at some point all the latter people switched to saying "machine learning" instead.