A fun illustration of survivorship/selection bias is that nearly every time I find myself reading an older paper, I find it insightful, cogent, and clearly written.
Selection bias isn't the whole story. The median paper in almost every field is notably worse than it was in, say, 1985. Academia is less selective than it used to be—in the U.S., there are more PhDs per capita, and the average IQ/test scores/whatever metric has dropped for every level of educational attainment.
Grab a journal that's been around for a long time, read a few old papers and a few new papers at random, and you'll notice the difference.
The primary optimization target for LLM companies/engineers seems to be making them seem smart to humans, particularly the nerds who seem prone to using them frequently. A lot of money and talent is being spent on this. It seems reasonable to expect that they are less smart than they seem to you, particularly if you are in the target category. This is a type of Goodharting.
In fact, I am beginning to suspect that they aren't really good for anything except seeming smart, and most rationalists have totally fallen for it - for example, Zvi insisting that anyone who is not using LLMs to multiply their productivity is not serious (this is a vibe, not a direct quote, but I think it's a fair representation of his writing over the last year). If I had to guess, LLMs have 0.99x'ed my productivity: they occasionally convince me to try to use them, and that cost is not quite paid back by their very rarely fixing a bug in my code. The number is close to 1x because I don't use them much, not because they're almost useful. Lots of other people seem to have much worse ratios because LLMs act as a superstimulus for them (not primarily a productivity tool).
Certainly this is an impressive technology, surpris...
Mathematics students are often annoyed that they have to worry about "bizarre or unnatural" counterexamples when proving things. For instance, differentiable functions without continuous derivative are pretty weird. Engineers in particular tend to protest that these things will never occur in practice, because they don't show up physically. But these adversarial examples show up constantly in the practice of mathematics - when I am trying to prove (or calculate) something difficult, I will try to cram the situation into a shape that fits one of the theorems in my toolbox, and if those tools don't naturally apply I'll construct all kinds of bizarre situations along the way while changing perspective. In other words, bizarre adversarial examples are common in intermediate calculations - that's why you can't just safely forget about them when proving theorems. Your logic has to be totally sound as a matter of abstraction or interface design - otherwise someone will misuse it.
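For concreteness, the standard counterexample I have in mind (found in any real analysis text):

$$f(x) = \begin{cases} x^2 \sin(1/x) & x \neq 0 \\ 0 & x = 0 \end{cases}$$

is differentiable everywhere, with $f'(0) = \lim_{h \to 0} h \sin(1/h) = 0$, but $f'(x) = 2x\sin(1/x) - \cos(1/x)$ oscillates without settling down as $x \to 0$, so $f'$ is not continuous at $0$.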
Particularly after my last post, I think my lesswrong writing has had a bit too high of a confidence / effort ratio. Possibly I just know the norms of this site well enough by now that I don't feel as much pressure to write carefully. I think I'll limit my posting rate a bit while I figure this out.
LW doesn't punish, it upvotes-if-interesting and then silently judges.
confidence / effort ratio
(Effort is not a measure of value, it's a measure of cost.)
Perhaps LLMs are starting to approach the intelligence of today's average human: capable of only limited original thought, unable to select and autonomously pursue a nontrivial coherent goal across time, learned almost everything they know from reading the internet ;)
This doesn't seem to be reflected in the general opinion here, but it seems to me that LLMs are plateauing and possibly have already plateaued a year or so ago. Scores on various metrics continue to go up, but this tends to provide only weak evidence because the metrics are heavily gamed and sometimes leak into the training data. Still, those numbers overall would tend to update me towards short timelines, even with their unreliability taken into account - however, this is outweighed by my personal experience with LLMs. I just don't find them useful for practically ...
Huh, o1 and the latest Claude were quite huge advances to me. Basically within the last year LLMs for coding went from "occasionally helpful, maybe like a 5-10% productivity improvement" to "my job now is basically to instruct LLMs to do things, depending on the task a 30% to 2x productivity improvement".
@Thomas Kwa will we see task length evaluations for Claude Opus 4 soon?
Anthropic reports that Claude can work on software engineering tasks coherently for hours, but it’s not clear if this means it can actually perform tasks that would take a human hours. I am slightly suspicious because they reported that Claude was making better use of memory on Pokémon, but this did not actually cash out as improved play. This seems like a fairly decisive test of my prediction that task lengths would stagnate at this point; if it does succeed at hours-long tasks, I will...
I don't run the evaluations but probably we will; no timeframe yet though as we would need to do elicitation first. Claude's SWE-bench Verified scores suggest that it will be above 2 hours on the METR task set; the benchmarks are pretty similar apart from their different time annotations.
Sure, but trends like this only say anything meaningful across multiple years, any one datapoint adds almost no signal, in either direction. This is what makes scaling laws much more predictive, even as they are predicting the wrong things. So far there are no published scaling laws for RLVR, the literature is still developing a non-terrible stable recipe for the first few thousand training steps.
It looks like Gemini is self-improving in a meaningful sense:
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
Some quick thoughts:
This has been going on for months; on the bullish side (for AI progress, not human survival) this means some form of self-improvement is well behind the capability frontier. On the bearish side, we may not expect a further speed-up on the log scale (since it’s already factored into some calculations).
I did not expect this degree of progress so soon; I am now much ...
Unfortunate consequence of sycophantic ~intelligent chatbots: everyone can get their theories parroted back to them and validated. Particularly risky for AGI, where the chatbot can even pretend to be running your cognitive architecture. Want to build a neuro-quantum-symbolic-emergent-consciousness-strange-loop AGI? Why bother, when you can just put that all in a prompt!
A lot of new user submissions these days to LW are clearly some poor person who was sycophantically encouraged by an AI to post their crazy theory of cognition or consciousness or recursion or social coordination on LessWrong after telling them their ideas are great. When we send them moderation messages we frequently get LLM-co-written responses, and sometimes they send us quotes from an AI that has evaluated their research as promising and high-quality as proof that they are not a crackpot.
Basic sanity check: We can align human children, but can we align any other animals? NOT to the extent that we would trust them with arbitrary amounts of power, since they obviously aren't smart enough for this question to make much sense. Just, like, are there other animals that we've made care about us at least "a little bit"? Can dogs be "well trained" in a way where they actually form bonds with humans and will go to obvious personal risk to protect us, or not eat us even if they're really hungry and clearly could? How about species more distant from us on the evolutionary tree, like hunting falcons? Where specifically is the line?
Sometimes I wonder if people who obsess over the "paradox of free will" are having some "universal human experience" that I am missing out on. It has never seemed intuitively paradoxical to me, and all of the arguments about it seem either obvious or totally alien. Learning more about agency has illuminated some of the structure of decision making for me, but hasn't really affected this (apparently) fundamental inferential gap. Do some people really have this overwhelming gut feeling of free will that makes it repulsive to accept a lawful universe?
I used to, as a child. I did accept a lawful universe, but I thought my perception of free will was in tension with that, so that perception must be "an illusion".
My mother kept trying to explain to me that there was no tension between these things, because it was correct that my mind made its own decisions rather than some outside force. I didn't understand what she was saying though. I thought she was just redefining 'free will' from a claim that human brains effectively had a magical ability to spontaneously ignore the laws of physics to a boring tautological claim that human decisions are made by humans rather than something else.
I changed my mind on this as a teenager. I don't quite remember how, it might have been the sequences or HPMOR again. I realised that my imagination had still been partially conceptualising the "laws of physics" as some sort of outside force, a set of strings pulling my atoms around, rather than as a predictive description of me and the universe. Saying "the laws of physics make my decisions, not me" made about as much sense as saying "my fingers didn't move, my hand did." That was what my mother had been trying to tell me.
To what extent would a proof about AIXI’s behavior be normative advice?
Though AIXI itself is not computable, we can prove some properties of the agent - unfortunately, there are fairly few examples because of the “bad universal priors” barrier discovered by Jan Leike. In the sequential case we only know things like: it will not indefinitely keep trying an action that yields minimal reward, though we can say more when the horizon is 1 (which reduces to the predictive case in a sense). And there are lots of interesting results about the behavior of Solom...
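For reference, the object these results are about is the AIXI action-selection rule, roughly as Hutter states it (I'm reproducing it from memory, so treat the exact indexing as a sketch):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where $U$ is a universal monotone Turing machine, $\ell(q)$ is the length of program $q$, and $m$ is the horizon. Any theorem about AIXI's behavior is a theorem about this expectimax over a Solomonoff-style mixture of all computable environments.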
Can AI X-risk be effectively communicated by analogy to climate change? That is, the threat isn’t manifesting itself clearly yet, but experts tell us it will if we continue along the current path.
Though there are various disanalogies, this specific comparison seems both honest and likely to be persuasive to the left?
Most ordinary people don't know that no one understands how neural networks work (or even that modern "Generative A.I." is based on neural networks). This might be an underrated message since the inferential distance here is surprisingly high.
It's hard to explain the more sophisticated models that we often use to argue that human disempowerment is the default outcome; effort is perhaps much better leveraged on explaining these three points:
1) No one knows how A.I models / LLMs / neural nets work (with some explanation of how this is conceptually possibl...
"Optimization power" is not a scalar multiplying the "objective" vector. There are different types. It's not enough to say that evolution has had longer to optimize things but humans are now "better" optimizers: Evolution invented birds and humans invented planes, evolution invented mitochondria and humans invented batteries. In no case is one really better than the other - they're radically different sorts of things.
Evolution optimizes things in a massively parallel way, so that they're robustly good at lots of different selectively relevant things ...
I guess Dwarkesh believes ~everything I do about LLMs and still thinks we probably get AGI by 2032:
This is not the kind of news I would have expected from short timeline worlds in 2023: https://www.techradar.com/computing/artificial-intelligence/chatgpt-is-getting-smarter-but-its-hallucinations-are-spiraling
I still don't think that a bunch of free-associating inner monologues talking to each other gives you AGI, and it still seems to be an open question whether adding RL on top just works.
The "hallucinations" of the latest reasoning models look more like capability failures than alignment failures to me, and I think this points towards "no." But my credences are very unstable; if METR task length projections hold up or the next reasoning model easily zero-shots Pokemon I will just about convert.
GDM has a new model: https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/#advanced-coding
At a glance, it is (pretty convincingly) the smartest model overall. But progress still looks incremental, and I continue to be unconvinced that this paradigm scales to AGI. If it does, the takeoff is surprisingly slow.
Back-of-the-envelope math indicates that an ordinary NPC in our world needs to double their power like 20 times over to become a PC. That’s a tough ask. I guess the lesson is either give up or go all in.
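(One way to get that number, with an illustrative cutoff rather than a precise figure: if being a "PC" means being among the roughly $10^4$ people whose individual choices visibly shape world events, out of roughly $10^{10}$ humans, then an ordinary person needs to multiply their power by about

$$\frac{10^{10}}{10^{4}} = 10^{6}, \qquad \log_2 10^{6} \approx 20$$

doublings.)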
That moment when you want to be updateless about risk but updateful about ignorance, but the basis of your epistemology is to dissolve the distinction between risk and ignorance.
(Kind of inspired by @Diffractor)
Did a podcast interview with Ayush Prakash on the AIXI model (and modern AI), very introductory/non-technical:
Garry Kasparov would beat me at chess in some way I can't predict in advance. However, if the game starts with half his pieces removed from the board, I will beat him by playing very carefully. The first above-human-level A.G.I. seems overwhelmingly likely to be down a lot of material - massively outnumbered, running on our infrastructure, starting with access to pretty crap/low-bandwidth actuators in the physical world and no legal protections (yes, this actually matters when you're not as smart as ALL of humanity - it's a disadvantage relative to even the...
I suspect that human minds are vast (more like little worlds of our own than clockwork baubles) and that, as a matter of sample complexity, even a superintelligence would have trouble predicting our outputs accurately from even quite a few conversations (without direct microscopic access).
Considering the standard rhetoric about boxed A.I.'s, this might have belonged in my list of heresies: https://www.lesswrong.com/posts/kzqZ5FJLfrpasiWNt/heresies-in-the-shadow-of-the-sequences
I'm starting a google group for anyone who wants to see occasional updates on my Sherlockian Abduction Master List. It occurred to me that anyone interested in the project would currently have to keep checking the list to notice the (infrequently) added new observational cues - also, some people outside of lesswrong are interested.
In MTG terms, I think Mountainhead is the clearest example I’ve seen of a mono-blue dystopia.
I seem to recall EY once claiming that insofar as any learning method works, it is for Bayesian reasons. It just occurred to me that even after studying various representation and complete class theorems I am not sure how this claim can be justified - certainly one can construct working predictors for many problems that are far from explicitly Bayesian. What might he have had in mind?
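For reference, the closest formal statement I know of is the complete class theorem family (stated loosely; the precise regularity conditions vary by source): for, e.g., a finite parameter space, any admissible decision rule $\delta$ is Bayes with respect to some prior,

$$\delta \ \text{admissible} \ \Longrightarrow \ \exists\, \pi : \ \delta \in \arg\min_{\delta'} \int_\Theta R(\theta, \delta')\, d\pi(\theta).$$

But that only constrains admissible rules; it doesn't obviously say that every predictor that works in practice is secretly Bayesian.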
A "Christmas edition" of the new book on AIXI is freely available in pdf form at http://www.hutter1.net/publ/uaibook2.pdf
I wonder if it’s true that around the age of 30 women typically start to find babies cute and consequently want children, and if so, is this cultural or evolutionary? It’s sort of against my (mesa-optimization) intuitions for evolution to act on such high-level planning (it seems that finding babies cute can only lead to reproductive behavior through pretty conscious intermediary planning stages). Relatedly, I wonder if men typically have a basic urge to father children, beyond immediate sexual attraction?
I don't think so, as I had success explaining away the paradox with the concept of "different levels of detail" - saying that free will is a very high-level concept and further observations reveal a lower-level view, calling upon an analogy with the segment tree from algorithmic programming.
(A segment tree is a data structure that replaces an array, allowing one to modify its values and compute a given function over any range of elements efficiently. It is based on a tree of nodes, each representing a certain subarray; each position is therefore handled by several nodes - specifically, O(log n) of them.)
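For concreteness, a minimal sketch of what I mean (point update plus range-sum query); illustrative only, not the only way to set it up:

```python
# Minimal iterative segment tree: point updates and range sums.
class SegmentTree:
    def __init__(self, values):
        self.n = len(values)
        self.tree = [0] * (2 * self.n)
        # Leaves hold the array; internal node i aggregates children 2i and 2i+1.
        for i, v in enumerate(values):
            self.tree[self.n + i] = v
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def update(self, pos, value):
        # Change one element, then fix the O(log n) nodes covering it.
        i = self.n + pos
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, left, right):
        # Sum over the half-open range [left, right).
        total = 0
        lo, hi = self.n + left, self.n + right
        while lo < hi:
            if lo & 1:
                total += self.tree[lo]
                lo += 1
            if hi & 1:
                hi -= 1
                total += self.tree[hi]
            lo //= 2
            hi //= 2
        return total


st = SegmentTree([5, 2, 7, 1])
assert st.query(1, 3) == 9   # 2 + 7
st.update(2, 10)
assert st.query(0, 4) == 18  # 5 + 2 + 10 + 1
```

Each array position lives in one leaf plus its O(log n) ancestors, which is the "handled by several nodes" point above.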