by teaching children untrue things
What specifically are you thinking of?
Yeah, I don't think "double counted" is the right term here.
Consider: if I just personally like the taste of kale, I'll eat kale. If I also find out that kale is especially healthy, I have additional reason to eat kale, compared to alternatives that are not as healthy.
Surely there's some correlation and causal relationship between what I find tasty and what is healthy. Nutrition (and avoiding poisoning) is the main reason taste evolved!
But that doesn't mean that taste and health aren't separate reasons for me to prefer to eat kale.
Yeah, this talk for instance.
An additional part of the story is that if a superintelligence wants to compute the distribution of outcomes of an intelligent civilization doing a singularity, it will want to sample histories leading to the singularity. But these histories have a branching tree structure.
Suppose you've already simulated the evolution of life and humanity up to the singularity, and you're interested in other nearby ways it could have played out. You could simulate the whole thing again, but that's probably unnecessary. It would be computationally cheaper to cache the simulation, go back to a specific branching point, and restart the simulation (with some trivial noise) from that point. eg you only need to compute the evolution of life and humanity up to WWI once, in order to simulate all of the possible historical trajectories that could have followed from WWI.
The more time passes, the more branching points there are, and the more branches. Depending on the parameters, it's an exponential explosion. It seems like almost all of the total simulation time is spent simulating the decades leading up to the singularity.
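(A toy calculation, my own sketch rather than anything from the comment: if history branches with some factor `b` at each time step and shared prefixes are cached, the compute needed at depth `t` is proportional to `b**t`, so nearly all of the simulated step-count lands in the final few steps before the branching tree's leaves. The parameter values below are purely illustrative.)

```python
# Toy model: with branching factor b and caching of shared history prefixes,
# the number of distinct simulation-steps at depth t is b**t, so total compute
# is dominated by the last few steps before the "singularity".

def fraction_of_compute_in_last_steps(b: int, total_steps: int, last_steps: int) -> float:
    """Fraction of all simulated node-steps that fall in the final `last_steps` steps."""
    total = sum(b**t for t in range(1, total_steps + 1))
    tail = sum(b**t for t in range(total_steps - last_steps + 1, total_steps + 1))
    return tail / total

if __name__ == "__main__":
    # Hypothetical parameters: 2 branches per step, 100 steps, look at the last 10.
    print(fraction_of_compute_in_last_steps(b=2, total_steps=100, last_steps=10))
    # ~0.999: under these assumptions, almost all compute goes to the end of the tree.
```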
I think I had mentally cached their work on psychology as attempting to be general enough to apply to AIs too, but I don’t know if that’s accurate.
I think that some of them thought it was relevant to AI (eg at least one person was concerned about docs about Connection Theory being on the internet because it might yield AI capability advances), but that this was largely because they were extremely uninformed about what modern ML is.
This is a good statement of an important counterpoint. Thanks for writing it.
(I'll leave for another comment/post the question of what went wrong in my generation. The "types of arguments" I objected to above all seem quite EA-flavored, and so one salient possibility is just that the increasing prominence of EA steered my generation away from the type of mentality in which it's even possible to aim towards scientific breakthroughs. But even if that's one part of the story, I expect it's more complicated than that.)
I'm reminded of Patrick Collison's (I now think, quite wise) comment on EA:
Now if the question is, should everyone be an EA or even, I guess in the individual sense, am I or do I think I should be an EA? I think – and obviously there's kind of heterogeneity within the field – but my general sense is that the EA movement is always very focused on kind of rigid, not rigid, that's unfair perhaps, but on sort of estimation, analytical quantification, and sort of utilitarian calculation, and I think that as a practical matter that means that you end up too focused on that which you can measure, which again means – or as a practical matter means – you're too focused on things that are sort of short-term, like bed nets or deworming or whatever being obvious examples. And are those good causes? I would say almost definitely yes, obviously. Now we've seen some new data over the last couple of years that maybe they're not as good as they initially seemed, but they're very likely to be really good things to do.
But it's hard for me to see how, you know, writing a treatise of human nature would score really highly in an EA oriented framework. As assessed ex-post that looked like a really valuable thing for Hume to do. And similarly, as we have a look at the things that in hindsight seem like very good things to have happen in the world, it's often unclear to me how an EA oriented intuition might have caused somebody to do so. And so I guess I think of EA as sort of like a metal detector, and they've invented a new kind of metal detector that's really good at detecting some metals that other detectors are not very good at detecting. But I actually think we need some diversity in the different metallic substances which our detectors are attuned to, and for me EA would not be the only one.
Kind of a tangent:
This is all related to something Buck recently wrote: "I spend most of my time thinking about relatively cheap interventions that AI companies could implement to reduce risk assuming a low budget, and about how to cause AI companies to marginally increase that budget". I'm sure Buck has thought a lot about his strategy here, and I'm sure that you've thought a lot about your strategy as laid out in this post, and so on. But a part of me is sitting here thinking: man, everyone sure seems to have given up. (And yes, I know it doesn't feel like giving up from the inside, but from my perspective that's part of the problem.)
Thanks for pointing this out.
I've been thinking lately about how much folks around here more or less dismiss the idea of an AI pause as unrealistic because we're not going to get that much political buy-in.
I (speculatively) think that this is a bit trapped in a mindset that is assuming the conclusion. Big political changes like that one have happened in the past, and they have often seemed impossible before they happened and inevitable in retrospect. And, when something big like that changes, part of the process is a cascade, where whole deferral structures change their mind / attitude / preferences about something. How much buy-in you have before that cascade happens may not be very indicative of where that cascade can end up.
I, personally, don't feel like I know how to "call it" when big changes are on the table or when they're not. But it sure does seem like people are counting us out much too early, given the fundamentals of the situation. We all think that the world is going to change very radically in the next few years. It's not clear what kinds of cascades are on the table.
I provisionally think that we should feel less bashful about advocating for an AI pause, and more agnostic about how likely that is to come to pass.
lol, it looks like I should have finished reading the sentence I'm responding to, before starting to write a comment, since you're making a similar point, re legibility.
I'm still interested in who you think is making progress though, even if illegibly to most people.
Conversely, when I encounter people who seem to me to have meaningfully pushed one of these frontiers
Can you give some examples of what you're thinking about here?
There have been notable advances in cognitive science over the past 20 years, and those are pretty cool.
But outside of that, I'm not sure what progress you're thinking about. There are definitely skilled practitioners of psychological arts re: emotional processing, but the domain is esoteric in the sense that their skill doesn't translate to third-person legible models that can be built on. It's not clear to me if the most skilled psychological practitioners of 2025 are more or less skilled than the most skilled of 2010 or 1995. And insofar as we know more about what's going on in that domain, it seems like it's from more-or-less academic cognitive science work (stuff like Predictive Processing).
Politics seems similar. I'm not even sure what you're counting as "pushing the frontier" in our understanding of politics. Do you mean concepts like preference falsification and Naunihal Singh's models of coups?