(1) I agree, but don't have confidence that this alternate approach results in faster progress. I hope I'm proven wrong.
(4) Also agreed, but I think this hinges on whether the failing plans are attempted in such a way that they close off other plans, either by affecting planning efforts or by affecting reactions to various efforts.
(5) Fair enough.
Liron: Carl Feynman. What is your P(Doom)?
Carl: 43%.
Comments like this always remind me of the Tetlock result that forecasters who report probability estimates using more-precise, less-round numbers do in fact outperform others, and are more accurately incorporating the sources of information available to them.
I'm curious if you have an opinion on the relative contributions of different causes, such as:
I'm thinking (as an example) of Newton, who used calculus to get his results but translated them out of calculus in order to publish. This let other people see the results were right, but not how anyone could have come up with them. Without that known physics payoff, even communicated through inadequate tools, there wouldn't have been enough impetus (pun intended) to push the relevant community of people to learn calculus.
It's kind of fun to picture AI agents working during the day and resting at night. Maybe that's the true AGI moment.
In context, this will depend on the relative costs of GPUs and energy storage, or the relative value of AI vs other uses of electricity that can be time-shifted. I would happily run my dryer or dishwasher during the daytime instead of at night in order to get paid to let OpenAI deliver a few million extra tokens. Liberalizing current electricity market participation and the ability to provide ancillary services has a lot of unrealized potential. If you have AGI, and an actually-well-functioning electricity market, it's not likely to be the AI that has to shut down first when solar production decreases.
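As a rough back-of-envelope check on the "few million extra tokens" figure (all numbers below are my own illustrative assumptions, not anything from the comment above): a dryer load is on the order of 3 kWh, a single datacenter GPU draws very roughly 0.7 kW under load, and serving throughput of around a thousand tokens per second per GPU is plausible for a mid-sized model.

```python
# Rough sketch: how many inference tokens could the energy of one shifted
# dryer load support? All constants are illustrative assumptions.

DRYER_LOAD_KWH = 3.0           # assumed energy of one dryer/dishwasher cycle
GPU_POWER_KW = 0.7             # assumed draw of one datacenter GPU under load
TOKENS_PER_SEC_PER_GPU = 1000  # assumed serving throughput, mid-sized model

gpu_hours = DRYER_LOAD_KWH / GPU_POWER_KW           # ~4.3 GPU-hours
tokens = gpu_hours * 3600 * TOKENS_PER_SEC_PER_GPU  # ~15 million tokens

print(f"~{gpu_hours:.1f} GPU-hours -> ~{tokens / 1e6:.0f} million tokens")
```

Even if each of those assumptions is off by a factor of a few, "a few million extra tokens" per shifted load comes out in the right ballpark, which is the only point of the sketch.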
This would be great to have, for sure, and I wish you luck in working on it!
I wonder if, for the specific types of discussions you point to in the first paragraph, it's necessary or even likely to help? Even if all the benchmarks today are 'bad' as described, they measure something, and there's a clear pattern of rapid saturation as new benchmarks are created. METR and many others have discussed this a lot. There have been papers on it. It seems like the meta-level approach of mapping out saturation timelines should be sufficient to convince people that for any given capability they can define, if they make a benchmark for it, AI will acquire that capability at the level the benchmark can measure. In practice, what follows is usually some combination of pretending it didn't happen, or else denying the result means anything and moving the goalposts. For a lot of people I end up in those kinds of discussions with, I don't think much would help beyond literally seeing AI put them and millions of others permanently out of work, and even then I'm not sure.
Upvoted - I do think the lack of a coherent, actionable strategy, one that actually achieves its goals if it succeeds, is a general problem of many advocacy movements, not just AI safety. A few observations:
(1) Actually-successful historical advocacy movements that solved major problems usually did so incrementally over many iterations, taking the wins they could get at each moment while putting themselves in position to take advantage when further opportunities arose.
(2) Relatedly, don't complain about incremental improvements (yours or others'). Celebrate them, or no one will want to work with you or compromise with you, and you won't end up in position to get more wins later.
(3) Raising awareness isn't a terminal goal or a solution, but it gives others a reason to pay attention to you at all. If you have actually good proposals for what to do about a problem, and are in a position to make the case that your proposals are effective and practical, then a perception that the problem is real and a solution is necessary can be very helpful. If a politician solves a major problem that is not yet a crisis, or is not seen as a crisis by their constituents, then solving the problem just looks like wasting money/time/effort to the people who decide whether they get to keep their jobs.
(4) Don't plan a path that leads to victory; plan so that all paths lead to victory. If you make a plan, any plan, to achieve a sufficient outcome, it will require many things to go right, and therefore it will fail for reasons you did not anticipate. It will also channel your further planning along predetermined directions, limiting your ability to adapt to future opportunities and setbacks. Avoiding this failure mode is part of the upshot of seeking and celebrating incremental wins unreservedly and consistently, as long as those wins don't cut off the path to further progress.
(5) Being seen to have a long-term plan that no one currently in power would support seems like a quick way to get shut out of a conversation unless you already have some form of power such that you're hard to ignore.
I was so glad the other day to see Nate Soares talk about the importance of openly discussing x-risks, and also to see the recent congressional hearings that actually started to ask about real AI risks, because both are openings to push the conversation in useful directions. I genuinely worry that AI safety orgs and advocates will make the mistakes that e.g. climate change activists often make: shutting down proposals that are clearly net improvements likely to increase public support for further action, in favor of (in practice) counterproductively maintaining the status quo and turning people off. I started openly discussing x-risk with more and more people in my life last year, and found they were quite receptive to hearing it from someone they knew and trusted to be generally reasonable.
I do think there is value in having organizations around with the kinds of plans you are discussing, but I don't think, in general, those are the ones that actually get the opportunity to make big wins. I think they serve as generators of ideas that get filtered through more incremental and 'moderate' organizations over time, and make those other organizations seem like better partners to collaborate with. I don't have good data for this, more a general intuition from looking at a few historical examples.
I can't tell which answer to this question is meant to be 'for' or 'against' the OP's point, but it sounds like the latter. Even if it's the case that the neurons contain something nutritionally useful (and I'd be surprised, but not too surprised, if they did), consider that these shellfish have neurons, and that unlike with other meats we actually eat the neurons rather than removing them with the organs we discard before eating. Also, we have very good reason to avoid eating the neural tissue of mammals.
I will definitely be checking out those books, thanks, and your response clarified the intent a lot for me.
As for where new metaphors/mechanisms come from, and whether they're ever created out of nothing, I think that is very, very rare, probably even rarer than it seems. I have half-joked with many people that at some level there are only a few fundamental thoughts humans are capable of having, and the rest is composition (yes, this metaphor comes from the idea of computers with small instruction sets). But more seriously, I think it's mostly metaphors built on other metaphors, all the way down.
I have no idea how Faraday actually came up with the idea of force lines, but it looks like that happened a couple decades after the first known use of isotherms, and a few more decades after the first known use of contour lines, with some similar examples dating back to the 1500s. The early examples I can quickly find were mostly about isobaths, mapping the depth of water for navigation starting in the Age of Exploration. Plus, there's at least one use of isogons, lines of equal magnetic inclination, also for navigation. AFAICT Faraday added the idea of direction to such lines, long before anyone else formalized the idea of vectors. But I can still convince myself, if I want, that it is a metaphor building on a previous well-known metaphor.
If I had to guess a metaphor for Newton, yes I think clockwork is part of it, but mathematically I'd say it's partly that the laws of nature are written in the language of geometry. Not just the laws of motion, but also ray optics.