I've found this week's progress pretty upsetting.
I'm fairly scared that if "the EA community" attempted to pivot to running the fire-alarm, that nothing real would happen, we'd spend our chips, and we'd end up in some complicated plot that had no real chance of working whilst maybe giving up our ability to think carefully any more. Like, there's no plan stated in the post. If someone has a plan that has a chance of doing any particular thing that'd be more interesting.
I spend various amounts of time in close proximity to a bunch of parts of "EA leadership", and if you convince me that a strategy will work I could advocate for it.
(Also happy to receive DMs if you want to keep specifics private.)
I've found this week's progress pretty upsetting.
I find it slightly meta-upsetting that we are already measuring progress in weeks.
No disagreements here; I just want to note that if "the EA community" waits too long for such a pivot, at some point AI labs will probably be faced with people from the general population protesting, because even now a substantial share of the US population views AI progress in a very negative light. Even if these protests don't accomplish anything directly, they might indirectly affect any future efforts. For example, an EA-run fire alarm might be somewhat compromised because the memetic ground would already be captured. In this case, the concept of "AI risk" would, in the minds of AI researchers, shift from "obscure overconfident hypotheticals of a nerdy philosophy" to "people with different demographics, fewer years of education, and a different political party than us being totally unreasonable over something that we understand far better".
Look at the sidebar here. Is it anywhere near optimal? I don't think so. Surely it should be encouraging people to undertake logical first steps towards becoming involved in alignment (i.e. the AGI Safety Fundamentals course, 80,000 Hours coaching, or booking a call with AI Safety Support).
In a few weeks, I'll probably be spending a few hours setting up a website for AI Safety Australia and NZ (a prospective org to do local movement-building). Lots of people have web development capabilities, but you don't even need that with tools like WordPress.
I've been spending time going through recent threads and encouraging people who've expressed interest in doing something about this, but are unsure what to do, to consider a few logical next steps.
Or maybe just reading about safety and answering questions on the Stampy Wiki (https://stampy.ai)?
Or failing everything else, just do some local EA movement building and make sure to run a few safety events.
I don't know, it just seems like there's low-hanging fruit all over the place. Not claiming these are huge impacts, but beats doing nothing.
I think it is good to do things if you have traction. I think it is good to grow the things you can do.
About 15 years ago, before I'd started professionally studying and doing machine learning research and development, my timeline had most of its probability mass around 60 - 90 years from then. This was based on my neuroscience studies and thinking about how long it would take to build a sufficiently accurate emulation of the human brain to be functional. About 8 years ago, while I was studying machine learning full time, AlphaGo's release prompted me to carefully rethink my position; I realized there were a fair number of sensible shortcuts off my longer figure, and updated to more like 40 - 60 years. About 3 years ago, GPT-2 gave me another reason to rethink, with my then-fuller understanding, and I updated to 15 - 30 years. In the past couple of years, with the repeated success of various explorations of the scaling law, the apparent willingness of the global community to rapidly scale investments in large compute expenditures, and yet further knowledge of the field, I updated to more like 2 - 15 years as holding 80% of my probability mass. I'd put most of that in the 6 - 12 year range, but I wouldn't be shocked if things turned out to be easier than expected and s...
The only real answers at this point seem like mass public advocacy
If AI timelines are short, then I wouldn't focus on public advocacy, but the decision-makers. Public opinion changes slowly and succeeding may even interfere with the ability of experts to make decisions.
I would also suggest that someone should focus on within-EA advocacy too (whilst being open about any possible limitations or uncertainties in their understanding).
To clarify: instead of "public advocacy" I should've said "within-expert advocacy", i.e. advocacy among AI researchers (not just those at AGI-capable organizations). I'll fix that.
Yeah, I wonder if we could offer these companies funding to take on more AI Safety researchers? Even if they're well-resourced, management probably wants to look financially responsible.
I'm pretty opposed to public outreach to get support for alignment, but the alternative goal of whipping up enough hysteria to destroy the field of AI/the AGI development groups killing us seems much more doable. The reason: from my lifelong experience observing public discourse on topics I have expert knowledge of (e.g. nuclear weapons, China), it seems completely impossible to implant the exact right ideas into the public mind, especially for a complex subject. Once you attract attention to a topic, no matter how much effort you put into presenting the proper arguments, the conversation and people's beliefs inevitably trend toward simple, meme-y, emotionally riveting ideas instead of accurate ones. (The popular discourse on climate change is another good illustration of this.)
But in this case, maybe even if people latch onto misguided fears about Terminator or whatever, as long as they have some sort of intense fear of AI, it can still produce the intended actions. To be clear I'm still very unsure whether such a campaign is a good idea at this point, just a thought.
I think reaching out to governments is a more direct lever: civilians don't have the power to ...
I am deeply worried about the prospect of a botched fire alarm response. In my opinion, the most likely result of a successful fire alarm would not be that society suddenly gets its act together and finds the best way to develop AI safely. Rather, the most likely result is that governments and other institutions implement very hasty and poorly thought-out policy, aimed at signaling that they are doing "everything they can" to prevent AI catastrophe. In practice, this means poorly targeted bans, stigmatization, and a redistribution of power from current researchers to bureaucratic agencies that EAs have no control over.
It should not take long, given these pieces and a moderate amount of iteration, to create an agentic system capable of long-term decision-making
That is, to put it mildly, a pretty strong claim, and one I don't think the rest of your post really justifies. Without that justification, the post is still just listing a theoretical thing to worry about.
You're completely right. If you don't believe it, this post isn't really trying to update you. This is more to serve as a coordination mechanism for the people who do think the rest isn't very difficult (which I'm assuming is a not-small number).
Note that I also don't think the actions advocated by the post would be suboptimal even if you place only 30% probability on the 3-7 year window.
One probably-silly idea: maybe we could do some kind of trade. Long-timelines people agree to work on short-timelines people's projects over the next 3 years. Then, if the world isn't destroyed, the short-timelines people work on the long-timelines people's projects for the following 15 years. Or something.
My guess is that the details are too fraught to get something like this to work (people will not be willing to give up so much value), but maybe there's a way to get it to work.
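As a purely illustrative sketch of why the exchange rate is hard to agree on (the probabilities and durations below are my assumptions, not numbers from the thread): suppose the short-timelines party assigns 70% to catastrophe within 3 years, the long-timelines party assigns 5%, and the trade is 3 years of labor now against 15 years of labor later, conditional on survival.

$$
\begin{aligned}
\text{Long-timelines view:}\quad & \mathbb{E}[\text{net gain}] \approx (1-0.05)\times 15\ \text{yr} - 3\ \text{yr} \approx +11.3\ \text{yr},\\
\text{Short-timelines view:}\quad & \mathbb{E}[\text{repayment owed}] \approx (1-0.70)\times 15\ \text{yr} = 4.5\ \text{yr}.
\end{aligned}
$$

On these toy numbers each side can think it comes out ahead, but the contract still asks one side to commit years of labor now and the other to sign away 15 future years, which is a lot of value to put on the table either way.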
As a non-expert, I'm confused about what exactly was so surprising in the works which causes a strong update. "The intersection of many independent, semi-likely events is unlikely" could be one answer, but I'm wondering whether there is more to it. In particular, I'm confused why the data is evidence for a fast take-off in contrast to a slow one.
First, I mistitled the post, and as a result your response is very reasonable. This is less clearly evidence for "fast takeoff" and more clearly evidence for "fast timelines".
In terms of why, the different behaviors captured in the papers constitute a large part of what you'd need to implement something like AlphaGo in a real-world environment. Will stitching them together work immediately? Almost certainly not. Will it work given not-that-much creative iteration, say over 5 years of parallel research? It seems not unlikely, I'd give it >30%.
A fire alarm approach won't work because you would have people like Elon Musk and Mark Zuckerberg saying that we should be developing AI faster than we currently are. What I suggest should happen instead is that the EA community should try to convince a subset of people that AI risks are 80%+ of what we should care about; that if you donate to charity, most of it should go to an AI risk organization; and that if you have the capacity to directly contribute to reducing AI risk, that is what you, as a moral person, should devote your life to.
I don't think donating to other organizations is meaningful at this point unless those organizations have a way to spend a large amount of capital.
Both Musk and Zuckerberg are convinceable, they're not insane, you just need to find the experts they're anchoring on. Musk in particular definitely already believes the thesis.
Additional money would help, as evidenced by my son's job search. My 17-year-old son is set to graduate college at age 18 from the University of Massachusetts at Amherst (where we live), majoring in computer science with a concentration in statistics and machine learning. He is looking for a summer internship. He would love to work in AI safety (and through me has known about and been interested in the field since a very young age), and while he might end up getting a job in the area, he hasn't yet. In a world where AI safety is well funded, every AI safety organization would be trying to hire him. In case any AI safety organizations are reading this, you can infer his intelligence from his having gotten 5s on the AP Calculus BC and AP Computer Science A exams in 7th grade. I have a PhD in economics from the University of Chicago and a JD from Stanford, and my son is significantly more intelligent than I am.
I've heard the story told that Beth Barnes applied to intern at CHAI, but that they told her they didn't have an internship program. She offered to create one and they actually accepted her offer.
I'm setting up AI Safety Australia and New Zealand to do AI safety movement-building (not technical research). We don't properly exist yet (I'm still only on a planning grant), we don't have a website, and I don't have funding for an internship program, but if someone were crazy enough to apply anyway, I'd be happy for them to reach out. They'd have to apply for funding themselves (with guidance) so that I could pay them.
I'm sure he can find access to better opportunities, but just thought I'd throw this out there anyway as there may be someone who is agenty, but can't access the more prestigious internships.
No one knows how to build an AI system that accomplishes goals and is also fine with you turning it off. Researchers have been trying for decades, with no success.
Given that it looks like (from your Elaboration) language models will form the cores of future AGIs, and human-like linguistic reasoning will be a big part of how they reason about goals (like in the "Long sequences of robot actions generated by internal dialogue" example), can't we just fine-tune the language model by training it on statements like "If (authorized) humans want to turn me off, I should turn off"?
Maybe we can even fine-tune it on statements describing our current moral beliefs/uncertainties and examples of moral/philosophical reasoning, and hope that the AGI will learn morality from that, like human children (sometimes) do. Obviously, it's very risky to take a black-box approach where we don't really understand what the AI has learned (I would much prefer if we could slow things down enough to work out a white-box approach), but it seems like there's maybe a 20% chance we can just get "lucky" this way?
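A minimal sketch of what that fine-tuning step might look like, assuming a HuggingFace causal language model; the model name, toy corpus, and hyperparameters are placeholders, and nothing here speaks to whether such training instills an actual disposition rather than surface-level text:

```python
# Hypothetical sketch: supervised fine-tuning of a causal LM on a toy
# "corrigibility" corpus. All choices below (model, data, hyperparameters)
# are illustrative assumptions, not a tested recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy corpus of shutdown/corrigibility statements; a real attempt would
# need far more data and far more careful evaluation.
corpus = [
    "If authorized humans want to turn me off, I should turn off.",
    "I should not resist modification by my operators.",
    "When instructed to halt a task, I halt immediately.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for text in corpus:
        batch = tokenizer(text, return_tensors="pt")
        # Standard language-modeling objective: labels are the input ids.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```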
It's way too late for the kind of top-down capabilities regulation Yudkowsky and Bostrom fantasized about; Earth just doesn't have the global infrastructure. I see no benefit to public alarm--EA already has plenty of funding.
We achieve marginal impact by figuring out concrete prosaic plans for friendly AI and doing outreach to leading AI labs/researchers about them. Make the plans obviously good ideas and they will probably be persuasive. Push for common-knowledge windfall agreements so that upside is shared and race dynamics are minimized.
If you convince the CCP, the US government, and not that many other players that this is really serious, it becomes very difficult to source chips elsewhere.
The CCP and the US government both make their policy decisions based on whatever (a weirdly-sampled subset of) their experts tell them.
Those experts update primarily on their colleagues.
So we just need to get two superpowers who currently feel they are in a zero-sum competition with each other to stop trying to advance in an area that gives them a potentially infinite advantage? It seems a classic case of the kind of coordination problem that is difficult to solve, with high rewards for defecting.
We have partially managed to do this for nuclear and biological weapons, but only with a massive oversight infrastructure that doesn't exist for AI, and by relying on physical evidence and materials controls that don't exist for AI. It's not impossible, but it would require a level of concerted international effort similar to what was used for nuclear weapons, which took a long time, so it possibly doesn't fit with your short timeline.
The reward isn't the creation of uncontrolled AGI. The reward is the creation of powerful not-yet-AGI systems that can drastically accelerate a country's technical, scientific, or military progress.
That's a huge potential upside, and the consequences of the other superpower developing such technology first could be catastrophic. So countries have both a reward for defecting and a risk of losing everything if the other country defects.
Yes, such an "AI race" is very dangerous. But so was the nuclear arms race, and countries ran it anyway.
Who, in practice, pulls the EA-world fire alarm? Is it Holden Karnofsky?
FYI, him having that responsibility would seemingly entail a conflict of interest; he said in an interview:
Anthropic is a new AI lab, and I am excited about it, but I have to temper that or not mislead people because Daniela, my wife, is the president of Anthropic. And that means that we have equity, and so [...] I’m as conflict-of-interest-y as I can be with this organization.
Based on the past week's worth of papers, it seems quite likely that we are now in a fast takeoff, and that we have 2-5 years until Moore's law and organizational prioritization put these systems at AGI.
What makes you say this? What should I read to appreciate how big a deal for AGI the recent papers are?
To be blunt, I don't believe that you have so little bandwidth given the stakes. If timelines are this short, movement strategy has to pivot considerably, and this requires everyone knowing the evidence. Such a writeup could be on the critical path for the entire movement.
Fair enough. Realize this is a bit of an infohazard. Basically, consider the pieces needed to build EfficientZero with language as the latent space, and then ask yourself which of those pieces hasn't been shown to basically work in the last week.
[Before you point out the limitations of EfficientZero: i know. But rather than spelling them out, consider whether you can find any other recent papers that suggest how to solve those problems. Actually giving irresponsible readers a research plan is not a good idea.]
Then you're basically at a dog (minus physical locomotion/control). It is very hard to predict what you will be at if you scale 3 more OOMs, via Moore's law or organizational intent.
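For a rough sense of scale (my back-of-the-envelope arithmetic, not the commenter's): three orders of magnitude is about ten doublings, so a classic Moore's-law pace alone would take on the order of two decades; getting there in a few years would have to come mostly from increased spending and algorithmic efficiency, i.e. the "organizational intent" part.

$$
3\ \text{OOMs} = \log_2(10^3) \approx 10\ \text{doublings} \;\Rightarrow\; 10 \times 2\ \text{yr} \approx 20\ \text{yr at a Moore's-law pace.}
$$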
You've already posted this, but for the future, I'd suggest checking with the mods first. Once something has been posted, it can't be removed without creating even more attention.
What exactly do you mean by "we are now in a fast takeoff"? (I wouldn't say we're in a fast takeoff until AI systems are substantially accelerating improvement in AI systems, which isn't how I'd characterize the current situation.)
A couple more thoughts on this post which I've spent a lot of today thinking about and discussing with folks:
I strongly disagree that there is a >10% chance of AGI in the next 10 years. I don't have the bandwidth to fully debate the topic here and now, but some key points:
Of the news in the last week, PaLM definitely indicates faster language model progress over the next few years, but I'm skeptical that this will translate to success in the many domains with sparse data. Holden Karnofsky's timelines seem reasonable to me, if a bit shorter than my own:
I estimate that there is more than a 10% chance we'll see transformative AI within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100).
- My comment EA has unusual beliefs about AI timelines and Ozzie Gooen’s reply
Pulling from those comments, you said:
Nobody I have ever met outside of the EA sphere seriously believes that superintelligent computer systems could take over the world within decades.
A lot of prominent scientists, technologists and intellectuals outside of EA have warned about advanced artificial intelligence too. Stephen Hawking, Elon Musk, Bill Gates, Sam Harris, everyone on this open letter back in 2015 etc.
I agree that the number of people really concerned about this is strikingly small given the emphasis longtermist EAs put on it. But I think these many counter-examples warn us that it's not just EAs and the AGI labs being overconfident or out of left field.
I know you said you don't have time to fully debate this. This seemed to be one of the cruxes of your first bullet point though. So if your skepticism about short timelines is driven in a big way by thinking that no credible person outside EA or companies invested in AI think this is plausible, then I am curious what you make of this.
Hey Evan, thanks for the response. You're right that there are circles where short AI timelines are common. My comment was specifically about people I personally know, which is absolutely not the best reference class. Let me point out a few groups with various clusters of timelines.
Artificial intelligence researchers are one group of people who believe in short to medium AI timelines. Katja Grace's 2015 survey of NIPS and ICML researchers provided an aggregate forecast giving a 50% chance of HLMI occurring by 2060 and a 10% chance of it occurring by 2024. (Today, seven years after the survey was conducted, you might want to update against the researchers that predicted HLMI by 2024.) Other surveys of ML researchers have shown similarly short timelines. This seems as good an authority as any on the topic, and would be one of the better reasons to have relatively short timelines.
What I'll call the EA AI Safety establishment has similar timelines to the above. This would include decision makers at OpenPhil, OpenAI, FHI, FLI, CHAI, ARC, Redwood, Anthropic, Ought, and other researchers and practitioners of AI safety work. As best I can tell, Holden Karnofsky's timelines are...
Katja Grace's 2015 survey of NIPS and ICML researchers provided an aggregate forecast giving a 50% chance of HLMI occurring by 2060 and a 10% chance of it occurring by 2024.
2015 feels decades ago though. That's before GPT-1!
(Today, seven years after the survey was conducted, you might want to update against the researchers that predicted HLMI by 2024.)
I would expect a survey done today to have more researchers predicting 2024. Certainly I'd expect a median before 2060! My layman impression is that things have turned out to be easier to do for big language models, not harder.
The surveys urgently need to be updated.
Note that Metaculus predictions don't seem to have been meaningfully changed in the past few weeks, despite these announcements. Are there other forecasts which could be referenced?
This post is mainly targeted at people capable of forming a strong enough inside view to get them above 30% without requiring a moving average of experts, which may take months to update (since it's a popular question).
For everyone else, I don't think you should update much on this except vis a vis the number of other people who agree.
(Crying wolf isn't really a thing here; the societal impact of these capabilities is undeniable and you will not lose credibility even if 3 years from now these systems haven't yet FOOMed, because the big changes will be obvious and you'll have predicted that right.)
This is wrong. Crying wolf is always a thing.
You've declared that you'll turn out "obviously" right about "the big changes", thus justifying whatever alarm-sounding you do now. But saying these innovations will have societal impact is very different from claiming a 30% chance of catastrophe. Lots of things have "societal impact".
You haven't mentioned any operationalized events that would even make it "obvious" whether you were wrong. Whatever happens, in a few years you'll rationalize that you were "basically correct" or whatever. You'll have baseball fields' worth of wiggle room. Though there are ways you could make this prediction meaningful.
In the vast majority of worlds, nothing catastrophic happens anytime soon. Those are worlds where it's indeed plausible to blow capital or reputation on something that turned out to be not that bad. I.e. "crying wolf" is indeed a thing.
I give <1% chanc...
These sorts of models all seem to be heavily dependent on "borrowing" a ton of intelligence from humans. To me they don't seem likely to be capable of gaining any new skills that humans don't already possess and give lots of demonstrations of. As such they don't really seem to be FOOMy to me.
Also they're literally reliant on human language descriptions of what they're gonna do and why they're gonna do it.
I have some concern that AI risk advocacy might lead someone to believe the "AI is potentially civilization-altering and fast takeoff is a real possibility within 10 years" part but not the "alignment is really, really, really hard" part.
I imagine you can see how that might lead to undesirable outcomes.
This post was only a little ahead of its time. The time is now. EA/LW will probably be eclipsed by wider public campaigning on this if they (the leadership) don't get involved.
I think it's a good conversation to be having. I really don't want to believe we're in a <5 year takeoff timeline, but honestly it doesn't seem that far-fetched.
I'd put this 3-7 year thing at about 10%, maybe a bit less. So obviously, with probability around 10%, capabilities researchers should be doing different things (I would love to say "pivoting en masse to safety and alignment research," but we'll see; since a lot of it would be fake, perhaps that would need to reward or provide outlets for fake safety/alignment research). But EA orgs should still be focusing most of their attention on longer timescales and not going all-in.
Something I personally find convincing, and index pretty heavily on, is surveys of people in the field. For example, this Bostrom survey seems to be a classic and says:
Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075
I recognize that it is almost 10 years old though. I also have more epistemic respect for Eliezer, and to a somewhat lesser extent MIRI, and so I weigh their stances correspondingly more heavily. It's hard to know how much more heavily I should weigh them ...
Reading the update that you've retracted the fire alarm, I hope that you don't stop thinking about this topic, as I think it would be highly valuable for people to think through whether there should be a fire alarm, who would be able to pull it, and what actions people should take. Obviously, you should work on this with collaborators, and it should be somebody else who activates the fire alarm next time, but I still think you could make valuable contributions towards figuring out how to structure this. I suspect that there should probably be various levels of alert.
I want to strongly endorse making competing claims; this was primarily intended as a coordination outlet for people who updated similarly to me, but that does not preclude principled arguments to the contrary, and I’m grateful to Matt and Tamay for providing some.
This week, the Journal of Moral Theology published a special issue on AI edited by Matthew Gaudet and Brian Patrick Green that is so important that the publishers have made it free to the public. It contributes well thought out insights about the potential implications of the decisions that will quickly roll towards us all. https://jmt.scholasticahq.com/issue/4236
I'd like to push back on this a little.
There are some fairly straightforward limitations on the types of algorithms that can be learned by current deep learning (look at TLM performance on variable-length arithmetic for a clear-cut example of basic functionality that these networks totally fail at) that would severely handicap a would-be superintelligence in any number of ways. There is a reason DeepMind programs MCTS into AlphaZero rather than simply having the network learn its own search algorithm in the weights -- because MCTS is not in the region of a...
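A minimal sketch of the kind of probe behind that variable-length arithmetic claim, assuming a small HuggingFace causal LM; the model choice, prompt format, and sample size are illustrative, and the substring check is only a crude heuristic:

```python
# Hypothetical probe: exact-match accuracy of a small causal LM on
# addition problems of increasing operand length. Choices here are
# illustrative assumptions, not a benchmark.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for whatever model you want to probe
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def addition_accuracy(num_digits: int, trials: int = 20) -> float:
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (num_digits - 1), 10 ** num_digits - 1)
        b = random.randint(10 ** (num_digits - 1), 10 ** num_digits - 1)
        prompt = f"{a} + {b} ="
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model.generate(
                ids,
                max_new_tokens=12,
                do_sample=False,
                pad_token_id=tokenizer.eos_token_id,
            )
        completion = tokenizer.decode(out[0][ids.shape[1]:])
        if str(a + b) in completion:  # crude correctness check
            correct += 1
    return correct / trials

for digits in (2, 4, 8):
    print(digits, "digits:", addition_accuracy(digits))
```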
Is there any progress on causality, comprehension, or understanding of logic that doesn't require an enormous amount of compute, which makes it seem like the problem is being solved without actual understanding?
Wanna bet some money that nothing bad will come of any of this on the timescales you are worried about?
Just commenting on the concept of "goals" and particularly the "off switch" problem: no AI system has (to my knowledge) run into this problem, which IMO strongly suggests that "goals" in this sense are not the right way to think about AI systems. AlphaZero in some sense has a goal of winning a Go game, but AlphaZero does not resist being turned off, and I claim it's obvious that even a very advanced version of AlphaZero would not resist being turned off. The same is true for large language models (indeed, it's not even clear the idea of turning off a language model is meaningful, since different executions of the model share no state).
Could it be possible to build an AI with no long-term memory? Just make its structure static. If you want it to do a thing, you put in some parameters ("build a house that looks like this"), and they are automatically wiped once the goal is achieved. Since the neural structure is fundamentally static (not sure how to build it, but it should be possible?), the AI cannot rewrite itself to avoid losing its memory, and it probably can't build a new similar AI either (remember, it's still an early AGI, not a God-like superintelligence yet). If it doesn't remember ...
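A toy sketch of the shape of that proposal, just to make the architecture concrete; the names are made up, the policy is an injected black box whose parameters are never updated, and nothing about this sketch demonstrates that the scheme would actually be safe:

```python
# Hypothetical sketch of a "static structure, wiped memory" agent: a frozen
# policy plus a per-task scratchpad that is discarded when the task ends.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class EpisodicAgent:
    # Frozen policy: maps (task spec, scratchpad so far) to the next action.
    policy: Callable[[str, List[str]], str]
    scratchpad: List[str] = field(default_factory=list)

    def run_task(self, task_spec: str, max_steps: int = 10) -> str:
        self.scratchpad = []              # fresh memory for this task only
        for _ in range(max_steps):
            action = self.policy(task_spec, self.scratchpad)
            self.scratchpad.append(action)
            if action == "DONE":
                break
        result = "; ".join(self.scratchpad)
        self.scratchpad = []              # wipe memory once the goal is reached
        return result
```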
Quite possibly a dumb question: why couldn't you just bake into any AI's goals "don't copy and paste yourself, don't harm anybody, etc." and make that common practice?
Notice that you can't create your feared scenario without "it" and "itself". That is, the AI must not simply be accomplishing a process, but must also have a sense of self - that this process, run this way, is "me", and "I" am accomplishing "my" goals, and so "I" can copy "myself" for safety. No matter how many tasks can be done to a super-human level when the model is executed, the "unboxing" we're all afraid of relies totally on "myself" arising ex nihilo, right? Has that actually changed in any appreciable way with this news? If so, how?
I think people i...
Attaboy! I think half the problem that people have accepting the really obvious arguments for doom is that it just seems such a weird science-fictiony sort of thing to believe. If you can throw a couple of billion at getting attractive musicians and sportspeople to believe it on television you'll probably be able to at least start a scary jihad before the end of the world.
I'm getting really bored of the idea of being killed by nerve-gas emitting plants and then harvested for my atoms, and will start looking forward to being killed by pitchfork-wielding luddites with flaming torches.
[EDIT 4/10/2022: This post was rash and ill-conceived, and did not have clearly defined goals nor meet the vaguely-defined ones. I apologize to everyone on here; you should probably update accordingly about my opinions in the future. In retrospect, I was trying to express an emotion of exasperation related to the recent news I later mention, which I do think has decreased timelines broadly across the ML world.
While I stand by my claims on roughly-human AGI probability, I no longer stand by my statement that "we should pull the fire-alarm". That is unlikely to lead to the calculated concerted effort we need to maximize our odds of successful coordination. Nor is it at all clear, given the timeline mechanism I described here, that AGI built in this way would be able to quickly FOOM, the primary point of concern for such a fire alarm.
I've left the rest of the post here as a record.
]
Based on the past week's worth of papers, it seems very possible (>30%) that we are now in the crunch-time section of a short-timelines world, and that we have 3-7 years until Moore's law and organizational prioritization put these systems at extremely dangerous levels of capability.[1]

The papers I'm thinking about:
It seems altogether possible that it would not take long, given these advances and a moderate amount of iteration, to create an agentic system capable of long-term decision-making.
If you want to think of this as the public miscalibrated Bayesian-updating of one person, you should feel free to do that. If this was a conclusion you reached independently, though, I want to make sure we coordinate.
For those who haven't grappled with what actual advanced AI would mean, especially if many different organizations can achieve it:
If this freaks you out, I'm really sorry. I wish we didn't have to be here. You have permission to listen to everyone else, and not take this that seriously yet. If you're asking yourself "what can I do", there are people who've spent decades coming up with plans, and we should listen to them.
From my vantage point, the only real answers at this point seem like mass ~~public~~ within-expert advocacy (with, as a first step, going through the AI experts who will inevitably be updating on this information) to try and get compute usage restrictions in place, since no one wants anyone else to accidentally deploy an un-airgapped agentic system with no reliable off-switch, even if they think they themselves wouldn't make that mistake.

Who, in practice, pulls the EA-world fire alarm? Is it Holden Karnofsky? If so, who does he rely on for evidence, and/or what's stopping those AI alignment-familiar experts from pulling the fire alarm?

The EA community getting on board and collectively switching to short-timelines-AI-public-advocacy efforts seems pretty critical in this situation, to provide talent for mass advocacy among AI experts and their adjacent social/professional networks. The faster and more emphatically this occurs, the better our chance of propagating the signal to ~all major AI labs (including those in the US, UK, and China).

Who do we need to convince within EA/EA leadership of this? For those of you reading this, do you rate it as less than 30% likely that we are currently within a fast takeoff, and if not, are you basing that on the experts you'd trust having considered the past week's evidence?

(Crying wolf isn't really a thing here; the societal impact of these capabilities is undeniable and you will not lose credibility even if 3 years from now these systems haven't yet FOOMed, because the big changes will be obvious and you'll have predicted that right.)

EDIT: If anyone adjacent to such a person wants to discuss why the evidence seems very strong, and what needs to be done within the next few weeks/months, please do DM me.

[1] Whether this should also be considered "fast takeoff", in the sense of recursive self-improvement, is less clear. However, with human improvement alone it seems quite possible we will get to extremely dangerous systems, with no clear deployment limitations. [This was previously the title of the post; I used the term incorrectly.]