It seems that they consider a soft takeoff more likely than a hard takeoff, which is still compatible with understanding the concept of an intelligence explosion.
Yeah, the best argument I can think of for this course of action is something like: soft takeoff is more likely, and even if hard takeoff is a possibility, preparing for hard takeoff is so terrifically difficult that it doesn't make sense to even try. So let's optimize for the scenario where soft takeoff is what happens.
From their site:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
The money quote is at the end, literally—$1B in committed funding from some of the usual suspects.