Suppose that your current estimate of the probability that an AI takeoff occurs in the next 10 years is some value x. As technology is constantly becoming more sophisticated, presumably your estimate 10 years from now will be some y > x, and 10 years after that it will be some z > y. My question is: does there come a point in the future where, assuming that an AI takeoff still has not happened in spite of much more advanced technology, you begin to revise your estimate downward with each passing year? If so, how many decades (or centuries) from now would you expect that inflection point in your estimate?
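To make the shape of that question concrete, here is a minimal toy sketch (my own assumptions, not anything the commenter specified): two hypotheses, one where takeoff is achievable with a per-decade hazard that ramps up as technology matures, and one where takeoff never happens. Observing "no takeoff yet" shifts posterior weight toward the second hypothesis, so the forecast for the next decade first rises and then falls, which is exactly the inflection point being asked about.

```python
import math

# Toy model: when does "no takeoff yet" start pushing the forecast down?
# Hypothesis A: takeoff is achievable; per-decade hazard ramps up logistically.
# Hypothesis B: takeoff never happens (hazard 0).
# All numbers here are illustrative, not anyone's actual estimates.

prior_A = 0.5                       # prior that takeoff is achievable at all

def hazard_A(decade):
    """P(takeoff during this decade | achievable, not yet happened)."""
    return 0.9 / (1.0 + math.exp(-(decade - 8)))   # slow ramp toward 0.9

posterior_A = prior_A
survival_A = 1.0                    # P(no takeoff so far | A)

for decade in range(30):
    p_next = posterior_A * hazard_A(decade)        # forecast for the coming decade
    print(f"decade {decade:2d}: P(takeoff in next 10y) = {p_next:.3f}")
    # Update on observing that the decade passed with no takeoff.
    survival_A *= 1.0 - hazard_A(decade)
    posterior_A = prior_A * survival_A / (prior_A * survival_A + (1 - prior_A))
```

Under these particular numbers the forecast peaks around decade 8 or 9 and declines afterward; where the peak lands depends entirely on how fast the hazard is assumed to ramp versus how much weight the "never happens" hypothesis starts with.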
Sure. But that isn't so much evidence that intelligence isn't a big deal as it is evidence that there may be very few evolutionary paths along which increasing intelligence also increases fitness. Intelligence takes a lot of resources, and most life-forms don't live in nutrient-rich, calorie-rich environments.
But there is other evidence to support your claim. There are other species that are almost as intelligent as humans (e.g. dolphins and elephants) that have not done much with it. One might reply that the ability to make tools is also useful, and that humans simply had better tool-making appendages. However, even this isn't satisfactory, since even separate human populations remained in near-stasis for hundreds of thousands of years, and the primary hallmarks of civilization, such as writing and permanent settlements, arose only a handful of times.
I don't think this is relevant to most of Benelliot's point. Upbringing, education, culture, and environment all affect eventual intelligence in humans because we are very malleable creatures. Ben's remark was about the difference between smart and dumb humans, not the difference between those genetically predisposed to be smarter or dumber (which seems to be what your remark is responding to).