Review
Notes from our experiment in more involved moderation:
I gave this a quick skim. I had a hard time figuring out what the point was. My current guess is that I'll want to move towards a policy where new users writing posts are required to be much more to-the-point about what claims their post is making and who the target audience is. Could you write a 3-sentence summary of what you're going for here?
The overall post doesn't obviously meet the quality bar of saying something new. At first glance, it seems mostly to be retreading some old discussion of how impactful AI will be.
Imagination is twitterpated by rapid progress because the incline of growth insinuates parabolic continuation. And on April 4, 2023 as I write this, nowhere is there a bigger blood rush of the minds than in AI. Will AI become sentient and wreak James Cameronian demise? Will AI untether us from our earthly bounds and either by way of Mars or Metaverse, rapidly evolve humanity and its dominion? Is death canceled by ChatGPT17? What are the moral implications of turning a sentient AI off? Should they vote? While these are fabulously interesting conversations to have, and truly without the air of patronization that comes with any sentence that begins with the word “while”, they miss the actual point.
What if AI works? That’s it.
What if AI works?
Consider this: Bloomberg is launching BloombergGPT. The essence of BloombergGPT is the digestion of the massive, specific dataset that Bloomberg has access to (specific datasets being ones that are not publicly available, as opposed to ChatGPT, which principally draws on the publicly available internet), revealing greater truths of investing…and then being able to make decisions autonomously.
So let’s say that works. Let’s say that BloombergGPT is a good investor. All a client would have to do then is buy the subscription, and it will not only pay for itself (the favorite sentence of so many a hamhock salesman) but function outside the bounds of normal human investing. It won’t have to spend time looking up P/E ratios and comparing them to like-priced competitors, or checking why the pivot table is all farkakte – it’ll…just…do…it. And it won’t stop doing it. And it will do it well. It will be working well 24/7. Or what if it’s not just a pretty good investor…what if it’s a great investor? Or the best investor?
What if it works?
So huzzah! Bloomberg is stoked because they’ve fulfilled their function and brought something monumental to market. They should be excited! And immediately, the few companies able to meet the capital costs are met with fabulous profit. The effects here on a few specific companies could be as quick as they are profound, meaning that in a very short period of time, there is a pretty intense consolidation of wealth.
Or look at sales and marketing overall. The eminently shadowy world of martech has long been enamored with the idea of automated appointment generation - as well it should be! If you don’t have to pay an SDR to make the dials, and you still get their appointments – appointments generated by a machine that is calling thousands of people at the same time…well…you’ve essentially cracked the code. Customer Acquisition Cost, or CAC, is one of the primary measures of any business. Let's look at some basic math.
Generally speaking, there are 4 large sets of variables that play into CAC:
Let's just look at number 2 there. In the vast majority of sales cycles there are 2 direct salespeople: the SDR (the person who sets the appointment) and the AE (the person who closes the deal). Based on my direct experience in 17 years of B2B and D2C tech sales and marketing, an average SDR can drive about 10 demos that actually occur every week. They may book about 20, of which half or so occur. Again, there are huge numbers of specific variables per business, but through the spooky throughlines of life it really does tend to settle around 10 demos per week per reasonable SDR.

The underpinning of that success is a reasonable number of outbound activities (principally calls and emails) into a pretty well-selected list of targets. Again, there are major organizational deltas here, but those deltas tend to be a result of specific talent rather than industry. A reasonable target for an SDR is 80 calls Monday through Thursday (Fridays usually suck for SDRs), resulting in roughly 3.5-4.5 hours per day actually talking with reasonable targets. Let's call that 16 hours of Talk Time per week per SDR, who averages around ~$80k once trained up (which usually takes another month on top of sales training time).

I line out the SDR side in more detail because the AE side is what most laypeople consider to be "sales people": AEs have a salary and get a commission based on wins. So here is the basic equation for the overall sales cycle:
$150 + $300 = $450. Yay! But wait…sometimes I lose deals too. That's right, my losses need to factor into my CAC…and I lose 60% of the time (at best).
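The arithmetic above can be made explicit. A minimal sketch, assuming (since the surrounding cost breakdown is elided) that the $150 is the SDR-side cost per completed demo and the $300 the AE-side cost per demo, with the 40% win rate just mentioned; all figures are illustrative:

```python
# Hypothetical CAC sketch using the figures from the example above.
sdr_cost_per_demo = 150.0   # assumed: SDR-side cost per completed demo
ae_cost_per_demo = 300.0    # assumed: AE-side cost per completed demo
cost_per_demo = sdr_cost_per_demo + ae_cost_per_demo   # $450 per demo

win_rate = 0.40             # "I lose 60% of the time (at best)"

# Losses have to be amortized over wins: every won deal also
# carries the cost of the demos that didn't close.
effective_cac = cost_per_demo / win_rate

print(f"cost per demo: ${cost_per_demo:,.0f}")   # $450
print(f"effective CAC: ${effective_cac:,.0f}")   # $1,125
```

At a 40% win rate, the $450 per-demo cost becomes an effective CAC of $1,125 per won deal – 2.5× the sticker figure – which is why the win rate dominates the raw per-demo numbers.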
So how does AI play out there? Not only would you potentially be bringing the cost of the SDR generation way down...you'd be impacting a factor that is the sneaky king of CAC...AE utilization.
If AEs are "taking" 10 demos a week, what are they doing the rest of the week? Prospecting? Working their follow-ups? Almost never. But if you can keep them busy by moving from a human-gated SDR system to an always-on one, the AE base salary remains consistent and your only scaling cost is hard-tethered to new revenue (meaning that it's super scalable).
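The scaling claim can be illustrated with a toy model. This is a sketch under invented assumptions: the SDR salary and demo rate come from the example above, while the AI subscription price is entirely hypothetical, not drawn from any real product:

```python
# Toy model: cost of adding appointment capacity.
# All numbers are hypothetical illustrations, not benchmarks.
import math

SDR_SALARY = 80_000        # from the example: ~$80k per trained SDR
DEMOS_PER_SDR_WEEK = 10    # from the example: ~10 demos/week per SDR
WEEKS_PER_YEAR = 50

def human_cost_per_demo(target_demos_per_week: int) -> float:
    """Each extra 10 demos/week means hiring another SDR."""
    sdrs_needed = math.ceil(target_demos_per_week / DEMOS_PER_SDR_WEEK)
    annual_cost = sdrs_needed * SDR_SALARY
    return annual_cost / (target_demos_per_week * WEEKS_PER_YEAR)

def ai_cost_per_demo(target_demos_per_week: int,
                     annual_subscription: float = 50_000) -> float:
    """An always-on AI SDR: a flat (hypothetical) subscription,
    so per-demo cost falls as volume scales."""
    return annual_subscription / (target_demos_per_week * WEEKS_PER_YEAR)

# Human cost per demo stays roughly flat as you scale, because
# headcount grows with volume; AI cost per demo shrinks instead.
for demos in (10, 50, 200):
    print(demos, round(human_cost_per_demo(demos), 2),
          round(ai_cost_per_demo(demos), 2))
```

The point of the sketch is the shape, not the numbers: human-gated appointment generation scales linearly in headcount, while a fixed-cost always-on system gets cheaper per appointment the harder you push it.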
Huuuuuuuuge victory! And all that model represents is getting the appointment; none of this even begins to discuss the potential outcome of AI actually closing deals as well.
So how does it play out then? This all sounds like fabulous success. Well, it is…other than the fact that we aren’t economically structured around a world where a few people can win that profoundly and that quickly. We want the greater bend of history to point in the direction of the invisible hand, not the gilded infinity gauntlet of pluto/autocrats who have money because their parents had money because their parents had money, and so on. Consolidation of wealth in and of itself creates inflation (see the last year or so) and usually finds a way to play a role in some general destabilization (also see the last year or so), but that’s all at the currently measurable macro level.

We don’t really know how to measure the effects of things that win their way into human adoption with great quickness. Things like overstimulation by way of advertising (although the coincidental rise in ADHD sure bears some interesting correlation), or the dissociative reality of going to school when all of information ever is available on a device you are, for some reason, not allowed to use and probably know how to employ better than the person being underpaid to teach/babysit you. We don’t have access to generations of societal iteration on the generally least wrong way to deal with these things.
I even believe that proper regulation will catch on too! It’ll have to. Whether that regulation is imposed on purpose by governmental entities or emerges naturally as a mechanic of the world itself, shit will be sorted out. The problem is that if AI works, the victors are going to receive inordinate spoils so quickly that it will potentially warp society around its economic spine. We are moving towards AI Scoliosis because we just never fathomed the real effects of this kind of tool.
And that’s really what AI is. A tool.
A big new fancy hammer. And while the questions around what happens if the hammer begins to wield itself are fascinating opportunities for mental exercise, the real question isn’t “Will the AI that perfects paperclip manufacturing destroy humanity in heaps of its untethered godlike powers of bent steel generation?” At least, that won’t be the question that the next crop of ill-equipped septuagenarian monuments to plastic surgery will fail to address in the 2024/2028 political cycle.
No.
The real question is:
What if AI works?
Just.
Works.