I like this dichotomy. I've been saying for a while that I don't think "companies that only commercialise existing models and don't do anything that pushes forward the frontier" are meaningfully increasing x-risk. This is a long and unwieldy statement - I prefer "AI product companies" as a shorthand.
For a concrete example, I think that working on AI capabilities as an upskilling method for alignment is a bad idea, but working on AI products as an upskilling method for alignment would be fine.
It's a great time to start an AI company. You can create a product in days that would have been unbelievable just two years ago. Thanks to AI, there will soon be a one-person billion-dollar company.
It's a terrible time to start an AI company. The singularity is fast approaching, and anyone talented enough to enter the field should go work in AI alignment or policy instead. Living a monastic lifestyle is better than being dead.
But wait! My plan is just to build software that helps people count calories by taking pictures of their meals. How does counting calories help bring about the end of the world?
In the past, any company worthy of being called an AI company would be training models to do things AI had never done before. But now it just means using the same pre-built AI models as everyone else, programming them in English. Calling a calorie-counting app an "AI company" and worrying about its effect on AI timelines sounds a lot like worrying about Korean car manufacturers becoming dominant because of the increasing number of Korean "car companies" that deliver cookies.
We need to distinguish between AI capabilities companies, which actually advance the state of the art in AI, vs. AI product companies, which merely use it.
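To make "merely use it" concrete, here is roughly what the entire AI core of that calorie counter could look like. This is a minimal sketch assuming the OpenAI Python client; the model name, prompt, and helper function are illustrative placeholders, not a claim about how any particular product is built.

```python
# The "AI product company" pattern: no model training, no new capabilities,
# just an English prompt sent to someone else's pre-built model.
# Assumes the OpenAI Python client (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def estimate_calories(image_url: str) -> str:
    """Ask a hosted vision model to estimate the calories in a meal photo."""
    response = client.chat.completions.create(
        model="gpt-4o",  # a pre-built model; the company trains nothing itself
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Estimate the total calories in this meal. "
                             "Reply with a number and a one-line breakdown."},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    )
    return response.choices[0].message.content
```

No training loop, no GPUs, no frontier research: the "AI" in this AI company is one English paragraph and someone else's model.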
So, my question:
Simple economics does tell us that, all else being equal, the more AI product companies there are, the more demand there is for AI capabilities, and the faster those capabilities will advance. But, at an individual level, should someone considering starting an AI product company shy away from it?
I don't know. Here are some arguments for and against:
Building AI Products is Unethical
Counterargument: Many believe LLMs will become commoditized: there will be so many vendors that none of them makes an economic profit. Competitive leaked or open-source LLMs are also likely to appear soon, and you will be able to run these yourself with very little value flowing to AI capabilities creators.
Counterargument: If you don't compete and leave your would-be competitor a monopoly, would it be any different? Monopolies can be great at investing in technology; just see Bell Labs.
Counterargument: That kind of dynamic takes years to play out. Word doesn't spread that quickly, and people don't change their plans overnight. More likely, that person builds a calorie-counting app anyway, loses to you after six months, and then goes back to paying the bills as an ordinary software engineer. Or as an AI alignment researcher.
Counter-counterargument: Base rates suggest that, if someone loses their first option of what to work on, AI capabilities will be a more common second choice than AI alignment. And as long as AI capabilities work pays more, this will be especially true among the entrepreneurial.
Building AI Products is Not Unethical
Counterargument: Power and money are addictive. OpenAI, DeepMind, and Anthropic were all founded by people who care deeply about AI safety. Look how that's turned out.
Counterargument: There may not be time to become rich and then fund AI safety work before the world ends.
Counterargument: Are you really going to only sell your product to people on the right side of the AI race? Will your investors let you?