> I expect that for most people, starting a new for-profit (or non-profit) AI alignment organization is likely to be net-negative for AI x-risk.
While there are some examples of this, such as OpenAI, I still find this claim to be rather bold. If no one were starting AI alignment orgs, we would still have roughly the same capabilities today, but only a fraction of the alignment research. Right now, over a hundred times more money is spent on advancing AI than on reducing its risks, so even a company spending half its resources on advancing capabilities and half on AI alignment seems net-positive to me.
Your concern is justified, however, so I will not proceed with any business plan without first consulting several experts in the field.
I would make sure to consult experts regarding donation strategy and which orgs to donate to, but the recipients would probably mostly be some of the orgs you mentioned, plus perhaps the Long-Term Future Fund.
I do think you make valid and reasonable points, and I appreciate and commend you for that.
Let's use 80,000 Hours' conservative estimate that only around $5B is spent on capabilities each year, and $50M on AI alignment. Suppose a new org with a $2B budget split it evenly between the two: the status quo of $5B on capabilities and $50M on alignment seems worse to me than $6B on capabilities and $1.05B on alignment.
A half-and-half approach in this case would roughly 20x the alignment research while increasing capabilities spending by only 20%.
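The arithmetic behind those multipliers can be checked with a quick script. Note that the $2B total budget for the hypothetical org, split evenly, is my assumption, not a figure from 80,000 Hours:

```python
# Baseline annual spending (80,000 Hours' conservative estimate).
capabilities = 5_000_000_000  # $5B/year on advancing AI capabilities
alignment = 50_000_000        # $50M/year on AI alignment

# Hypothetical new org with a $2B budget, split half and half (assumption).
org_budget = 2_000_000_000
new_capabilities = capabilities + org_budget / 2
new_alignment = alignment + org_budget / 2

print(new_capabilities / capabilities)  # 1.2  -> capabilities spending up 20%
print(new_alignment / alignment)        # 21.0 -> alignment spending up ~20x
```

So even under this conservative estimate, the relative boost to alignment dwarfs the relative boost to capabilities.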
This I agree with; I have some ideas, but will consult experts in the field before pursuing any of them.