I don't want to accelerate an arms race, and paying for access to GPT seems like a perfect way to be a raindrop in a dangerous flood. My current idea is to donate an equal amount monthly to MIRI. I'll view the price as $40 per month, with half going to AI safety research.
Is this indefensible? Let me know. GPT-4 is very useful to me personally and professionally, and familiarity with language models will also be useful if I have enough time to transition into an AI safety career, which I am strongly considering.
If it is a good idea, should we promote the offsetting strategy among people who are similarly conflicted?
I don't think OpenAI is funding-constrained in any real way at the moment, and using new AI systems for mundane utility seems pretty harmless (more from Zvi).
This is somewhat galaxy-brained thinking, but if GPT-4 generates enough revenue, perhaps it actually steers OpenAI execs towards slowing down? "If GPT-4 is already generating $X billion on its own, why risk hundreds of millions or billions of dollars more, and a potential safety disaster or PR crisis, to train GPT-5 ASAP?"
Or, even more galaxy-brained: if enough people pay for ChatGPT+ to get mundane utility out of the chatbot, OpenAI will be capacity-constrained, possibly forcing them to raise prices (or at least delay lowering them) and thereby price out some capabilities research that requires API use at scale.
Realistically though, I think the impact of paying for ChatGPT+ is minimal in either direction, even if everyone in your reference class also pays for it.