I recently came across this post by Kevin Lacker about narrow AI risks. TLDR: There are reasonable routes to apocalyptic AI far before we achieve AGI.
My default position has been extreme skepticism about the risks of AGI. I am generally in agreement with the Andrew Ng quote that "I don’t work on preventing AI from turning evil for the same reason that I don’t work on the problem of overpopulation on the planet Mars." I am still very skeptical of even the Kevin Lacker scenarios, but somewhat less so than I was before reading the post.
A lot of the AI risk discussion I've seen focuses on hypotheticals, theories of alignment, or distant-future, low-probability scenarios.
I'd like to ask...
This is not a great No bet at current odds even if you are certain the event will not happen. The market resolves Dec 31, which means you have to lock up your cash for roughly 9 months to earn about a 3% return. The best CDs are currently paying around 4-4.5% for 6mo-1y terms. So even for people who bought No at 96% it seems like a bad trade, since you're getting less than the effective risk-free rate, and you're not getting compensated for the additional idiosyncratic risk (e.g. Polymarket resolves to Yes because of shenanigans, Polymarket gets hacked, etc).
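To make the comparison concrete, here's a quick back-of-the-envelope sketch. The 0.97 No price, the 9-month lock-up, and the 4.25% CD APY are illustrative assumptions consistent with the figures above, not live quotes:

```python
# Rough comparison: buying No on a prediction market vs. parking the same
# cash in a CD for the same period. All numbers are illustrative assumptions.

def no_bet_annualized(price: float, months_to_resolution: float) -> float:
    """Annualized simple return from buying No at `price` if the market resolves No at $1."""
    raw_return = (1.0 - price) / price        # e.g. $0.97 -> $1.00 is ~3.1% over the holding period
    return raw_return * 12.0 / months_to_resolution

no_price = 0.97     # roughly "a 3% rate of return" at current odds
months = 9.0        # lock-up until the Dec 31 resolution
cd_apy = 0.0425     # mid-range of the quoted 4-4.5% CD rates

print(f"No bet: {no_bet_annualized(no_price, months):.2%} annualized")
print(f"CD:     {cd_apy:.2%} annualized")
# The No bet pays about the same as or less than the CD, while also carrying
# resolution and platform risk that the CD does not.
```

Under these assumptions the No position annualizes to roughly 4.1%, at or below the CD rate, before accounting for any of the idiosyncratic platform risk.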