I've read so many posts highlighting the dangers of AGI that I often feel terribly anxious about it. I'm pretty young, and the idea that a possible utopia is waiting for us yet seems to be slipping through our fingers kills me. But even more than that, I worry that I won't have the chance to enjoy much of my life; that the work I've put in now won't amount to much, and that the relationships I've cultivated will never really get the chance to grow over the decades that should be every human's right.
Even just earlier today, I was reading an article whe...
I think the general point he's making still stands. You can always choose to remove the Werewolf Contract of your own volition, then force any sort of fever dream or nightmare onto yourself.
The Golden Age also makes a point about the dangers of remaining unchanged. Orpheus, the wealthiest man in history, has modified his brain so that his values and worldview can never shift. This puts him in sharp contrast with Phaethon, the protagonist, whose whole arc is about shifting the public's rigid moral equilibrium to make important change happen. Orpheus, trapped in his morals, is as out of touch in Phaethon's era as a Catholic crusader would be in modern Rome.
I think the analogy of human militaries to ants is a bit flawed, for two main reasons.
1. Ants don't build AGI - Humans don't care about ants because ants are so uncoordinated in comparison and can't pose much of a threat. Humans, by contrast, can pose a significant threat to an ASI: they can build another ASI.
2. Ants don't collect gold - Humans, unlike ants, control a lot of important resources. If every ant nest were built on a pile of gold, you can be sure humans would actively seek out and kill ants. Not because we hate ants, but because we want their gold. An unaligned ASI will want our chip factories, our supply chains, our bandwidth, and so on, all of which we would be much better off keeping.
Thank you for posting this. Are there any opportunities for students about to graduate to apply themselves, particularly without a CS background? My undergraduate experience was focused on Business and IR (Cold War history, Sino-U.S. relations) before I pivoted my long-term focus to AI safety policy, and it's been difficult to find good entry points for EA work in this field as a new grad.
I've been monitoring 80,000 Hours and applying to research fellowships where I can, but I'm always looking for new positions. If you or anyone else knows of an org looking to onboard some fresh talent, I'd be happy to help.
Edit: Application submitted.