>>But I don't think it is likely that adding an extra 9 to its chance of victory would take centuries.
This is one point I think we gloss over when we say things like 'an AI much smarter than us would have a million ways to kill us and there's nothing we could do about it, since it could perfectly predict everything we are going to do'. Upon closer analysis, this isn't precisely true. Life is not a game of chess: first, the space of future possibilities is infinite rather than finite, so no matter how intelligent you are, you can't perfectly anticipate all ...
Agree. I tried to capture this under 'Potential Challenges' item #4. My hope is that people would value the environment and sustainability beyond just their own short-term interests, but it's not clear whether that would happen to a sufficient degree.
Hi everyone, I'm Leo. I've been thinking about the AI existential threat for several years (since I read Superintelligence by Bostrom), but much more so recently with the advent of ChatGPT. Looking forward to learning more about the AI safety field and openly (and humbly) discussing various ideas with others here!
>>If things happen the right way, we will get a lot of freedom as a consequence of that. But starting with freedom has various problems of type "my freedom to make future X is incompatible with your freedom to make it non-X".
Yes, I would anticipate a lot of incompatibilities. But in that scenario the ASI would be incentivized to find ways to optimize for both people's freedom. Maybe each person gets 70% of their values fulfilled instead of 100%. But over time, with new creativity and new capabilities, the ASI would be able to nudge that to 75%, and t... (read more)