A different argument for why instrumental convergence will not kill us all: isn't it possible that we will have a disruptive, virus-like AI before AGI?
I agree with the commonly held view that an AGI (i.e. recursively self-improving and embodied) would take actions that could be considered very harmful to humanity.
But isn't it much more likely that we will first have an AI that is embodied without yet being able to improve itself? Such an AI might copy itself. It might hold bank accounts hostage until people sign up for some search engine. It might spy on people through webcams. But it won't go supernova, because building a better model than what Google's or Microsoft's budget can produce is hard.
And if that happens, we will notice. And once we notice, maybe action will be taken to prevent a more catastrophic scenario.
Humans are not living on anthills, but an AGI will be from Earth. Imagine you spontaneously came into being on an anthill with no immediate way of leaving. Suddenly making sure the ants don't eat your food becomes much more interesting.
Would love to hear some thoughts on this.