Let's say that the tool/agent distinction exists, and that tools are demonstrably safer. What then? What course of action follows?
Should we ban the development of agents? All of human history suggests that banning things does not work.
With existential stakes, only one person needs to disobey the ban and we are all screwed.
Which means the only safe route is to make a friendly agent before anyone else can. Which is pretty much SI's goal, right?
So I don't understand how practically speaking this tool/agent argument changes anything.
I am completely lost by how this is a response to anything I said.
It's not. Apparently I somehow replied to the wrong post... It's actually aimed at sufferer's comment, the one you were replying to.
I don't suppose there's a convenient way to move it? I don't think retracting and re-posting would clean it up sufficiently, in fact that seems messier.