All of jonperry's Comments + Replies

Yes, you can create risk by rushing things. But you still have to be fast enough to outrun the creation of UFAI by someone else. So you have to be fast, but not too fast. It's a balancing act.

5 Monkeymind
If intelligence is the ability to understand concepts, and a super-intelligent AI has a super ability to understand concepts, what would prevent it (as a tool) from answering questions in ways that influence the user and affect outcomes, as though it were an agent?

Let's say that the tool/agent distinction exists, and that tools are demonstrably safer. What then? What course of action follows?

Should we ban the development of agents? All of human history suggests that banning things does not work.

With existential stakes, it takes only one person disobeying the ban for all of us to be screwed.

That means the only safe route is to build a friendly agent before anyone else builds an unfriendly one. Which is pretty much SI's goal, right?

So I don't understand how, practically speaking, this tool/agent argument changes anything.

1 Polymeron
Presumably, you build a tool-AI (or three) that will help you solve the Friendliness problem. This may not be entirely safe either, but given the parameters of the question, it beats the alternative by a mile.
1 jsteinhardt
I think the idea is to use tool AI to create safe agent AI.
3 A1987dM
Only if running too fast doesn't make it easier to screw something up, which it most likely does.