Vladimir_Nesov comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Isn't it too early to start solving this problem? There is a good chance SIAI won't even have a direct hand in programming the FAI.
That's what I've been told, but I'm not entirely convinced. Since there are so many possible timelines, and since fundamental breakthroughs are hard to predict, I think the problem deserves attention as soon as possible, if only so we know what to do if things start moving rapidly (an AGI team might not get many chances to recover from security mistakes).
I'll broaden my question a bit so that it applies to all people working on AGI and not just the SIAI.
Care to elaborate?
Why? It's not like SIAI is on a teleological track to be the one true organization to actually save the world. They have some first-mover advantage to be the focus of this movement, to the extent it's effective in gravitating activity their way. They are currently doing important work on spreading awareness. But if things catch up, others will start seriously working on the problem elsewhere.
By "things catching up," you mean awareness spreading, right? It doesn't seem like a stretch to guess that SIAI will continue to do a large share of that work.
There's no advantage associated with FAI programmers starting a second group if they know they'll get funded by SIAI and don't have any major disagreements with SIAI's philosophy.
That's not a rule that's strictly followed in practice.