Vladimir_Nesov comments on GiveWell interview with major SIAI donor Jaan Tallinn - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
These ideas might inform the exchange:
I think Holden is making the point that the work SIAI is trying to do (i.e., sorting out all the issues of how to make FAI) might be so much easier to do in the future, with the help of advanced narrow AI, that it's not really worth investing a lot in trying to do it now.
Note: for anyone else who'd been wondering about Eliezer's position on Oracle AI, see here.
...
A powerful machine couldn't give a human "significant power"?!? Wouldn't Page and Brin be counter-examples?
One problem with an unethical ruler is that they might trash some fraction of the world in the process of rising to power. For those who get trashed, what the ruler does afterwards may be a problem they are not around to worry about.
You mean you can't think of scenarios where an Oracle prints out complex human-readable designs? How about you put the Oracle into a virtual world where it observes a plan to steal those kinds of designs, and then ask it what it will observe next, just as the stolen plans are about to be presented to it?