Samantha_Atkins


Eliezer,

Do you actually believe it is possible for a mere human being to ever be 100% certain that a given AGI design will not lead to the destruction of humanity? I get the impression that you are forbidding yourself to proceed until you can do something that is likely impossible for any human intelligence. This universe does not offer such broad guarantees about consequences. I can't buy the notion that careful design of the AGI's initial conditions and of its starting learning algorithms is sufficient for the guarantee you seem to seek. Have I misconstrued what you are saying? Am I missing something?

I also don't get why "I need to beat my competitors" is even remotely a consideration when the result is an intelligence so far beyond the human level that it makes the entire competitive field utterly irrelevant. What does it really matter which person or team finally succeeds?