XiXiDu comments on David Deutsch on How To Think About The Future - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I just realized you were trying to make a different point here: that one can prove the behavior of computationally unpredictable systems. Reminds me of the following:
Sounds reasonable, but I have no idea to what extent one could prove "friendliness" while retaining a degree of freedom that would allow a seed AI to quickly recursively self-improve toward superhuman intelligence. Intuitively, it seems to me that the level of abstraction of a definition of "friendliness" will somehow be correlated with the capability of an AGI.