timtyler comments on Muehlhauser-Wang Dialogue - Less Wrong

24 Post author: lukeprog 22 April 2012 10:40PM


Comments (284)


Comment author: XiXiDu 23 April 2012 09:18:55AM 0 points

...read and understand Bostrom's AI-behaviour stuff...

What makes you believe that his expected utility calculation suggests that Bostrom's paper is worth reading?

...and then explain exactly why it is that an AI cannot have a fixed goal architecture.

He answered that in the interview.

Wang needs to show that AIs with unpredictable goals are somehow safe...

He answered that in the interview.

He wrote that AIs with fixed goal architectures can't be generally intelligent, and that AIs with unpredictable goals can't be guaranteed to be safe, but that we have to do our best to educate them and restrict their experiences.

Comment author: timtyler 23 April 2012 07:19:09PM * 2 points

...and then explain exactly why it is that an AI cannot have a fixed goal architecture.

He answered that in the interview.

Yes, but the answer was:

If intelligence turns out to be adaptive (as believed by me and many others), then a “friendly AI” will be mainly the result of proper education, not proper design. There will be no way to design a “safe AI”, just like there is no way to require parents to only give birth to “safe baby” who will never become a criminal.

...which is pretty incoherent. His reference for this appears to be himself, here and here. That material is also not very convincing. No doubt critics will find the section on "AI Ethics" in the second link revealing.