Warrigal comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong

Post author: AnnaSalamon 01 December 2009 01:42AM




Comment author: whpearson 01 December 2009 11:01:00PM 5 points [-]

I really like what SIAI is trying to do, and the spirit it embodies.

However, I am getting more skeptical of any projections or projects that are not based on good old-fashioned scientific knowledge (my own included).

You can make scientific progress toward AI if you copy human architecture to some extent, by making predictions about how the brain works and organises itself. However, I don't see how we can hope to make significant progress on non-human AI. How will we test whether our theories are correct or on the right path? For example, what evidence from the real world would convince SIAI to abandon the search for a fixed decision theory as a module of the AI? And why isn't SIAI looking for that evidence, to make sure it isn't wasting its time?

For every Einstein who makes the "right" cognitive leap, there are probably orders of magnitude more Kelvins who do things like predict that meteors provide fuel for the sun.

How are you going to winnow out the wrong ideas if they are consistent with everything we know, especially if they are pure mathematical constructs?

Comment author: [deleted] 03 December 2009 02:56:53AM 2 points [-]

If a wrong idea is both simple and consistent with everything you know, it cannot be winnowed out. You have to either find something simpler or find an inconsistency.
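The point above has a simple Bayesian reading: if two hypotheses both predict all the data observed so far, the data contributes nothing to telling them apart, and their posterior ratio collapses to their prior ratio (e.g. a complexity prior). A minimal toy sketch, assuming a hypothetical description-length prior of 2^(-k) and 0/1 likelihoods (this is an illustration, not anyone's actual method):

```python
# Toy Bayesian model comparison: two hypotheses consistent with all data
# can only be separated by their complexity priors, not by the data.

def posterior(hypotheses, data):
    """Normalised posteriors over hypotheses given observed data.

    hypotheses: dict name -> (description_length_k, predict_fn)
    data: list of (input, observed_output) pairs
    """
    weights = {}
    for name, (k, predict) in hypotheses.items():
        prior = 2.0 ** (-k)                      # complexity prior
        consistent = all(predict(x) == y for x, y in data)
        weights[name] = prior * (1.0 if consistent else 0.0)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Two hypotheses that make identical predictions on everything seen so far:
hyps = {
    "simple":  (3,  lambda x: x * 2),   # short description
    "complex": (10, lambda x: x * 2),   # same predictions, longer description
}
data = [(1, 2), (2, 4), (5, 10)]        # consistent with both

post = posterior(hyps, data)
# The posterior ratio equals the prior ratio, 2**(10-3) = 128:
# no amount of this data winnows out the "complex" hypothesis;
# only a simpler rival or a new inconsistency can.
```

This matches the comment's two escape routes: "find something simpler" changes the prior ratio, while "find an inconsistency" zeroes out one likelihood.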