PhilGoetz comments on 30th Soar workshop - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I agree with all of your summaries, so any readers should partially discount the many occurrences of "so far as I can tell." :)
How do you respond to the claim that the few researchers who are working directly on very general, Solomonoff-approximating AI systems are, when put together, dangerous?
Can you list some? If you mean AIXI-like systems, I see them as a new type of complexity theory that helps us understand the problem space, not as a cognitive architecture that might be used in an AGI. Nothing using such an exhaustive approach could operate in the real world.
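To make the "exhaustive approach" point concrete: Solomonoff-style induction assigns weight to every program consistent with the observed data, and the number of candidate programs grows exponentially with program length. A toy count (my own illustration, not anyone's actual AIXI implementation) shows the blowup:

```python
# Toy illustration: a Solomonoff-style inductor must, in principle,
# consider every binary program up to some length. The candidate
# count doubles with each extra bit, which is why exhaustive
# enumeration is intractable for real-world problems.

def num_programs_up_to(max_len):
    """Count all binary strings (candidate programs) of length 1..max_len."""
    return sum(2 ** n for n in range(1, max_len + 1))

for max_len in (8, 16, 32, 64):
    print(max_len, num_programs_up_to(max_len))
```

Even at 64-bit programs the count exceeds 10^19; practical systems can only approximate this search, which is the substance of the objection above.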
Sorry for the slow response. Solomonoff himself, until he died recently. Marcus Hutter, who was implementing an AIXI approximation, last I heard. Eray Ozkural, implementing Solomonoff's ideas. Sergey Pankov, implementing an AIXI approximation.