dxu comments on A forum for researchers to publicly discuss safety issues in advanced AI - Less Wrong Discussion

Post author: RobbBB 13 December 2014 12:33AM

Comment author: dxu 16 December 2014 03:08:34AM

You are still assuming that infinite systems count as simple versions of real-world finite systems.

That's not just an assumption; that's the null hypothesis, the default position. Sure, you can challenge it if you want, but if you do, you're going to have to provide some evidence for why you think there will be a qualitative difference. And even if there is some such difference, it's still unlikely that we're going to get literally zero insights about the problem from studying AIXI. That's an extremely strong absolute claim, and absolute claims are almost always false. Ultimately, if you're going to criticize MIRI's approach, you need to provide some sort of plausible alternative, and right now, unfortunately, there doesn't seem to be one. As far as I can tell, AIXI is the best bet.
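For concreteness, the idealized agent at issue is Hutter's AIXI, which at each step picks the action maximizing expected total reward under a Solomonoff-style mixture over all programs for a universal Turing machine. The rendering below follows the standard presentation in Hutter (2005); the notation is illustrative, not the exact form from any one paper.

```latex
% AIXI's action selection at time t (after Hutter, 2005).
% U is a universal Turing machine, m is the planning horizon,
% a_i are actions, o_i r_i are observation/reward pairs, and
% \ell(q) is the length of program q.
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[\, r_t + \cdots + r_m \,\bigr]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum ranges over every program q consistent with the interaction history, which is what makes AIXI incomputable; the disagreement in this thread is over whether that uncomputable idealization transfers any insight to finite, resource-bounded systems.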

Comment author: TheAncientGeek 17 December 2014 01:13:08PM

That's not just an assumption; that's the null hypothesis, the default position. Sure, you can challenge it if you want, but if you do, you're going to have to provide some evidence for why you think there will be a qualitative difference.

I have already pointed out that the best AI systems currently in existence are not cut-down infinite systems.

And even if there is some such difference, it's still unlikely that we're going to get literally zero insights about the problem from studying AIXI. That's an extremely strong absolute claim, and absolute claims are almost always false.

Something doesn't have to be completely worthless to be suboptimal.