D2AEFEA1 comments on Raising safety-consciousness among AGI researchers - Less Wrong

Post author: lukeprog 02 June 2012 09:39PM

Comment author: private_messaging 03 June 2012 03:24:43PM *  5 points

Ohh, that's easily the area where you guys can do the most harm, by associating the safety concern with crankery, for as long as you look like cranks without realizing it.

Speaking of which, invoking complicated concepts you poorly understand is a sure-fire way to make it clear you don't know what you're talking about. It works great for impressing people who understand those concepts even more poorly, or who are very unconfident in their own understanding, but it won't work on competent experts.

A simple example [of how not to promote your beliefs]: the idea that Kolmogorov complexity or Solomonoff probability favours the many worlds interpretation because it is 'more compact' [not having any 'observer']. Why it's wrong: if you are seeking the lowest-complexity description of your input, your theory also needs to somehow locate you within whatever stuff it generates (hence an appropriate penalty for something really huge like MWI). Why it's stupid: because if you drop that requirement, then an iterator through all possible physical theories becomes the lowest-complexity 'explanation', and we're back to square one. How it affects other people's opinion of your relevance: very negatively, in my case. edit: To clarify, the argument is bad, and I'm not even getting into details such as non-computability, our inability to represent theories in their most compact form (so we are likely to pick not the most probable theory but the one we can compress more easily), machine/language dependence, etc.
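To make the 'iterator' point concrete, here's a minimal Python sketch (purely illustrative; the `run` interpreter is a stand-in I'm assuming, not any real machine model). A dovetailer that eventually runs every program is itself only a few lines long, so its description length is a tiny constant; among what it generates is every computable universe, ours included. That's why 'shortest program' with no observer-location requirement is degenerate.

```python
from itertools import count

def all_programs():
    """Yield every finite bitstring, shortest first: "", "0", "1", "00", ..."""
    yield ""
    for n in count(1):
        for i in range(2 ** n):
            yield format(i, "0{}b".format(n))

def run(prog, steps):
    """Stub interpreter (an assumption for this sketch): advance `prog` by up
    to `steps` steps on some fixed universal machine. Its output doesn't
    matter here; what matters is how short the whole enumerator is."""
    pass

def dovetail(max_rounds):
    """Interleave all programs so each eventually gets unbounded runtime.
    This few-line enumerator 'generates' every computable universe, so by
    raw description length it would beat any actual physical theory."""
    gen = all_programs()
    running = []
    for round_no in range(1, max_rounds + 1):
        running.append(next(gen))      # admit one new program per round
        for prog in running:
            run(prog, steps=round_no)  # give every admitted program more steps

dovetail(max_rounds=100)
```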

edit: Another issue: there was the mistake with the phases in the interferometer. A minor mistake, maybe (or maybe the factor of i was confused with a phase of 180°, in which case it is a major misunderstanding). But it is one that people who refrain from talking about topics they don't understand are exceedingly unlikely to make (it's precisely the kind of thing you double-check). Not being sloppy with MWI, Kolmogorov complexity, etc. is easy: you just need to study what others have concluded. Not being sloppy with AI is a lot harder. Being less biased won't by itself make you significantly less sloppy.
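For reference, the phase bookkeeping is a two-line matrix calculation. A sketch, assuming the standard convention where a 50/50 beamsplitter leaves the transmitted amplitude unchanged and multiplies the reflected amplitude by i (a 90° phase, not 180°):

```python
import numpy as np

# Standard 50/50 beamsplitter convention (assumed here): transmitted
# amplitude unchanged, reflected amplitude picks up a factor of i
# (a 90-degree phase shift, not 180).
B = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

photon = np.array([1, 0])   # photon enters port 0
out = B @ B @ photon        # Mach-Zehnder: two beamsplitters, equal arm lengths

print(np.abs(out) ** 2)     # -> [0. 1.]: everything exits port 1
# The doubly-reflected path gets i*i = -1 (that's where 180 degrees comes
# from: two reflections, not one), cancelling the doubly-transmitted path
# at port 0. Mix up i with a 180-degree phase and you predict the photon
# at the wrong detector.
```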

Comment author: D2AEFEA1 04 June 2012 10:28:47AM *  0 points

Most of this seems unrelated to what the OP says. Are you sure you posted this in the right place?

Comment author: private_messaging 04 June 2012 04:44:27PM *  -2 points

Yup. The MWI stuff is just a good local example of how not to justify what you believe. They're doing with AI the same thing Eliezer did with MWI: trying to justify beliefs they hold for not very rational reasons using advanced concepts they poorly understand, which works on non-experts only.