private_messaging comments on Practical tools and agents - Less Wrong

3 Post author: private_messaging 12 May 2012 09:42PM


Comments (7)


Comment author: private_messaging 13 May 2012 07:36:47AM *  2 points

The problem is that it really is utter and complete bullshit. I really do think so. On the likelihood of convincing anyone: there is one data point: someone called it bullshit. That's probably all the impact that could possibly be made (unless one is speaking from a position of power).

By technobabble, I do mean it as used in science fiction when something has to be explained, done with great dedication (more along the lines of the wiki article I linked).

edit: e.g. you have an animalist (desire-based) intuition of what the AI will want to do - obviously the AI will want to make its prediction come true in the real world (it well might, if it is a mind upload). That doesn't sound very technical. You replace 'want' with 'utility', replace a few other things with technical-looking equivalents, and suddenly it sounds technical to such a point that experts don't understand what you are talking about, but don't risk assuming that you are talking nonsense rather than badly communicating some sense.

Comment author: Luke_A_Somers 16 May 2012 10:38:43AM 0 points

Okay... but... if you're using a utility-function-maximizing system architecture, that is a great simplification of the system, one that gives a clear meaning to 'wanting' things, in a way that it doesn't have for neural nets or whatnot.
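To make that concrete, here is a minimal sketch (not anyone's actual proposed architecture, just the textbook abstraction) of what "wanting" means in a utility-maximizing agent: the agent "wants" whatever its utility function scores highest over predicted outcomes. The `predict` and `utility` functions here are hypothetical toy stand-ins.

```python
def choose_action(actions, predict, utility):
    """Pick the action whose predicted outcome has the highest utility.

    This is the sense in which a utility-maximizing agent 'wants' things:
    its behavior is literally defined as argmax over utility of outcomes.
    """
    return max(actions, key=lambda a: utility(predict(a)))

# Toy example: outcomes are distances to a goal; utility prefers being closer.
predict = {"left": 3, "right": 1, "stay": 2}.get
utility = lambda distance: -distance

best = choose_action(["left", "right", "stay"], predict, utility)
print(best)  # → right
```

The hard part, as the rest of the comment notes, is that for a general intelligence the `utility` function itself would have to encode everything we care about, which is where all the difficulty lives.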

The mere fact that the utility function to be specified has to be far, far more complex for a general intelligence than for a driving robot doesn't change that. The vagueness is a marker for difficult work still to be done, not something they're implying they've already done.