shminux comments on Practical tools and agents - Less Wrong

3 Post author: private_messaging 12 May 2012 09:42PM


Comment author: shminux 12 May 2012 10:19:01PM 2 points

If you want your post to have some substance, you ought to address the issue of a "practical tool" vs. an AI oracle and why the former is less dangerous. HK had a few points about that. Or maybe I don't grasp your point about the difference in the utility function.

(One standard point is that, for an Oracle to be right, it does not have to be a good predictor; it can instead be a good modifier. So presumably you want to show that your approach does not result in a feedback loop.)
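The feedback-loop worry above can be made concrete with a toy sketch (my own construction, not from the post, with hypothetical names like `outcome` and `self_confirming_prediction`): an "oracle" whose announced prediction partly determines the outcome it is predicting can be perfectly "right" merely by steering the world toward its own output.

```python
# Toy illustration of a self-confirming oracle: the announced
# prediction feeds back into the world state being predicted.

def outcome(prediction, base=0.3, influence=0.5):
    """World state partly determined by the announced prediction."""
    return base + influence * prediction

def self_confirming_prediction(base=0.3, influence=0.5, steps=50):
    """Iterate announce -> observe until the prediction is a fixed point."""
    p = 0.0
    for _ in range(steps):
        p = outcome(p, base, influence)
    return p

p = self_confirming_prediction()
# At the fixed point p = base / (1 - influence) = 0.6, the oracle's
# prediction matches reality, but only because announcing it moved
# reality to match. Accuracy alone cannot tell this apart from a
# passive predictor.
```

The point of the sketch is that "is the Oracle accurate?" is the wrong test on its own; one would also need to show the `influence` term is effectively zero.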

Comment author: private_messaging 13 May 2012 07:18:34AM 4 points

That is meant to be informative to those wondering what Holden was talking about. I do not know what you mean by 'some substance'.

edit: Actually, okay, I should be less negative. It is probably a case of accidental/honest self-deception here, and the technobabble arose from an honest attempt to communicate the intuitions as best one can. You approach the problem from one direction: how do we make a safe oracle out of some AGI model that runs in your imagination, reusing the animalism module to predict it? Well, you can't. That's quite true! However, the actual software uses branches and loops and arithmetic; it is not run on the animalism module of your brain, it is run on a computer. There is a utility function, and there is the solver which finds the maximum of it (which it just does; it is not trying to maximize yet another utility ad infinitum, so please don't model it using animalism). Together they can work like the animalist model, but the solver does NOT work the way the animalist model does.

edit: apparently 'animalism' is not exactly the word I want. The point is, we have a module in our brain made for predicting other agents of a very specific type (mammals), and it is a source of some of the intuitions about AI.
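The "utility function plus solver" decomposition described above can be sketched in a few lines (a minimal illustration of my own, with stand-in names, not code from the post): the solver is just a maximization routine over candidate actions, with no further goal behind it.

```python
# Minimal sketch of the utility-function-plus-solver decomposition:
# the solver is plain maximization, implemented with branches, loops,
# and arithmetic. It computes a maximum; it does not "want" anything.

def utility(action):
    """Explicit utility function over actions (a stand-in example)."""
    return -(action - 3) ** 2  # peaked at action = 3

def solver(candidate_actions, utility_fn):
    """Scan the candidates and return the one with the highest utility."""
    return max(candidate_actions, key=utility_fn)

best = solver(range(10), utility)
# best == 3: the solver found the maximizer of the given function,
# without there being any second utility function driving the solver.
```

The design point is that all of the agent's "preferences" live in `utility`; swapping in a different function changes what the same solver picks, which is why modeling the solver itself as a desiring agent misleads.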

former post:

Ultimately, I figured out what is going on here. Eliezer and Luke are two rather smart technobabblers. That really is all there is to it. I am not going to explain anything about the technobabble; it is not worth it. Technobabble is what results when one thinks in terms of communication-level concepts rather than reasoning-level concepts within a technical field. Technobabble can be, and has been, successfully published in peer-reviewed journals (see the Bogdanov affair) and, sadly, can even be used to acquire a PhD.

The Oracle AI as defined here is another thing defined in terms of 'utility' as the term is used on LW, and it has nothing to do with the solver component of agents as currently implemented.

The bit about the predictor having an implicit goal to make the world match its predictions is utter and complete bullshit, not even worth addressing. It arises from thinking in terms of communication-level concepts such as 'should' and 'want', which can be used to talk about AI but cannot be used to reason about AI.

Comment author: shminux 13 May 2012 07:31:38AM 1 point

While I understand your frustration, in my experience, you will probably get better results on this forum with reason rather than emotion.

In particular, when you say

Eliezer and Luke are two rather smart technobabblers.

you probably mean something different from the sense used on sci-fi shows.

Similarly,

The implicit goal about world having to match predictions is utter and complete bullshit not even worth addressing.

comes across as a rant, and so is unlikely to convince your reader of anything.

Comment author: private_messaging 13 May 2012 07:36:47AM 2 points

The problem is that it really is utter and complete bullshit. I really do think so. As for the likelihood of convincing anyone: there is now a data point that someone called it bullshit. That is probably all the impact that can be made (unless one is speaking from a position of power).

By technobabble I do mean the kind used in science fiction when something has to be explained, done with great dedication (more along the lines of the wiki article I linked).

edit: e.g. you have an animalist (desire-based) intuition of what the AI will want to do: obviously the AI will want to make its prediction come true in the real world (it well might, if it is a mind upload). That doesn't sound very technical. You replace 'want' with 'utility', replace a few other things with technical-looking equivalents, and suddenly it sounds technical to the point that experts don't understand what you are talking about, yet don't risk assuming that you are talking nonsense rather than badly communicating some sense.

Comment author: Luke_A_Somers 16 May 2012 10:38:43AM 0 points

Okay... but if you're using a utility-function-maximizing system architecture, that is a great simplification of the system, one that really gives a clear meaning to 'wanting' things, in a way that it doesn't have for neural nets or whatnot.

The mere fact that the utility function to be specified has to be far, far more complex for a general intelligence than for a driving robot doesn't change that. The vagueness is a marker of difficult work still to be done, not something they're implying they've already done.