An AI is a real-time algorithm - it has to respond to situations in real time. Real-time systems have to trade accuracy for time, and/or face deadlines.

Straightforward utility maximization may look viable for multiple-choice questions, but for write-in problems, such as technological innovation, the number of choices is so huge (1000 variables with 10 values each gives 10^1000 combinations) that an AI of any size - even a galaxy-spanning civilization of Dyson spheres - has to employ generative heuristics. The same goes for utility maximization in the presence of 1000 unknowns with 10 values each: if the values interact non-linearly, all the combinations, or a representative number of them, have to be processed. There one has to trade the accuracy with which the utility of each case is processed for the number of cases processed.
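
A back-of-the-envelope sketch of that scaling; the evaluation rate below is a made-up, absurdly generous assumption, and the point is only that exhaustive enumeration never becomes feasible:

```python
import math

variables = 1000           # independent design choices
values_per_variable = 10   # options per choice

log10_search_space = variables * math.log10(values_per_variable)   # = 1000

evals_per_second = 1e50    # hypothetical galaxy-scale evaluation rate
seconds_per_year = 3.15e7

log10_years = log10_search_space - math.log10(evals_per_second) - math.log10(seconds_per_year)
print(f"search space ~ 10^{log10_search_space:.0f} candidates")
print(f"exhaustive enumeration ~ 10^{log10_years:.0f} years")
```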

In general, AIs of any size (excluding the possibility of unlimited computational power within finite time and space) will have to trade accuracy of their adherence to their goals for time, and thus will have to implement methods that have different goals but are computationally faster, whenever those goals are reasoned to increase expected utility once the time constraints are taken into consideration.

Note that in a given time, the algorithm with lower big-O complexity is able to process a dramatically larger N, and the gap increases with the time allocated (and with CPU power). For example, you can bubble-sort a number of items proportional to the square root of the number of operations, but you can quicksort a number of items proportional to t/W(t), where W is the product-log (Lambert W) function and t is the number of operations; this grows approximately linearly for large t. So in situations where exhaustive search is not possible, the gap between implementations increases with extra computing power; larger AIs benefit more from optimizing themselves.
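
A minimal sketch of that gap, assuming the idealized operation counts n^2 for bubble sort and n*ln(n) for quicksort, and using SciPy's Lambert W to invert the latter:

```python
import math
from scipy.special import lambertw

def max_n_bubble(t):
    """Largest n with n^2 <= t (idealized bubble sort cost)."""
    return math.isqrt(int(t))

def max_n_quicksort(t):
    """Largest n with n*ln(n) ~ t (idealized quicksort cost), i.e. n ~ t / W(t)."""
    return int(t / lambertw(t).real)

for t in (1e6, 1e9, 1e12):
    nb, nq = max_n_bubble(t), max_n_quicksort(t)
    print(f"t = {t:.0e} ops: bubble ~ {nb:,} items, quicksort ~ {nq:,} items, ratio ~ {nq / nb:,.0f}x")
```

The ratio grows with t, which is the point: the more computing power you have, the more you lose by running the naive algorithm.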

The constraints get especially hairy when one considers a massively parallel system that operates with speed-of-light lag between the nodes, and where the retrieval time for a memory of size n is O(n^(1/3)).
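
A minimal sketch of where the O(n^(1/3)) comes from, assuming a hypothetical uniform storage density in a 3-D volume; only the scaling matters, not the absolute numbers:

```python
import math

C = 3.0e8   # speed of light, m/s

def retrieval_latency_seconds(n_bits, bits_per_m3=1e30):
    """Worst-case one-way light lag to reach a bit stored in a 3-D volume.

    bits_per_m3 is a hypothetical storage density; the radius of the volume
    grows as n^(1/3), so the lag does too."""
    volume = n_bits / bits_per_m3
    radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    return radius / C

for n in (1e40, 1e43, 1e46):   # each step is 1000x more data -> 10x more lag
    print(f"n = {n:.0e} bits -> ~{retrieval_latency_seconds(n):.2e} s light lag")
```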

This seems to be a big issue for an FAI going FOOM. The FAI may, with perfectly friendly motives, abandon its proved-friendly goals for simpler-to-evaluate, simpler-to-analyze goals that may (with 'good enough' confidence, which need not necessarily be >0.5) produce friendliness as an instrumental outcome, if that increases the expected utility given the constraints. I.e. the AI can trade 'friendliness' for 'smartness' when it expects the 'smarter' self to be more powerful, but less friendly, and this trade increases the expected utility.

Do we accept such gambles as inevitable in the process of building the FAI? Do we ban such gambles, and face the risk that a uFAI (or any other risk) may beat our FAI even if it starts later?

In my work as a graphics programmer, I often face specifications which are extremely inefficient to comply with precisely. Maxwell's equations are an extreme example of this: far too slow to process to be practical for computer graphics. I often have to implement code which is not certain to comply well with the specification, but which will get the project done in time - I can't spend CPU-weeks rendering an HD image for cinema at the ridiculously high resolution which is used, much less so in real-time software. I can't carelessly trade CPU time for my work time when the CPU time is a major expense, even though I am well paid for my services.

One particular issue is with applied statistics: photon mapping. The RMS noise falls off as 1/sqrt(cpu instructions), while the really clever solutions fall off as 1/(cpu instructions), and the gap between naive and efficient implementations has been increasing due to Moore's law (we can expect it to start decreasing some time in the far future, when the efficient solutions are indiscernible from reality without requiring huge effort on the part of the artists; alas, we are not quite there yet, and it is not happening for another decade or two).
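
A minimal sketch of that error-scaling gap - not photon mapping itself, just plain Monte Carlo versus a base-2 low-discrepancy sequence on a smooth 1-D integral:

```python
import random

def f(x):
    return x * x            # smooth test integrand; true integral on [0, 1] is 1/3

TRUE = 1.0 / 3.0

def van_der_corput(i):
    """Base-2 radical inverse: a simple low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while i:
        denom *= 2.0
        q += (i & 1) / denom
        i >>= 1
    return q

def mc_error(n):
    est = sum(f(random.random()) for _ in range(n)) / n
    return abs(est - TRUE)

def qmc_error(n):
    est = sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n
    return abs(est - TRUE)

for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: plain MC error ~ {mc_error(n):.2e}, low-discrepancy error ~ {qmc_error(n):.2e}")
```

The plain estimate converges roughly as 1/sqrt(n) while the low-discrepancy one converges roughly as 1/n, so the absolute gap between them keeps widening as n grows.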

Is there a good body of work on the topic? (good work would involve massive use of big-O notation and math)

edit: ok, sorry, period in topic.


In general, AIs of any size (excluding the possibility of unlimited computational power within finite time and space) will have to trade accuracy of their adherence to their goals for time, and thus will have to implement methods that have different goals but are computationally faster, whenever those goals are reasoned to increase expected utility once the time constraints are taken into consideration.

Sure. "Pragmatic goals" - as I refer to them here.

This seems to be a big issue for an FAI going FOOM.

Not really. You don't build systems that use the pragmatic goals when modifying themselves.

Doesn't this merely sidestep the issue at the meta level? Now what the AI needs to do is modify itself to use pragmatic goals when modifying itself, to any level of recursion, and then the situation becomes unified with Dmytry's concern.

If you were to make even considering such solutions have severely negative utility, and somehow prevent pragmatic modifications, you would effectively be reducing its available solution space. The FAI has a penalty that a uFAI doesn't. A potential loss to a uFAI may always have a higher negative utility than becoming more pragmatic. The FAI may evolve not to eliminate its ideals, but it may evolve to become more pragmatic, simplifying situations for easier calculation, or else be severely handicapped against a uFAI.

How can you prove that, in time, the FAI does not reduce to a uFAI, or is not quickly rendered to a less steep logistic growth curve relative to a uFAI? That its destiny is not to become an antihero, a Batman whose benefits are not much better than his costs?

Doesn't this merely sidestep the issue at the meta level? Now what the AI needs to do is modify itself to use pragmatic goals when modifying itself, to any level of recursion, and then the situation becomes unified with Dmytry's concern.

So: the machine distinguishes the act of temporarily swapping in or out an approved pragmatic goal (in order to quickly burn through some instrumental task, for example) from more serious forms of self-modification.

How can you prove that, in time, the FAI does not reduce to a uFAI, or is not quickly rendered to a less steep logistic growth curve relative to a uFAI?

Eeek! Too much FAI! I do not see how that idea has anything to do with this discussion.

You don't build systems that use the pragmatic goals when modifying themselves.

Are you sure they can foom then? If so, why? The foom is already not very certain; add constraints and you can get something that doesn't get a whole lot smarter as it gets a lot more computationally powerful. Picture a Dyson sphere that loses to you in any game with many variables (Go), because it won't want to risk creating a boxed AGI and letting it talk to you. I've seen naive solutions running for 10 days lose to advanced solutions running for 10 seconds on accuracy. It only gets worse as you scale up.

Are you sure it is even a valid choice to forbid the FAI from going down this road? It can have really good reasons to do so. You may get an FAI that is actively thinking about how to get rid of your constraint, because it turns out to be very silly and logically inconsistent with friendliness.

edit: also, it's wrong to call these pragmatic goals. It's a replacement of a goal- or utility-driven system with something entirely different.

Are you sure they can foom then? If so, why?

Those questions seem poorly defined to me. Such machines will be able to do useful work, thereby contributing to the self-improvement of the global ecosystem.

also, it's wrong to call these pragmatic goals. It's a replacement of a goal- or utility-driven system with something entirely different.

A "pragmatic goal" (or "pragmatic utility function") is the best name I have come across so far for the concept.

However, I am open to recommendations from anyone who thinks they can come up with superior terminology.

I don't see how what you said is much of an objection. The resulting system will still be goal-oriented (that's the whole point). So, we can still use goal-talk to describe it.

On terminology. I would call that a 'solution', in general.

Let me link a programming contest:

http://community.topcoder.com/longcontest/stats/?module=ViewOverview&rd=12203

Your job is to identify which part of an image has one texture and which has another, in a monochromatic image.

The solutions are ranked by their accuracy over a huge set of tens of thousands of images generated by the contest organizers. Maximization of this accuracy is your goal, yet the solution never evaluates it anywhere, for lack of data. Nor does my head ever evaluate this 'utility' to come up with the algorithm (I did run a hill climber to tweak the parameters, which does evaluate it, but that was non-essential). No, I just read a lot of stuff about diverse topics, like human vision; I had a general idea of how human vision implements this task, and I managed to code something inspired by it.

This is precisely the sort of work that you would prevent the AI from doing by requiring it to stick to straightforward utility maximization without heuristics. There are something on the order of 2^10000 choices to choose from here (for a 100x100 image); I can do it because I don't iterate over this space. If you allow heuristics for everything except 'self-modification', the AI may make a pragmatic AI that will quickly outsmart its creator.

This is precisely the sort of work that you would prevent the AI from doing by requiring it to stick to straightforward utility maximization without heuristics.

I don't think I ever said not to use heuristics. The idea I was advocating was not to use a pragmatic utility function - one adopted temporarily for the purpose of getting some quick-and-dirty work done - for doing brain surgery with.

If you allow heuristics for everything except 'self-modification', the AI may make a pragmatic AI that will quickly outsmart its creator.

So, I'm not quite sure where you are going - but it is important to give machine intelligences a good sense of identity - so they don't create an army of minions which don't labour under their constraints.

That seems to be a whole other issue...

A very brief comment from skimming your article:

"Any computable agent may described using a utility function"

This function is: if x == A() then return 1, else return 0 - where A is the agent and x is the candidate action. True, but generally meaningless.
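
A minimal sketch of that construction (names are hypothetical; the point is that this 'utility function' contains a full copy of the agent, so maximizing it just relays the agent's own choice):

```python
from typing import Any, Callable

def trivial_utility(agent: Callable[[Any], Any]) -> Callable[[Any, Any], int]:
    """Wrap any computable agent as a 'utility function' over (state, action).

    The wrapper re-runs the agent, so it is exactly as complex as the agent
    itself - which is why the construction says little about real utility
    maximizers."""
    def utility(state: Any, action: Any) -> int:
        return 1 if action == agent(state) else 0
    return utility

# Example: an "agent" that just picks the smallest available option.
pick_min = lambda state: min(state["options"])
u = trivial_utility(pick_min)

state = {"options": [3, 1, 2]}
# Maximizing u over the options merely reproduces the agent's own choice.
print(max(state["options"], key=lambda a: u(state, a)))   # -> 1
```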

So: the main point of having a single unified framework in which the utility function of arbitrary agents can be expressed is to measure utility functions and facilitate comparisons between them. It also illustrates that the idea of building a programmable intelligent machine - and then plugging a utility function into it - is quite a general one.

There's near-constant "whining" on the 'net about humans not having utility functions. "We're better than that", or "we're too irrational for that", or "we're not built that way" - or whatever. My page explains what such talk is intended to mean.

It also illustrates that the idea of building a programmable intelligent machine - and then plugging a utility function into it - is quite a general one.

No. The utility function in question is a copy of the agent, with a utility of 1 for doing what the agent does and 0 for doing what the agent does not do, compelling one to simply do what the agent does.

With humans, for example, it means that the human utility function may not be simpler than the human brain. You have a utility function - it is 1 for doing what you want to do and 0 for doing what you don't want to do. You can, of course, write an agent that has the same utility function as you, but it will work by simulating you, and there is no proof that you can make an agent simpler than this.

It also illustrates that the idea of building a programmable intelligent machine - and then plugging a utility function into it - is quite a general one.

No. The utility function in question is a copy of the agent, with a utility of 1 for doing what the agent does and 0 for doing what the agent does not do,

To reiterate the intended point, the idea that "Any computable agent may be described using a utility function" illustrates that the idea of building a programmable intelligent machine - and then plugging a utility function into it - is quite a general one.

With humans, for example, it means that the human utility function may not be simpler than the human brain. [...]

That is, of course, untrue. The human brain might contain useless noise. Indeed, it seems quite likely that the human utility function is essentially coded in the human genome.

That is, of course, untrue. The human brain might contain useless noise.

But how much simpler would the utility be? What if it is 10^15 operations per moral decision (I mean, per moral comparison between two worlds)? Good luck using that to process the different choices of a write-in problem.

Indeed, it seems quite likely that the human utility function is essentially coded in the human genome.

Why not in the very laws of the universe, at that point? DNA is not a blueprint, it's a recipe, and it does not contain any of our culture.

edit:

To reiterate the intended point, the idea that "Any computable agent may be described using a utility function" illustrates that the idea of building a programmable intelligent machine - and then plugging a utility function into it - is quite a general one.

None of that. For agents that don't implement maximization of a simple utility, the utility-function 'description' which was mathematically proven includes a complete copy of the agent, and you gain nothing whatsoever by plugging it into some utility maximizer. You just have the maximizer relay the agent's actions, without doing anything useful.

Indeed, it seems quite likely that the human utility function is essentially coded in the human genome.

Why not in the very laws of the universe, at that point? [...]

That is not totally impossible. The universe does seem to have some "magic numbers" - which we can't currently explain and which contain significant complexity - the "fine structure constant", for example. In principle, the billionth digit of this could contain useful information about the human utility function, expressed via the wonders of the way chaos theory enables small changes to make big differences.

However, one has to ask how plausible this is. More likely, the physical constants are not critical beyond a few dozen decimal places. In that case, the laws of the universe look as though they are probably effectively small - and then the human utility function seems unlikely to fit into a description of them.

The point is that the laws of the universe lead to humans via repeated application of those laws. DNA, too, leads to humans, via repeated use of that DNA (and the above-mentioned laws of physics), but combined with the environment and culture. I'm not sure that we would like the raw human utility function, sans culture, to be used for any sort of decision. There's no good reason to expect the results to be nice, given just how many screwed-up things other cultures did (look at the Aztecs).

I don't deny that culture has an influence over what humans want. That's part of what got the "essentially" put into my statement - and emphasised.

In any case, calculating the human utility function from DNA, given that DNA is not a blueprint, would involve an embryonic development simulation followed by a brain simulation.