
Comment author: VipulNaik 17 April 2014 03:46:29PM *  0 points [-]

Sorry for my lack of clarity.

I was making the point that if the run time can be expressed as a sum of products, where the summands are the times taken for some sub-algorithms, then we can attempt to tweak those sub-algorithms to reduce the time taken for the individual tasks.

The time taken for the sub-algorithm may or may not be polynomial in the input dimensions.

Comment author: V_V 17 April 2014 03:57:26PM 0 points [-]

Ok.

Comment author: VipulNaik 16 April 2014 07:17:03PM 1 point [-]

Thanks for your thoughts.

You are assuming that the algorithm runs in polynomial time w.r.t. the input dimensions. That's a strong assumption.

I'm not assuming that.

In fact, my analysis isn't asymptotic at all, but rather, for fixed (but large) input sizes. In the asymptotic setting, moving to a new algorithm can yield an improvement that is no longer in the same order equivalence class (I could have elaborated more on this, but this was already a PS).

What do you mean by "improve any one factor"? In a complexity function, the variables represent the relevant dimensions of the input. How do you "improve" them?

I meant "factor" in the "part of a multiplication" sense. For instance, let's say you have an algorithm that has an outer loop A that calls an inner loop B every step. Then if a and b are the number of steps respectively, the total time is ab. Now, reducing either a or b by a given factor would reduce the product. That reduction by a factor could be through the identification of steps that turn out to be unnecessary.
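A quick Python sketch of that point (illustrative only, not from the thread): the total step count of a nested loop is the product ab, so shrinking either factor shrinks the product proportionally.

```python
def run(a, b):
    """Count the steps done by an outer loop A of a iterations,
    each of which runs an inner loop B of b iterations."""
    steps = 0
    for _ in range(a):        # outer loop A
        for _ in range(b):    # inner loop B
            steps += 1
    return steps

# Total work is the product a * b:
assert run(100, 50) == 100 * 50
# Halving either factor (e.g. by removing unnecessary inner steps)
# halves the total:
assert run(100, 25) == run(100, 50) // 2
assert run(50, 50) == run(100, 50) // 2
```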

And more generally, you seem to be under the impression that algorithms can be improved indefinitely, as evidenced by the fact that you are mentioning algorithm improvement in a post about exponential growth.

I am not under this impression. Sorry for the confusion.

Algorithms can't be improved indefinitely, nor can population or the economy. But we can still talk of exponential growth over a range of time as we chip off at the obvious issues.

Comment author: V_V 17 April 2014 03:22:43PM 0 points [-]

I'm not assuming that.

"let's assume that the time taken is a sum of products"

This is the definition of a polynomial, although you might have intended it to be a polynomial in something other than the input dimensions. In that case, I'm not sure I've understood what you are arguing.

Comment author: V_V 16 April 2014 05:19:05PM 0 points [-]

For simplicity, let's assume that the time taken is a sum of products that are all of the same order as one another.

You are assuming that the algorithm runs in polynomial time w.r.t. the input dimensions. That's a strong assumption.

To improve a particular summand by a particular constant of proportionality, we may improve any one factor of that summand by that constant of proportionality. Or, we may improve all factors of that summand by constants that together multiply to the desired constant of proportionality.

What do you mean by "improve any one factor"? In a complexity function, the variables represent the relevant dimensions of the input. How do you "improve" them?

And more generally, you seem to be under the impression that algorithms can be improved indefinitely, as evidenced by the fact that you are mentioning algorithm improvement in a post about exponential growth.
They can't: complexity is bounded from below, which means that for each problem there exists a maximally efficient algorithm. And similarly, once you specify the details of the hardware, there exists a maximally efficient program for that specific hardware. Once you get there, you can't improve any further.

Comment author: So8res 15 April 2014 10:36:07PM *  1 point [-]

See the Death to AIXI section in the post linked above.

Comment author: V_V 16 April 2014 12:34:12AM 1 point [-]

I don't think it addresses the point, and as you can see in the comment section of that post, I've already raised this objection there, as other users did, including Cyan.

Comment author: So8res 15 April 2014 07:51:47PM 2 points [-]

Actually, an AI that believes it only communicates with the environment via input/output channels cannot represent the hypothesis that it will stop receiving input bits. See this post for a discussion of the finer details.

Comment author: V_V 15 April 2014 09:57:39PM 1 point [-]

It can represent the hypothesis that, if something happens, the string of input bits it receives will be some default "no signal" symbol or random static, and, most importantly, that the string of output bits it produces will be ignored.

Comment author: Gunnar_Zarncke 14 April 2014 04:21:55PM 0 points [-]

Interesting, I wasn't aware of that. Does it hold only for sufficiently complex games? Can you give a reference?

Comment author: V_V 14 April 2014 07:59:59PM *  1 point [-]

Hmm, if you were talking about "rock-paper-scissors" as an example of a game without a pure Nash equilibrium, then some games have one and some don't. Intuitively, the more complex the game (the larger the number of pure strategies), the less likely it is that there is a pure Nash equilibrium, but that's not a strict rule.
However, under weak assumptions, there is always at least one, generally mixed, Nash equilibrium.

If by "winning strategy" you meant a mixed strategy that guarantees a greater-than-zero payoff, then in a two-player symmetric zero-sum game it can't exist:
if it existed, both players would use it at the Nash equilibrium, and the sum of their expected payoffs would be greater than zero, contradicting the zero-sum assumption.
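This is easy to check concretely for rock-paper-scissors (a Python sketch, illustrative only): against the uniform mixed equilibrium strategy, every reply earns exactly zero, so no strategy can guarantee a positive payoff.

```python
# Rock-paper-scissors payoff matrix for player 1 (symmetric zero-sum):
# rows = player 1's move, columns = player 2's move;
# order is rock, paper, scissors.
M = [
    [ 0, -1,  1],  # rock loses to paper, beats scissors
    [ 1,  0, -1],  # paper beats rock, loses to scissors
    [-1,  1,  0],  # scissors loses to rock, beats paper
]

uniform = [1/3, 1/3, 1/3]  # the mixed Nash equilibrium strategy

def expected_payoff(p, q, M):
    """Player 1's expected payoff playing mixed strategy p against q."""
    return sum(p[i] * q[j] * M[i][j] for i in range(3) for j in range(3))

# Every pure reply against the equilibrium strategy earns exactly zero:
for j in range(3):
    pure = [1 if k == j else 0 for k in range(3)]
    assert abs(expected_payoff(uniform, pure, M)) < 1e-12
```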

Comment author: So8res 13 April 2014 04:26:56PM 0 points [-]

From the technical report:

In this report, the register machines use a very simple instruction set which we call the constree language. A full implementation can be found in Appendix B. However, when modelling concrete decision problems in Botworld, we may choose to replace this simple language by something easier to use. (In particular, many robot programs will need to reason about Botworld's laws. Encoding Botworld into the constree language is no trivial task.)

Comment author: V_V 13 April 2014 05:09:07PM 0 points [-]

So next version will accept robot programs written in Coq, I suppose ;)

Comment author: So8res 12 April 2014 05:27:15PM *  7 points [-]

You may have misunderstood the point of Botworld. It is a tool for understanding our progress on open problems in FAI, not a tool for solving such problems directly.

We're trying to learn more about naturalistic agents and overcome certain obstacles of self-reference. Many classical models of machine intelligence fall short in counter-intuitive ways, and many proposed solutions are quite abstract. Botworld gives us a way to concretely illustrate both the flaws and the proposals.

We release Botworld not because it's the cutting edge of our research, but because when we do start talking about the research we've been doing it will be helpful to have a concrete way to illustrate the problems that we have found or the solutions that we're exploring.

Comment author: V_V 13 April 2014 10:50:30AM *  -1 points [-]

I appreciate the effort, but you want to study agents that solve problems by using mutual program analysis and self-modification (in the form of generating different successors). Will you be able to come up with non-trivial examples where such strategies pay off in your simulator?
It seems quite hard to me. Due to technical limitations, anything involving automated theorem proving or complex planning is going to be off the table.

Comment author: Thomas 12 April 2014 07:49:31AM -1 points [-]

Might be interesting, but it is rather trivial for now. Decades ago, some more advanced projects were undertaken.

MIRI should be on the cutting edge of AI progress - at least it perceives itself as such - but this is far from the edge.

Comment author: V_V 12 April 2014 02:38:42PM 1 point [-]

Might be interesting, but it is rather trivial for now. Decades ago, some more advanced projects were undertaken.

Yep. Core War has already been mentioned. I'll add Tierra, Avida and Nanopond (which even features "abiogenesis").

MIRI should be on the cutting edge of AI progress - at least it perceives itself as such - but this is far from the edge.

Indeed.

Comment author: Gunnar_Zarncke 11 April 2014 06:33:54AM *  4 points [-]

Reminds me of Core War. Only that there the cells were single memory locations, and there were no 'actors' - if you could call them that - because cells had no owner; only execution threads did.

Maybe some insights can be derived by comparing with the significant experience from Core War programs. For example, there was no single winning strategy, but rather rock-paper-scissors-like types of programs. Core War also allows analysis and exploitation of an opponent's code - albeit at a very low level.

ADDED: The advantage of this model (which has been used to study the evolution of actors) is that it is extremely simple. The disadvantage is that the effect per computation is very high: a simple loop can alter a sizable fraction of the 'world' within a short time. Thus complex analysis of opponents never pays off (except for cheap tests like 'is some opponent at this address?').

I like your bot-world as it is a kind of hybrid of other bot worlds and Core War, in that it does allow inspection. But I think its complexity (robots, parts, items, inspection, cells; esp. the winning condition) doesn't cut directly to the problem at hand. The key point missing in Core War is a limit to physical reach. Limiting the range of mov instructions would have made a significant difference - even without going to 2D or 3D (which actually complicates reasoning). Or mov instructions could take more time the further they reach.

I agree that a more gradual winning criterion than dead/alive (as in Core War) would help direct the search for more efficient programs.

See also: The AI Challenge - Ants Article

Comment author: V_V 11 April 2014 04:01:50PM 1 point [-]

For example, there was no single winning strategy, but rather rock-paper-scissors-like types of programs.

That's true for any symmetric zero-sum game.
