it's not a very obvious example

I honestly regret that I didn't make it as clear as I possibly could the first time around, but expressing original, partially developed ideas is not the same thing as reciting facts about well-understood concepts that have been explained and re-explained many times. Flippancy is needlessly hostile.

there are some problems to which search is inapplicable, owing to the lack of a well-defined search space

If not wholly inapplicable, then not performant, yes. Though the problem isn't that the search-space is not defined at all, but that the definitions which are easiest to give are also the least helpful (to return to the previous example, in the Platonic realm there exists a brainf*ck program that implements an optimal map from symptoms to diagnoses - good luck finding it). As the original author points out, there's a tradeoff between knowledge and the need for brute force. It may be that you can have an agent synthesize knowledge by consolidating the results of a brute-force search into a formal representation, which the agent can then use to tune or reformulate the search-space it was previously given to fit some particular purpose; but this is quite a level of sophistication above pure brute force.
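
To make the "easiest to define, least helpful" point concrete, here's a minimal Python sketch of that naive search space: enumerate every brainf*ck program in length order and test each against an input/output spec. The toy spec, tape size, and step limit are all invented for illustration; the point is only that the space is trivial to define and hopeless to search, since the candidate count grows as 8^n in program length n.

```python
from itertools import product

OPS = "+-<>[].,"  # the full brainf*ck alphabet

def run_bf(code, inp, max_steps=1000):
    """Interpret brainf*ck `code` on input string `inp`; return output or None."""
    # Pre-match brackets; unbalanced programs are simply invalid.
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return None
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        return None
    tape, ptr, pc, out, steps = [0] * 64, 0, 0, [], 0
    inp = list(inp)
    while pc < len(code):
        if steps == max_steps:
            return None  # treat step-limit overruns as failures
        steps += 1
        c = code[pc]
        if c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">": ptr = (ptr + 1) % len(tape)
        elif c == "<": ptr = (ptr - 1) % len(tape)
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = ord(inp.pop(0)) if inp else 0
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

def brute_force(spec, max_len=8):
    """Return the first program (in length order) consistent with `spec`."""
    for n in range(1, max_len + 1):
        for prog in map("".join, product(OPS, repeat=n)):
            if all(run_bf(prog, i) == o for i, o in spec):
                return prog
    return None

# Even this toy spec ("echo one byte, incremented") visits hundreds of
# candidates; a symptom->diagnosis map is unreachable this way.
print(brute_force([("a", "b"), ("x", "y")]))  # -> ",+."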

Edit:

this is not an issue with search-based optimization techniques; it's simply a consequence of the fact that you're dealing with an ill-posed problem

If the problems of literature or philosophy were not in some sense "ill-posed", they would also be dead subjects. The 'general' part of AGI would seem to imply some capacity for dealing with vague, partially defined ideas in useful ways.

for more abstract domains, it's harder to define a criterion (or set of criteria) that we want our optimizer to satisfy

Yes.

But there's a significant difference between choosing an objective function and "defining your search space" (whatever that means), and the latter concept doesn't have much use as far as I can see.

If you don't know what it means, how do you know that it's significantly different from choosing an "objective function", and why do you feel comfortable making a judgment about whether or not the concept is useful?

In any case, to define a search space is to provide a spanning set of production rules which allow you to derive all elements in the target set. For example, Peano arithmetic provides a spanning set of rules for arithmetic computations, and hence defines (in one particular way) the set of computations a search algorithm can search through in order to find arithmetic derivations satisfying whatever property. Similarly, the rules of chess define the search-space of valid board-state sequences in games of chess. For neural networks, it could mean defining a set of topologies, or a set of composition rules for layering networks together; and in a looser sense a loss function induces a "search space" on network weights, insofar as it practically excludes certain regions of the error surface from the region any training run is ever likely to explore. See the sketch below for a toy version of the production-rule sense.
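
Here is a small Python sketch of that first sense: two production rules span a space of arithmetic expressions, and a search then enumerates the space in increasing derivation depth, looking for an element with a given property. The grammar, leaves, and target are all invented for illustration.

```python
LEAVES = [1, 2, 3, 4]

def expand(depth):
    """Yield (expression-string, value) pairs derivable within `depth` steps."""
    if depth == 0:
        for n in LEAVES:
            yield str(n), n
        return
    shallower = list(expand(depth - 1))
    yield from shallower                      # everything derivable in fewer steps
    for (sa, va) in shallower:
        for (sb, vb) in shallower:
            yield f"({sa} + {sb})", va + vb   # rule: E -> (E + E)
            yield f"({sa} * {sb})", va * vb   # rule: E -> (E * E)

def search(target, max_depth=3):
    """Enumerate the space depth by depth; return a witness expression."""
    for depth in range(max_depth + 1):
        for expr, val in expand(depth):
            if val == target:
                return expr
    return None

print(search(24))   # e.g. "(2 * (3 * 4))"
```

The rules play exactly the role Peano axioms or chess rules play above: they are a finite description that spans an unbounded set of derivable structures, and the search algorithm only ever sees what the rules can generate.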

So is brainf*ck, and like NNs, bf programs are simple in the sense of being trivial to enumerate and hence search through. Defining a search space for a complex domain is equivalent to defining a subspace of BF programs or NNs, which could and probably does have a highly convoluted, warped separating surface. In the context of deep learning, your ability to approximate that surface is limited by your ability to encode it as a loss function.
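
A small numpy sketch of the looser sense in which a loss "induces a search space on weights": adding a quadratic penalty term makes whole regions of weight space effectively unreachable by gradient descent. The data, penalty strength, and learning rate here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=100)

def descend(penalty, steps=2000, lr=0.01):
    """Plain gradient descent on mean squared error + penalty * ||w||^2."""
    w = rng.normal(size=5)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * penalty * w
        w -= lr * grad
    return w

print(np.round(descend(penalty=0.0), 2))   # free to settle near true_w
print(np.round(descend(penalty=10.0), 2))  # confined to a ball near the origin
```

Same error surface, same optimizer; the extra loss term is the only thing carving out which region of weight space training will ever visit.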

It only makes sense to talk about "search" in the context of a *search space*; and all extant search algorithms / learning methods involve searching through a comparatively simple space of structures, such as the space of weights of a deep neural network or the space of board-states in Go and chess. Defining these spaces is pretty trivial. As we move on to attack more complex domains, such as abstract mathematics, philosophy, or procedurally generated music or literature which stands comparison to the best products of human genius, the problem of even /defining/ the search space in which you intend to leverage search-based techniques becomes massively involved.

The strength of the claim being made by Slashdot, and the lack of any examination by whoever wrote Slashdot's summary of ways in which it could be false, both invite skepticism.

I'm of the opinion that we are in base reality regardless, though. The reason is that the incentive for running a simulation is to observe the behavior of the system being simulated. If you have some vertical stack of simulations, all simulating intelligent agents in a virtual world, and most of these simulations are simulating basically the same thing, that makes simulation very costly, because the 0th-level simulators won't learn anything from a simulation being run by the simulants that they wouldn't learn from the "base-level" simulation. They would have an incentive to develop ways to starve non-useful simulant activity of computing resources.

The connection between neuroses and memories was something that made me think a lot. I've been trying to provoke myself into some kind of "transformation" for about 10 years, with some limited successes and a lot of failures for want of insight. Information like this is really valuable, so thank you for sharing your experience.

Given that world GDP growth continues for at least another century, 100%. :)

It is impossible for one to act on another's utility function (without first incorporating it into their own utility function).

This seems tautological, and trivially so. Whatever utility function you act on becomes, by virtue of that fact, "your" utility function.

these laws are exactly the outside world

That is my view precisely. One way out is to assert that there is at least one mind responsible for providing the percepts available to other minds; from its perspective nothing is unknown, and it fills the function of the "outside world".
