
# JohnDavidBustard comments on Recommended Reading for Friendly AI Research - Less Wrong

24 points, 09 October 2010 01:46PM


Comment author: 14 October 2010 10:14:31PM 0 points

I am sure these are interesting references for studying pure mathematics, but do they contribute significantly to solving AI?

In particular, it is interesting that none of your references mention any existing research on AI. Are there any practical artificial intelligence problems that these mathematical ideas have directly contributed towards solving?

E.g. vision, control, natural language processing, or automated theorem proving?

While there is a lot of focus on specific, mathematically defined problems on LessWrong (usually based on some form of gambling), there seems to be very little discussion of the actual technical problems of GAI, or a practical assessment of progress towards solving them. If this site is really devoted to rationality, should we not at least define our problem and measure progress towards its solution? Otherwise we risk being merely a mathematical social club or, worse, a probability-based religion.

Comment author: 15 October 2010 10:25:05AM 2 points

The main mystery in FAI, as I currently see it, is how to define its goal. The question of efficient implementation comes after that, and depends on it. There is no point in learning how to efficiently solve a problem you don't want solved. Hence the study of decision theory, which in turn benefits from understanding math.

See the "rationality and FAI" section and Eliezer's paper for a quick introduction, and also material from the sequences, for example complexity of value.

Comment author: 15 October 2010 06:43:33PM 0 points

Ok, I certainly agree that defining the goal is important, although I think there is a definite need for a balance between investigation of the problem and attempts at its solution (as each feeds into the other), much as academia currently functions. For example, any AI will need a model of human and social behaviour in order to make predictions. Solving how an AI might learn this would represent a huge step towards solving FAI, and a huge step in understanding the problem of being friendly: whatever the solution is, it will involve some configuration of society that maintains and maximises some set of measurable properties.

If the system can predict how a person will feel in a given state, it can solve for which utopia we will be most enthusiastic about. Eliezer's posts seem to be exploring this problem manually, without really taking a stab at a solution or proposing a route to reaching one. This can be very entertaining, but I'm not sure it's progress.

Comment author: 15 October 2010 07:15:45PM 1 point

Unfortunately, if you think about it, "predicting how a person feels" isn't really helpful for anything, and doesn't contribute to the project of FAI at all (see Are wireheads happy? and The Hidden Complexity of Wishes, for example).

The same happens with other obvious ideas that you think up in the first five minutes of considering the problem, ideas which appear to argue that "research into the nuts and bolts of AGI" is relevant for FAI. On further reflection, it always turns out that these arguments don't hold any water.

The problem comes down to the question of understanding what exactly it is you want FAI to do, not of how you'd manage to write an actual program that does that with reasonable efficiency. The horrible truth is that we don't have the slightest technical understanding of what it is we want.

Comment author: 15 October 2010 08:04:23PM 1 point

Here is a more complex variant that I can't see how to dismiss easily.

If you can build a "predict how humans feel in situation x" function, you can do some interesting things. Let's call this function feel(x). As well as first-order happiness, you can also predict how a person will feel when told about situation X, i.e. feel("told about X").

You might be able to recover something like preference if you can calculate feel("the situation where X is suggested, and the person is told about feel(X) and about all other possible situations") for all possible situations, as long as you can rank the output of feel(X) in some way.

That is, as long as the human predictor can cope with holding all possible situations in mind, and doesn't return "worn out" for every one of them.

Anyway, it is an interesting riff on the idea. Anyone see any holes that I am missing?
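A minimal toy sketch of the construction above, assuming a feel() predictor we don't actually have (make_feel, preferred_situation, and all the scores are hypothetical illustrations, not a real model of human feelings):

```python
# Toy sketch of recovering preference from a hypothetical feel(x) function.
# A real feel() would be a learned predictor of human feelings; here it is
# just a hand-written lookup table standing in for one.

def make_feel(table):
    """Return a feel(x) function backed by a fixed table of scores."""
    def feel(situation):
        return table.get(situation, 0.0)
    return feel

def preferred_situation(situations, feel):
    """Second-order variant: rank situations not by feel(x) directly,
    but by how the person is predicted to feel when told about x."""
    return max(situations, key=lambda x: feel("told about " + x))

# Illustrative scores only: a wireheading description that scores highest
# when evaluated in isolation wins the ranking outright.
feel = make_feel({
    "told about wireheading": 0.9,
    "told about utopia A": 0.7,
    "told about utopia B": 0.4,
})

best = preferred_situation(["wireheading", "utopia A", "utopia B"], feel)
print(best)  # -> wireheading
```

The sketch also makes the objection below concrete: whatever maximises such an estimate need not be anything we would endorse on reflection.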

Comment author: 15 October 2010 08:09:25PM 0 points

Try to figure out what maximizes this estimate. It won't be anything you'd want implemented; it will be a wireheading stimulus. Plus, FAI needs to evaluate (and work to implement) whole possible worlds, not verbal descriptions. And questions about possible worlds involve quantities of data that a mere human can't handle.

Comment author: 15 October 2010 09:11:33PM 1 point

> Try to figure out what maximizes this estimate. It won't be anything you'd want implemented; it will be a wireheading stimulus.

I'm not sure that there is a verbal description of a possible world that is also a wirehead stimulus for me. There might be, which might be enough to discount this method.

> And questions about possible worlds involve quantities of data that a mere human can't handle.

True.

Comment author: 16 October 2010 10:29:33AM 0 points

I'm not sure I understand the distinction between an answer that we would want and a wireheading solution. Are not all solutions wireheading, with an elaborate process to satisfy our status concerns? I.e. is there a real difference between a world that satisfies what we want and directly altering what we want? If the wire in question happens to be an elaborate social order rather than a direct connection, why is that different? What possible goal could we want pursued other than the one which we want?

Comment author: 16 October 2010 12:27:16PM 0 points

> is there a real difference between a world that satisfies what we want and directly altering what we want?

From an evolutionary point of view, those things that manage to procreate will outcompete those things that change themselves to not care about that and just wirehead.

So in non-singleton situations, alien encounters, and any form of resource competition, it matters whether you wirehead or not. Pleasure, in an evolved creature, can be seen as giving (very poor) information in the map about the territory of future influence for the patterns that make up you.

Comment author: 16 October 2010 03:37:03PM 0 points

So, assuming survival is important, a solution that maximises survival plus wireheading would seem to solve that problem. Of course, it may well just delay the inevitable heat-death ending, but if we choose to make that important, then sure, we can optimise for survival as well. I'm not sure that gets around the issue that any solution we produce (with or without optimisation for survival) is merely an elaborate way of satisfying our desires (in this case including the desire to continue to exist), and thus all FAI solutions are a form of wireheading.

Comment author: 15 October 2010 08:23:21PM -1 points

When I say feel, I include:

"I feel that is correct", "I feel that is proved", etc.

Regardless of the answer, it will ultimately involve our minds expressing a preference. We cannot escape our psychology. If our minds are deterministic computational machines within a universe without any objective value, all our goals are merely elaborate ways to make us feel content with our choices and a possibly inconsistent set of mental motivations. Attempting to model our psychology seems like the most efficient way to solve this problem. Is the idea that there is some other kind of answer? How could it be shown to be legitimate?

I suspect that the desire for another answer is preventing practical progress in creating any meaningful solution. There are many problems and goals that would be relatively uncontroversial for an AI system to attempt to address. The outcome of the work need only be better than what we currently have to be useful; we don't have to solve all problems before addressing some of them, and indeed, without attempting to address some of them, I doubt we will make significant progress on the rest.

Comment author: 15 October 2010 08:42:40PM 3 points

> If our minds are deterministic computational machines within a universe without any objective value, all our goals are merely elaborate ways to make us feel content with our choices and a possibly inconsistent set of mental motivations. Attempting to model our psychology seems like the most efficient way to solve this problem.

Which problem? You need to define which action the AI should choose, in whatever problem it's solving, including problems that are not humanly comprehensible. This is naturally done in terms of actual humans with all their psychology (as the only available source of sufficiently detailed data about what we want), but it's not at all clear in what way you'd want to use (interpret) that human data.

"Attempting to model psychology" doesn't answer any questions. Assume you have a proof-theoretic oracle and a million functioning uploads living in a virtual world, however structured, so that you can run any number of experiments involving them, restart these experiments, infer the properties of whole infinite collections of such experiments, and so on. You still won't know how even to approach creating a FAI.

Comment author: 15 October 2010 09:07:47PM 0 points

If there is an answer to the problem of creating an FAI, it will result from a number of discussions and ideas that lead a set of people to agree that a particular course of action is a good one. By modelling psychology, it will be possible to determine all the ways this can be done. The question then is why choose one over any of the others? As soon as one is chosen, it will work and everyone will go along with it. How could we rate each one (they would all be convincing by definition)? Is it meaningful to compare them? Is the idea that there is some transcendent answer that is correct or important, which doesn't boil down to what is convincing to people?

Comment author: 15 October 2010 09:23:14PM 2 points

Understanding the actual abstract reasons for agents' decisions (such as decisions about agreeing with a given argument) seems to me a promising idea, and I'm trying to make progress on it. (Agents' decisions don't need to be correct or well-defined on most inputs for the reasons behind their more well-defined behaviors to lead the way to figuring out what to do in other situations, or what should be done where the agents err.) Note that if you postulate an algorithm that makes use of humans as its elements, you'd still have the problems of failure modes, of regret for bad design decisions, and of the capability to answer humanly incomprehensible questions, and these problems need to be solved before you start the thing up.

Comment author: 15 October 2010 10:28:50PM 0 points

Interesting. If I understand correctly, the idea is to find a theoretically correct basis for deciding on a course of action given existing knowledge, then to make this calculation efficient, and then to direct it towards a formally defined objective.

As distinct from a system which, potentially suboptimally, attempts solutions and tries to learn improved strategies, i.e. one in which the theoretical basis for decision making is ultimately discovered by the agent over time (e.g. as we have done with the development of probability theory). I think the perspective I'm advocating is to produce a system that is more like an advanced altruistic human (with a lot of evolutionary motivations removed) than a provably correct machine. Ideally, such a system could itself propose solutions to the FAI problem that would be convincing, as a result of an increasingly sophisticated understanding of human reasoning and motivations.

I realise there is a fear that such a system could develop convincing yet manipulative solutions. However, the output need only be more trustworthy than a human's response to be legitimate (for example, if an analysis of its reasoning algorithm shows that it lacks a Machiavellian capability, unlike humans).

Or, put another way, can a robot Vladimir (Eliezer, etc.) be made that solves the problem faster than its human counterpart does? And is there any reason to think this process is less safe (particularly when AI developments will continue regardless)?

Comment author: 15 October 2010 10:58:11PM 2 points

> Interesting, if I understand correctly the idea is to find a theoretically correct basis for deciding on a course of action given existing knowledge and then to make this calculation efficient and then direct towards a formally defined objective.

Yes, but there is only one top-level objective, to do the right thing, so one doesn't need to define an objective separately from the goal system itself (and improving the state of knowledge is just another thing one can do to accomplish the goal, so again not a separate issue).

FAI really stands for a method of efficient production of goodness, as we would want it produced, and there are many landmines on this path; in particular, humanity in its current form doesn't seem to be able to retain its optimization goal in the long run, and the same applies to most obvious hacks that don't have explicit notions of preference, such as upload societies. It's not just a question of speed, but also of the ability to retain the original goal after quadrillions of incompletely understood self-modifications.