Comment author: marks 01 June 2010 03:49:28AM *  1 point [-]

I think there is definite potential to the idea, but I don't think you pushed the analogy quite far enough. I can see an analogy between what is presented here and both human rights and Kantian moral philosophy.

Essentially, we can think of human rights as what many people believe to be essential bare-minimum conditions on human treatment, i.e. that in the class of all "good and just" worlds everybody's human rights will be respected. Here human rights correspond to the "local rigidity" condition on the subgraph. Human rights, too, are generally only meaningful for the people one immediately interacts with in one's social network.

This does simplify the question of just government and moral action in the world (and political philosophers are desirous of such arguments). I don't think, however, that the local conditions for human existence are as easy to specify as those in the case of a sensor network graph.

In some sense there is a tradition largely inspired by Kant that attempts to do the moral equivalent of what you are talking about: use global regularity conditions (on morals) to describe local conditions (on morals: say the ability to will a moral decision to a universal law). Kant generally just assumed that these local conditions would achieve the necessary global requirements for morality (perhaps this is what he meant by a Kingdom of Ends). For Kant the local conditions on your decision-making were necessary and sufficient conditions for the global moral decision-making.

In your discussion (and in the approach of the paper), however, the local conditions placed (on morals or on each patch) are not sufficient to achieve the global conditions (for morality, or on the embedding). So it's a weakening of the approach advanced by Kant. The idea seems to be that once some aspects (but not all) of the local conditions have been worked out, one can then piece together the local decision rules into something cohesive.

Edit: I rambled, so I put my other idea into another comment.

Comment author: JoshuaZ 30 May 2010 09:05:56PM *  3 points [-]

Of the examples given, some of them certainly involve controlled experiments in the classical sense. Evolutionary biology, for example, involves tests of genetic drift and speciation in the lab environment. For example, one matter that has been extensively tested in labs is different speciation mechanisms. The founder-effect mechanism is one that is particularly easy to test in a lab. For one major paper on the subject see this paper. A much older example is speciation by hybridization, which has been tested in controlled lab environments for about a century now. The oldest I'm aware of in that regard is a 1912 paper by Digby (I haven't read it, and I'd have to go look up the citation, but it shouldn't be hard to find), and there have been many papers since then on the same topic.

Edit: Citation for Digby according to TOA is: Digby, L. 1912. The cytology of Primula kewensis and of other related Primula hybrids. Ann. Bot. 26:357-388.

Comment author: marks 30 May 2010 10:21:00PM 1 point [-]

All the sciences mentioned above definitely do rely on controlled experimentation. But their central empirical questions are not amenable to being directly studied by controlled experimentation. We don't have multiple earths or natural histories upon which we can draw inference about the origins of species.

There is a world of difference between saying "I have observed speciation under these laboratory conditions" and "speciation explains observed biodiversity". These are distinct types of inferences. This of course does not mean that people who perform inference on natural history don't use controlled experiments; indeed, they should draw on as much knowledge as possible about the mechanisms of the world in order to construct plausible theories of the past. But they can't run the world multiple times under different conditions to test their theories of the past in the way that we can test speciation.

Comment author: timtyler 30 May 2010 08:54:23PM *  0 points [-]

Natural experiments are experiments too. See:

http://en.wikipedia.org/wiki/Natural_experiment

http://en.wikipedia.org/wiki/Experiment

http://dictionary.reference.com/browse/experiment

I think the usage in the cited book is bad and unorthodox. E.g. one can still study storms experimentally - though nobody can completely control a storm.

Comment author: marks 30 May 2010 10:09:34PM 2 points [-]

I think we are talking past each other. I agree that those are experiments in a broad and colloquial sense of the term. They aren't "controlled" experiments, though, which is the term I wanted to clarify (since I know a little bit about it). Natural experiments do not allow you to randomly assign treatments to experimental units, which generally means the risk of bias is greater (hence the statistical analysis must be done with care and the conclusions drawn should face greater scrutiny).

Pick up any textbook on statistical design or statistical analysis of experiments and the framework I gave will be what's in there for "controlled experimentation". There are other types of experiments, but these suffer from the problem that it can be difficult to sort out hidden causes. Suppose we want to know whether the presence of A causes C (say, whether eating meat causes heart disease). In an observational study we find units having trait A and units lacking it (so, meat-eaters and vegetarians) and we then wait to observe response C. If we observe a response C in experimental units possessing trait A, it's hard to know whether A causes C or whether there is some third trait B (present in some of the units) which causes both A and C.

In the case of a controlled experiment, A is now a treatment and not a trait of the units (in this case you would randomly assign a carnivorous or vegetarian diet to people), so we can randomly assign A to the units (and the randomization means that not every unit having hidden trait B will be given treatment A). In this case we might observe that A and C have no relation, whereas in the observational study we might see one. (For instance, people who choose to be vegetarian may be more focused on health.)
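The meat-eating example can be sketched as a quick simulation. The specific numbers below (the prevalence of the hidden trait B and the conditional probabilities) are invented purely for illustration; the point is only that random assignment of A breaks the link between A and the confounder B:

```python
import random

random.seed(0)

def simulate(randomized, n=10000):
    """Simulate units with a hidden trait B that raises both the chance of
    trait A (meat-eating) and of outcome C (heart disease). A itself has
    no causal effect on C."""
    exposed, unexposed = [], []
    for _ in range(n):
        b = random.random() < 0.5                       # hidden trait B
        if randomized:
            a = random.random() < 0.5                   # treatment by coin flip
        else:
            a = random.random() < (0.8 if b else 0.2)   # B influences A
        c = random.random() < (0.6 if b else 0.1)       # only B influences C
        (exposed if a else unexposed).append(c)

    def rate(xs):
        return sum(xs) / len(xs)

    # Apparent "effect" of A on C: difference in outcome rates.
    return rate(exposed) - rate(unexposed)

obs_gap = simulate(randomized=False)  # large spurious gap (about 0.3 here)
rct_gap = simulate(randomized=True)   # near zero once A is randomized
```

In the observational arm, units with A are mostly units with B, so C looks associated with A; randomizing A makes the two groups comparable and the apparent effect disappears.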

An example of how econometricians have dealt with "selection bias" or the fact that observation studies fail to have certain nice properties of controlled experiments is here

Comment author: timtyler 30 May 2010 03:20:42PM *  2 points [-]

I think that is a non-standard interpretation of the terminology:

"A controlled experiment generally compares the results obtained from an experimental sample against a control sample, which is practically identical to the experimental sample except for the one aspect whose effect is being tested (the independent variable)."

http://en.wikipedia.org/wiki/Experiment#Controlled_experiments

It never says the control sample has been influenced by the experimenter. It could instead be chosen by the experimenter - from the available spectrum of naturally-occurring phenomena.

Comment author: marks 30 May 2010 08:21:57PM 0 points [-]

I think it's standard in the literature: "The word experiment is used in a quite precise sense to mean an investigation where the system under study is under the control of the investigator. This means that the individuals or material investigated, the nature of the treatments or manipulations under study and the measurement procedures used are all settled, in their important features at least, by the investigator." The theory of the design of experiments

To be sure there are geological experiments where one, say, takes rock samples and subjects various samples to a variety of treatments, in order to simulate potential natural processes. But there is another chunk of the science which is meant to describe the Earth's geological history and for a controlled experiment on that you would need to control the natural forces of the Earth and to have multiple Earths.

The reason why one needs to control an experiment (this is a point elaborated on at length in Cox and Reid) is to prevent bias. Take the hypothesis of continental drift. We have loads of "suspicious coincidences" that suggest continental drift (such as similar fossils on different landmasses, certain kinds of variations in the magnetic properties of the seafloor, the fact that seafloor rocks are much younger than land rocks, and earthquake patterns/fault-lines). Critically, however, we don't have an example of an earth that doesn't have continental drift. It is probably the case that some piece of "evidence" currently used to support the theory of continental drift will turn out to be a spurious correlation. It's very difficult to test for these because of the lack of control. The fact that we are almost certainly on a continental-drifting world biases us towards thinking that some geological phenomena are caused by drift even when they are not.

Comment author: timtyler 30 May 2010 03:04:02PM 0 points [-]

Surely those have "controlled experimentation".

Comment author: marks 30 May 2010 03:13:16PM 2 points [-]

Those sciences are based on observations. Controlled experimentation requires that you have some set of experimental units to which you randomly assign treatments. With geology, for instance, you are mostly trying to figure out the structure of the Earth's crust. There are no real treatments that you apply; instead you observe the "treatments" that the earth has applied to the earth. That is, you can't decide which area will have a volcano or an earthquake; you can't choose to change the direction of a plate or the configuration of the plates; you can't change the chemical composition of rock at large scale; etc.

All one can do is carefully collect measurements, build models of them, and attempt to create a cohesive picture that explains the phenomena. Control implies that you can do more than just collect measurements.

Comment author: snarles 24 May 2010 09:58:03AM *  0 points [-]

Indeed, the truth of the matter is that I would be interested in contributing to SIAI, but at the moment I am still not convinced that it would be a good use of my resources. My other objections still haven't been satisfied, but here's another argument. As usual, I don't personally commit to what I claim, since I don't have enough knowledge to discuss anything in this area with certainty.

The main thing this community seems to lack when discussing the Singularity is political savvy. The primary forces that shape history are, and quite likely always will be, economic and political motives rather than technology. Technology and innovation are expensive, and innovators require financial and social motivation to create. This applies superlinearly to projects so large as to require collaboration.

General AI is exactly that sort of project. There is no magic mathematical insight that will enable us to write a program in a hundred lines of code that can improve itself in any reasonable amount of time. I'm sure Eliezer is aware of the literature on optimization processes, but the no-free-lunch principle and the practical randomness of innovation mean that an AI seeking to self-improve can only do so with an (optimized) random search. Humans essentially do the same thing, except we have knowledge and certain built-in processes to help us constrain the search space (but this also makes us miss certain obvious innovations). To make GAI a real threat, you have to give it enough knowledge that it can understand the basics of human behavior, or enough knowledge to learn more on its own from human-created resources. This is highly specific information which would take a fully general learning agent a lot of cycles to infer unless it were fed the information in a machine-friendly form.
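The "(optimized) random search" picture can be illustrated with a toy hill-climber. The objective here (count of 1-bits in a string) is my own stand-in for an opaque measure of a candidate program's quality, not anything specific to GAI:

```python
import random

random.seed(1)

def random_hill_climb(length=40, budget=2000):
    """Toy 'optimized random search': propose a random one-bit mutation and
    keep the mutant only when it scores at least as well. The score (number
    of 1-bits) stands in for any black-box measure of program quality."""
    def score(bits):
        return sum(bits)

    current = [random.randint(0, 1) for _ in range(length)]
    for _ in range(budget):
        mutant = list(current)
        i = random.randrange(length)
        mutant[i] ^= 1                       # flip one random bit
        if score(mutant) >= score(current):  # greedy acceptance rule
            current = mutant
    return score(current)
```

Note how the search uses no structural knowledge of the problem at all; it succeeds here only because the toy landscape is trivially smooth, which is the searcher's version of the background knowledge the comment says humans bring.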

Now we will discuss the political and economic aspects of GAI. Support of general artificial intelligence is a political impossibility, because general AI, by definition, is a threat to the jobs of voters. By the time GAI becomes remotely viable, a candidate supporting a ban of GAI will have nearly universal support. It is impossible even to defend GAI on the grounds that the research it produces could save lives, because no medical researcher will welcome a technology that does their job for them. The same applies to any professional. There is a worry on this site that people underestimate GAI, but far more likely is that GAI or anything remotely like it is vastly overestimated as a threat.

The economic aspects are similar. GAI is vastly more costly to develop (for reasons I've outlined), and doesn't provide many advantages over expert systems. Besides, no company is going to produce a self-improving tool in the first place, because nobody, in theory, would ever have to buy an upgraded version.

These political and economic forces are a powerful retardant against the possibility of a general AI catastrophe, and have more heft than any focused organization like SIAI could ever have. Yet much like Nader spoiling Al Gore's vote, the minor influence of SIAI might actually weaken rather than reinforce these protective forces. By claiming to have the tools in place to implement the strategically named 'friendly AI', SIAI might in fact assuage public worries about AI. Even if the organization itself does not take actions to do so, GAI advocates will be able to exaggerate the safety of friendly AI and point out that 'experts have already developed Friendly AI guidelines' in press releases. And by developing the framework to teach machines about human behavior, SIAI lowers the cost for any enterprise that, for some reason, is interested in developing GAI.

At this point, I conclude my hypothetical argument. But I have realized that it is now my true position that SIAI should make it its clear position that, if tenable, NO general AI is preferable to friendly AI. (Back to no-accountability mode: it may be that general AI will eventually come, but by the point it has become an eventuality, the human race will be vastly more prepared than it is now to deal with such an agent on an equal footing.)

Comment author: marks 25 May 2010 10:05:03AM 1 point [-]

Bear in mind that the people who used steam engines to make money didn't make it by selling the engines: rather, the engines were useful in producing other goods. I don't think that the creators of a cheap substitute for human labor (GAI could be one such example) would be looking to sell it necessarily. They could simply want to develop such a tool in order to produce a wide array of goods at low cost.

I may think that I'm clever enough, for example, to keep it in a box and ask it for stock market predictions now and again. :)

As for the "no free lunch" business, while it's true that any real-world GAI could not efficiently solve every induction problem, it wouldn't need to in order to be quite fearsome. Indeed, being able to efficiently solve at least the same set of induction problems that humans solve (particularly if it's in silicon and the hardware is relatively cheap) is sufficient to pose a big threat (and to be potentially quite useful economically).

Also, there is a non-zero possibility that there already exists a GAI whose creators decided the safest, most lucrative, and most beneficial thing to do was to set the GAI to designing drugs, thereby avoiding giving the GAI too much information about the world. The creators could then have set up a biotech company that just so happens to produce a few good drugs now and again. It's kind of like how automated trading came from computer scientists and not the then-employed traders. I do think it's unlikely that somebody working in medical research is going to develop GAI, not least because of the job threat. The creators of a GAI are probably going to be full-time professionals working on the project.

Comment author: timtyler 23 May 2010 05:20:03PM 0 points [-]

Surely Peter Norvig never said that!

Comment author: marks 23 May 2010 11:05:20PM 1 point [-]

Go to 1:00 minute here

"Building the best possible programs" is what he says.

Comment author: Daniel_Burfoot 23 May 2010 03:53:41PM 3 points [-]

I think there is a science of intelligence which (in my opinion) is closely related to computation, biology, and production functions (in the economic sense).

Interesting that you're taking into account the economic angle. Is it related to Eric Baum's ideas (e.g. "Manifesto for an evolutionary economics of intelligence")?

The difficulty is that there is much debate as to what constitutes intelligence: there aren't any easily definable results in the field of intelligence nor are there clear definitions.

Right, so in Kuhnian terms, AI is in a pre-paradigm phase where there is no consensus on definitions or frameworks, and so normal science cannot occur. That implies to me that people should spend much more time thinking about candidate paradigms and conceptual frameworks, and less time doing technical research that is unattached to any paradigm (or attached to a candidate paradigm that is obviously flawed).

Comment author: marks 23 May 2010 05:09:29PM 2 points [-]

It actually comes from Peter Norvig's definition that AI is simply good software, a comment that Robin Hanson made, and the general theme of Shane Legg's definitions, which frame intelligence as the ability to achieve particular goals.

I would also emphasize that the foundations of statistics can (and probably should) be framed in terms of decision theory (see DeGroot, "Optimal Statistical Decisions", for what I think is the best book on the topic; as a further note, the decision-theoretic perspective is neither frequentist nor Bayesian: those two approaches can both be understood through decision theory). The notion of an AI as an automated statistician captures at least the spirit of how I think about what I'm working on, and this requires fundamentally economic thinking (in terms of the tradeoffs) as well as notions of utility.
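A minimal sketch of that decision-theoretic framing, in the spirit of DeGroot (the coin-flipping setup and the numbers are my own illustration): a statistical decision is the action minimizing posterior expected loss, and under squared-error loss the optimal point estimate works out to the posterior mean:

```python
import random

random.seed(2)

# Beta posterior for a coin's bias: uniform Beta(1, 1) prior, then 7 heads
# and 3 tails observed, giving Beta(8, 4) with posterior mean 8/12.
a, b = 1 + 7, 1 + 3
posterior = [random.betavariate(a, b) for _ in range(10000)]

def expected_loss(action, loss):
    """Monte Carlo posterior expected loss of a candidate action
    (here the action is a point estimate of the bias)."""
    return sum(loss(action, theta) for theta in posterior) / len(posterior)

def squared(d, theta):
    return (d - theta) ** 2

# Pick the estimate minimizing posterior expected squared-error loss
# over a coarse grid of candidate actions.
candidates = [i / 100 for i in range(1, 100)]
best = min(candidates, key=lambda d: expected_loss(d, squared))
# 'best' lands at (approximately) the posterior mean, 8/12.
```

Swapping in absolute-error loss would instead pick out the posterior median, which is one way the choice of loss function carries the "economic" tradeoffs into the statistics.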

Comment author: timtyler 23 May 2010 07:01:47AM *  3 points [-]

Re: there aren't any easily definable results in the field of intelligence nor are there clear definitions.

There are pretty clear definitions: http://www.vetta.org/definitions-of-intelligence/

Comment author: marks 23 May 2010 05:00:18PM 0 points [-]

The fact that there are so many definitions and no consensus is precisely the unclarity. Shane Legg has done us all a great favor by collecting those definitions together. With that said, his definition is certainly not the standard in the field and many people still believe their separate definitions.

I think his definitions often lack an understanding of the statistical aspects of intelligence, and as such they don't give much insight into the part of AI that I and others work on.

Comment author: marks 23 May 2010 03:48:17AM 1 point [-]

I think there is a science of intelligence which (in my opinion) is closely related to computation, biology, and production functions (in the economic sense). The difficulty is that there is much debate as to what constitutes intelligence: there aren't any easily definable results in the field of intelligence nor are there clear definitions.

There is also the engineering side: this is to create an intelligence. The engineering is driven by a vague sense of what an AI should be, and one builds theories to construct concrete subproblems and give a framework for developing solutions.

Either way, this is very different from astrophysics, where one is attempting to, say, explain the motions of the heavenly spheres, which have a regularity, simplicity, and clarity lacking in any formulation of the AI problem.

I would say that AI researchers do formulate theories about how to solve particular engineering problems for AI systems, and then they test them out by programming them (hopefully). I suppose I count, and that's certainly what I and my colleagues do. Most papers in my fields of interest (machine learning and speech recognition) include an "experiments" section. I think that when you know a bit more about the actual problems AI people are solving, you'll find that quite a bit of progress has been achieved since the 1960s.
