Request for help with economic analysis related to AI forecasting

6 ESRogs 06 February 2016 01:27AM

[Cross-posted from FB]

I've got an economic question that I'm not sure how to answer.

I've been thinking about trends in AI development, and trying to get a better idea of what we should expect progress to look like going forward.

One important question is: how much do existing AI systems help with research and the development of new, more capable AI systems?

The obvious answer is, "not much." But I think of AI systems as being on a continuum from calculators on up. Surely AI researchers sometimes have to do arithmetic and other tasks that they already outsource to computers. I expect that going forward, the share of tasks that AI researchers outsource to computers will (gradually) increase. And I'd like to be able to draw a trend line. (If there's some point in the future when we can expect most of the work of AI R&D to be automated, that would be very interesting to know about!)

So I'd like to be able to measure the share of AI R&D done by computers vs humans. I'm not sure of the best way to measure this. You could try to come up with a list of tasks that AI researchers perform and just count, but you might run into trouble as the list of tasks changes over time (e.g. suppose at some point designing an AI system requires solving a bunch of integrals, and that with some later AI architecture this is no longer necessary).

What seems more promising is to abstract over the specific tasks that computers vs human researchers perform and use some aggregate measure, such as the total amount of energy consumed by the computers or the human brains, or the share of an R&D budget spent on computing infrastructure and operation vs human labor. Intuitively, if most of the resources are going towards computation, one might conclude that computers are doing most of the work.

Unfortunately I don't think that intuition is correct. Suppose AI researchers use computers to perform task X at cost C_x1, and some technological improvement enables X to be performed more cheaply at cost C_x2. Then, all else equal, the share of resources going towards computers will decrease, even though their share of tasks has stayed the same.

On the other hand, suppose there's some task Y that the researchers themselves perform at cost H_y, and some technological improvement enables computers to perform task Y more cheaply at cost C_y. After the team outsources Y to computers, the share of resources going towards computers has gone up. So it seems like it could go either way -- in some cases technological improvements will drive the share of resources spent on computers down, and in other cases they will drive it up.
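A toy budget calculation (all cost figures hypothetical) makes the two cases concrete: in the first, hardware progress shrinks the compute share even though the task allocation is unchanged; in the second, outsourcing a task grows it.

```python
def compute_share(compute_cost, labor_cost):
    """Fraction of the total R&D budget spent on computation."""
    return compute_cost / (compute_cost + labor_cost)

# Case 1: computers already perform task X; progress cuts its cost
# from C_x1 = 50 to C_x2 = 25. Tasks are unchanged, share *falls*.
labor = 100.0
share_before = compute_share(50.0, labor)
share_after = compute_share(25.0, labor)
assert share_after < share_before  # 0.2 < 0.333...

# Case 2: computers take over task Y from humans. Labor cost drops
# by H_y = 40, compute cost rises by C_y = 20, and the share *rises*.
share_before2 = compute_share(50.0, 100.0)
share_after2 = compute_share(50.0 + 20.0, 100.0 - 40.0)
assert share_after2 > share_before2  # 0.538... > 0.333...
```

Both assertions pass, so the budget-share measure by itself can't distinguish "computers got cheaper" from "computers do less of the work."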

So here's the econ part -- is there some standard economic analysis I can use here? If both machines and human labor are used in some process, and the machines are becoming both more cost effective and more capable, is there anything I can say about how the expected share of resources going to pay for the machines changes over time?

[Link] AlphaGo: Mastering the ancient game of Go with Machine Learning

14 ESRogs 27 January 2016 09:04PM

DeepMind's Go AI, AlphaGo, has beaten the European champion with a score of 5-0. A match against the top-ranked human player, Lee Se-dol, is scheduled for March.

 

Games are a great testing ground for developing smarter, more flexible algorithms that have the ability to tackle problems in ways similar to humans. Creating programs that are able to play games better than the best humans has a long history.

[...]

But one game has thwarted A.I. research thus far: the ancient game of Go.


[LINK] Deep Learning Machine Teaches Itself Chess in 72 Hours

8 ESRogs 14 September 2015 07:38PM

Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much more like humans and in an entirely different way to conventional chess engines.

Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.

The technology behind Lai’s new machine is a neural network. [...] His network consists of four layers that together examine each position on the board in three different ways.

The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. The second looks at piece-centric features such as the location of each piece on each side, while the final aspect is to map the squares that each piece attacks and defends.
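A minimal sketch of that architecture, with separate input streams for the three views of the position merged into shared layers that emit a single evaluation, might look like the following. All weights are random and the feature sizes are invented for illustration; Giraffe's actual layer sizes, features, and training procedure differ.

```python
import math
import random

random.seed(0)

def rand_matrix(n_in, n_out, scale=0.1):
    return [[random.uniform(-scale, scale) for _ in range(n_out)]
            for _ in range(n_in)]

def dense(x, w, b):
    # Fully connected layer: y[j] = sum_i x[i] * w[i][j] + b[j]
    return [sum(x[i] * w[i][j] for i in range(len(x))) + b[j]
            for j in range(len(b))]

def relu(v):
    return [max(0.0, u) for u in v]

# Three hypothetical feature groups mirroring the article's description.
global_feats = [random.random() for _ in range(10)]  # side to move, castling, piece counts...
piece_feats = [random.random() for _ in range(32)]   # per-piece locations
square_feats = [random.random() for _ in range(64)]  # attack/defend maps per square

# Each group passes through its own first layer before the streams merge.
h_g = relu(dense(global_feats, rand_matrix(10, 8), [0.0] * 8))
h_p = relu(dense(piece_feats, rand_matrix(32, 8), [0.0] * 8))
h_s = relu(dense(square_feats, rand_matrix(64, 8), [0.0] * 8))

merged = h_g + h_p + h_s  # concatenation: 24 values
hidden = relu(dense(merged, rand_matrix(24, 16), [0.0] * 16))
score = math.tanh(dense(hidden, rand_matrix(16, 1), [0.0])[0])
# tanh squashes the evaluation into (-1, 1): losing ... winning.
```

The point of the split-then-merge design is that each stream can learn features at its own natural granularity (whole-game, per-piece, per-square) before they are combined.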

[...]

Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total he generated 175 million positions in this way.

[...]

One disadvantage of Giraffe is that neural networks are much slower than other types of data processing. Lai says Giraffe takes about 10 times longer than a conventional chess engine to search the same number of positions.

But even with this disadvantage, it is competitive. “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

[...]

Ref: Giraffe: Using Deep Reinforcement Learning to Play Chess, arxiv.org/abs/1509.01549

http://www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/

 

H/T http://lesswrong.com/user/Qiaochu_Yuan

[Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim

7 ESRogs 19 August 2015 06:37AM

This seems significant:

An almost fully-formed human brain has been grown in a lab for the first time, claim scientists from Ohio State University. The team behind the feat hope the brain could transform our understanding of neurological disease.

Though not conscious, the miniature brain, which resembles that of a five-week-old foetus, could potentially be useful for scientists who want to study the progression of developmental diseases.

...

The brain, which is about the size of a pencil eraser, is engineered from adult human skin cells and is the most complete human brain model yet developed.

...

Previous attempts at growing whole brains have at best achieved mini-organs that resemble those of nine-week-old foetuses, although these “cerebral organoids” were not complete and only contained certain aspects of the brain. “We have grown the entire brain from the get-go,” said Anand.

...

The ethical concerns were non-existent, said Anand. “We don’t have any sensory stimuli entering the brain. This brain is not thinking in any way.”

...

If the team’s claims prove true, the technique could revolutionise personalised medicine. “If you have an inherited disease, for example, you could give us a sample of skin cells, we could make a brain and then ask what’s going on,” said Anand.

...

For now, the team say they are focusing on using the brain for military research, to understand the effect of post traumatic stress disorder and traumatic brain injuries.

http://www.theguardian.com/science/2015/aug/18/first-almost-fully-formed-human-brain-grown-in-lab-researchers-claim

 

 

[Link] Neural networks trained on expert Go games have just made a major leap

15 ESRogs 02 January 2015 03:48PM

From the arXiv:

Move Evaluation in Go Using Deep Convolutional Neural Networks

Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver

The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function. In this paper we investigate whether deep convolutional networks can be used to directly represent and learn this knowledge. We train a large 12-layer convolutional neural network by supervised learning from a database of human professional games. The network correctly predicts the expert move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GnuGo in 97% of games, and matched the performance of a state-of-the-art Monte-Carlo tree search that simulates a million positions per move.

This approach looks like it could be combined with MCTS. Here's their conclusion:

In this work, we showed that large deep convolutional neural networks can predict the next move made by Go experts with an accuracy that exceeds previous methods by a large margin, approximately matching human performance. Furthermore, this predictive accuracy translates into much stronger move evaluation and playing strength than has previously been possible. Without any search, the network is able to outperform traditional search based programs such as GnuGo, and compete with state-of-the-art MCTS programs such as Pachi and Fuego.

In Figure 2 we present a sample game played by the 12-layer CNN (with no search) versus Fuego (searching 100K rollouts per move) which was won by the neural network player. It is clear that the neural network has implicitly understood many sophisticated aspects of Go, including good shape (patterns that maximise long term effectiveness of stones), Fuseki (opening sequences), Joseki (corner patterns), Tesuji (tactical patterns), Ko fights (intricate tactical battles involving repeated recapture of the same stones), territory (ownership of points), and influence (long-term potential for territory). It is remarkable that a single, unified, straightforward architecture can master these elements of the game to such a degree, and without any explicit lookahead.

On the other hand, we note that the network still has weaknesses: notably it sometimes fails to understand the global picture, behaving as if the life and death status of large groups has been incorrectly assessed. Interestingly, it is precisely these global aspects of the game for which Monte-Carlo search excels, suggesting that these two techniques may be largely complementary. We have provided a preliminary proof-of-concept that MCTS and deep neural networks may be combined effectively. It appears that we now have two core elements that scale effectively with increased computational resource: scalable planning, using Monte-Carlo search; and scalable evaluation functions, using deep neural networks. In the future, as parallel computation units such as GPUs continue to increase in performance, we believe that this trajectory of research will lead to considerably stronger programs than are currently possible.
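The proposed combination, using the network's move probabilities as priors to bias tree search, can be sketched with a prior-weighted selection rule of the kind later systems adopted. A minimal illustration with hypothetical statistics for one search node (not the paper's actual method):

```python
import math

def select_score(q, prior, parent_visits, child_visits, c=1.0):
    """Selection score for one candidate move: the exploitation term q
    (mean value from rollouts) plus an exploration bonus scaled by the
    network's prior probability and shrinking with visit count."""
    u = c * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

# Hypothetical stats for three candidate moves at a single node.
moves = {
    "A": dict(q=0.52, prior=0.60, visits=40),
    "B": dict(q=0.55, prior=0.05, visits=10),
    "C": dict(q=0.30, prior=0.35, visits=5),
}
parent_visits = sum(m["visits"] for m in moves.values())  # 55

best = max(moves, key=lambda k: select_score(
    moves[k]["q"], moves[k]["prior"], parent_visits, moves[k]["visits"]))
# "C" wins here: a high network prior plus few visits outweighs its
# currently low rollout value, so the search explores it next.
```

The network thus steers the search toward moves that "look like" expert play, while Monte-Carlo rollouts supply the global life-and-death judgment the network lacks.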

H/T: Ken Regan

Edit -- see also: Teaching Deep Convolutional Neural Networks to Play Go (also published to the arXiv in December 2014), and Why Neural Networks Look Set to Thrash the Best Human Go Players for the First Time (MIT Technology Review article)

[LINK] Attention Schema Theory of Consciousness

3 ESRogs 25 August 2013 10:30PM

I found this theory pretty interesting, and it reminded me of Gary Drescher's explanation of consciousness in Good and Real:

How the light gets out

Consciousness is the ‘hard problem’, the mystery that confounds scientists and philosophers. Has a new theory cracked it?

[...]

Attention requires control. In the modern study of robotics there is something called control theory, and it teaches us that, if a machine such as a brain is to control something, it helps to have an internal model of that thing. Think of a military general with his model armies arrayed on a map: they provide a simple but useful representation — not always perfectly accurate, but close enough to help formulate strategy. Likewise, to control its own state of attention, the brain needs a constantly updated simulation or model of that state. Like the general’s toy armies, the model will be schematic and short on detail. The brain will attribute a property to itself and that property will be a simplified proxy for attention. It won’t be precisely accurate, but it will convey useful information. What exactly is that property? When it is paying attention to thing X, we know that the brain usually attributes an experience of X to itself — the property of being conscious, or aware, of something. Why? Because that attribution helps to keep track of the ever-changing focus of attention.

I call this the ‘attention schema theory’. It has a very simple idea at its heart: that consciousness is a schematic model of one’s state of attention. Early in evolution, perhaps hundreds of millions of years ago, brains evolved a specific set of computations to construct that model. At that point, ‘I am aware of X’ entered their repertoire of possible computations.

- Princeton neuroscientist, Michael Graziano, writing in Aeon Magazine.

[LINK] Well-written article on the Future of Humanity Institute and Existential Risk

16 ESRogs 02 March 2013 12:36PM

This introduction to the concept of existential risk is perhaps the best article on the subject I've read that's aimed at a general audience. It covers a lot of ground in a way that I found engaging and that I think would carry along many intellectually curious readers who haven't yet encountered all of the prerequisite ideas.

 

Omens: When we peer into the fog of the deep future what do we see – human extinction or a future among the stars?

Sometimes, when you dig into the Earth, past its surface and into the crustal layers, omens appear. In 1676, Oxford professor Robert Plot was putting the final touches on his masterwork, The Natural History of Oxfordshire, when he received a strange gift from a friend. The gift was a fossil, a chipped-off section of bone dug from a local quarry of limestone. Plot recognised it as a femur at once, but he was puzzled by its extraordinary size. The fossil was only a fragment, the knobby end of the original thigh bone, but it weighed more than 20 lbs (nine kilos). It was so massive that Plot thought it belonged to a giant human, a victim of the Biblical flood. He was wrong, of course, but he had the conceptual contours nailed. The bone did come from a species lost to time; a species vanished by a prehistoric catastrophe. Only it wasn’t a giant. It was a Megalosaurus, a feathered carnivore from the Middle Jurassic.

Plot’s fossil was the first dinosaur bone to appear in the scientific literature, but many have followed it, out of the rocky depths and onto museum pedestals, where today they stand erect, symbols of a radical and haunting notion: a set of wildly different creatures once ruled this Earth, until something mysterious ripped them clean out of existence.

[...]

There are good reasons for any species to think darkly of its own extinction. Ninety-nine percent of the species that have lived on Earth have gone extinct, including more than five tool-using hominids.

[...]

Bostrom isn’t too concerned about extinction risks from nature. Not even cosmic risks worry him much, which is surprising, because our starry universe is a dangerous place.

[discussion of threats of supernovae, asteroid impacts, supervolcanoes, nuclear weapons, bioterrorism ...]

These risks are easy to imagine. We can make them out on the horizon, because they stem from foreseeable extensions of current technology. [...] Bostrom’s basic intellectual project is to reach into the epistemological fog of the future, to feel around for potential threats. It’s a project that is going to be with us for a long time, until — if — we reach technological maturity, by inventing and surviving all existentially dangerous technologies.

There is one such technology that Bostrom has been thinking about a lot lately. Early last year, he began assembling notes for a new book, a survey of near-term existential risks. After a few months of writing, he noticed one chapter had grown large enough to become its own book. ‘I had a chunk of the manuscript in early draft form, and it had this chapter on risks arising from research into artificial intelligence,’ he told me. ‘As time went on, that chapter grew, so I lifted it over into a different document and began there instead.’

[very good introduction to the threat of superintelligent AI, touching on the alienness of potential AI goals, the complexity of specifying human value, the dangers of even Oracle AI, and techniques for keeping an AI in a box, with the key quotes including, "To understand why an AI might be dangerous, you have to avoid anthropomorphising it." and, "The problem is you are building a very powerful, very intelligent system that is your enemy, and you are putting it in a cage." ...]

One night, over dinner, Bostrom and I discussed the Curiosity Rover, the robot geologist that NASA recently sent to Mars to search for signs that the red planet once harbored life. The Curiosity Rover is one of the most advanced robots ever built by humans. It functions a bit like the Terminator. It uses a state of the art artificial intelligence program to scan the Martian desert for rocks that suit its scientific goals. After selecting a suitable target, the rover vaporises it with a laser, in order to determine its chemical makeup. Bostrom told me he hopes that Curiosity fails in its mission, but not for the reason you might think.

It turns out that Earth’s crust is not our only source of omens about the future. There are others to consider, including a cosmic omen, a riddle written into the lifeless stars that illuminate our skies. But to glimpse this omen, you first have to grasp the full scope of human potential, the enormity of the spatiotemporal canvas our species has to work with. You have to understand what Henry David Thoreau meant when he wrote, in Walden (1854), ‘These may be but the spring months in the life of the race.’ You have to step into deep time and look hard at the horizon, where you can glimpse human futures that extend for trillions of years.

[introduction to the idea of the Great Filter, and also that fighting existential risk is about saving all future humans and not just those alive at the time of any particular potential catastrophe ...]

As Bostrom and I strolled among the skeletons at the Museum of Natural History in Oxford, we looked backward across another abyss of time. We were getting ready to leave for lunch, when we finally came upon the Megalosaurus, standing stiffly behind display glass. It was a partial skeleton, made of shattered bone fragments, like the chipped femur that found its way into Robert Plot’s hands not far from here. As we leaned in to inspect the ancient animal’s remnants, I asked Bostrom about his approach to philosophy. How did he end up studying a subject as morbid and peculiar as human extinction?

He told me that when he was younger, he was more interested in the traditional philosophical questions. He wanted to develop a basic understanding of the world and its fundamentals. He wanted to know the nature of being, the intricacies of logic, and the secrets of the good life.

‘But then there was this transition, where it gradually dawned on me that not all philosophical questions are equally urgent,’ he said. ‘Some of them have been with us for thousands of years. It’s unlikely that we are going to make serious progress on them in the next ten. That realisation refocused me on research that can make a difference right now. It helped me to understand that philosophy has a time limit.’

H/T - gwern

The Center for Sustainable Nanotechnology

4 ESRogs 26 February 2013 06:55AM

Those concerned about existential risks may be interested to learn that, as of last September, the National Science Foundation is funding a Center for Sustainable Nanotechnology. Though I haven't yet seen them explicitly characterize nanotechnology as an existential threat to humanity (they seem mostly concerned with the potential hazards of nanoparticle pollution, rather than any kind of grey-goo scenario), I was still pleased to discover that this group exists.

Here is how they describe themselves on their main page:

The Center for Sustainable Nanotechnology is a multi-institutional partnership devoted to investigating the fundamental molecular mechanisms by which nanoparticles interact with biological systems.

...

While nanoparticles have a great potential to improve our society, relatively little is yet known about how nanoparticles interact with organisms, and how the unintentional release of nanoparticles from consumer or industrial products might impact the environment.

The goal of the Center for Sustainable Nanotechnology is to develop and utilize a molecular-level understanding of nanomaterial-biological interactions to enable development of sustainable, societally beneficial nanotechnologies. In effect, we aim to understand the molecular-level chemical and physical principles that govern how nanoparticles interact with living systems, in order to provide the scientific foundations that are needed to ensure that continued developments in nanotechnology can take place with the minimal environmental footprint and maximum benefit to society.

...

Funding for the CSN comes from the National Science Foundation Division of Chemistry through the Centers for Chemical Innovation Program.

And on their public outreach website:

Our “center” is actually a group of people who care about our environment and are doing collaborative research to help ensure that our planet will be habitable hundreds of years from now – in other words, that the things we do every day as humans will be sustainable in the long run.

Now you’re probably wondering what that has to do with nanotechnology, right? Well, it turns out that nanoparticles – chunks of materials around 10,000 times smaller than the width of a human hair – may provide new and important solutions to many of the world’s problems. For example, new kinds of nanoparticle-based solar cells are being made that could, in the future, be painted onto the sides of buildings.

...

What’s the (potential) problem? Well, these tiny little chunks of materials are so small that they can move around and do things in ways that we don’t fully understand. For example, really tiny particles could potentially be absorbed through skin. In the environment, nanoparticles might be able to be absorbed into insects or fish that are at the bottom of the food chain for larger animals, including us.

Before nanoparticles get incorporated into consumer products on a large scale, it’s our responsibility to figure out what the downsides could be if nanoparticles were accidentally released into the environment. However, this is a huge challenge because nanoparticles can be made out of different stuff and come in many different sizes, shapes, and even internal structures.

Because there are so many different types of nanoparticles that could be used in the future, it’s not practical to do a lot of testing of each kind. Instead, the people within our center are working to understand what the “rules of behavior” are for nanoparticles in general. If we understand the rules, then we should be able to predict what different types of nanoparticles might do, and we should be able to use this information to design and make new, safer nanoparticles.

In the end, it’s all about people working together, using science to create a better, safer, more sustainable world. We hope you will join us!