Comment author: turchin 04 July 2017 10:27:44AM *  1 point [-]

Sure, but it is not easy to prove in each case. For example, if an AI doubles its hardware speed and buys twice as much hardware, its total productivity would grow four times. But we can't say that the first improvement was done because of the second.

However, if it gets the idea that improving hardware is useful, that is a recursive act, as the idea helps further improvements. Moreover, it opens up a field of other ideas, like the improvement of improvement. That is why I say that true recursivity happens at the level of ideas, not at the hardware level.

Comment author: whpearson 04 July 2017 04:39:53PM 0 points [-]

As the resident LessWrong "won't someone think of the hardware" person, this comment rubs me up the wrong way a fair bit.

First, there is no well-defined thing called hardware speed. It might refer to various things: clock speed, operations per second, memory bandwidth, memory response times. Depending on what your task is, your productivity might be bottlenecked by one of these things and not the others. Some things, like memory response times, are limited by the speed of signals traversing the motherboard and are hard to improve while we still have the separation of memory and processing.

Getting twice the hardware might give less than twice the improvement. If there is some serial part of the process, then Amdahl's law comes into effect. If the different nodes need to make sure they have a consistent view of something, you need to add latency so that a sufficient number of them can agree on the state of the data via a consensus algorithm.
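
To make the Amdahl's law point concrete, here is a minimal Python sketch (the function name and the example parallel fractions are my own, purely illustrative):

    # Toy illustration of Amdahl's law: the speedup from n_units times the
    # hardware when only parallel_fraction of the work can be parallelised.
    def amdahl_speedup(n_units, parallel_fraction):
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_units)

    # Doubling the hardware only doubles throughput when nothing is serial.
    for p in (1.0, 0.95, 0.75, 0.5):
        print("parallel fraction %.2f: 2x hardware -> %.2fx speedup"
              % (p, amdahl_speedup(2, p)))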

Your productivity might be bottlenecked by external factors, not processing power at all (e.g. not getting data in fast enough). This is my main beef with the sped-up-people thought experiment: the world is moving glacially for them and data is coming in at a trickle.

If you are searching a space and you add more compute, the new compute might be searching less promising areas, so you might not get twice the productivity.

I really would not expect twice the compute to lead to twice the productivity, except in the most embarrassingly parallel situations, like computing hashes.

I think your broader point is weakened, but not by much. We have lots of problems trying to distribute work and collaborate on problems, so human intelligence is not purely additive either.

Comment author: whpearson 03 July 2017 08:51:12PM 1 point [-]

I've decided to work on a book while I also work on the computer architecture. It pulls together a bunch of threads of thinking I've had around the subject of autonomy. Below is the TL;DR. If lots of people are interested I can try and blogify it. If not many people are, I might seek your opinions on drafts.


We are entering an age where questions of autonomy become paramount. We have created computers with a certain amount of autonomy and are exploring how to give more autonomy to them. We simultaneously think that autonomous computers are overhyped and that autonomous computers (AI) could one day take over the earth.

The disconnect in views is due to a choice made early in computing's history: a programmer or administrator is required to look after each computer by directly installing programs and by stopping and removing bad ones. The people who are worried about AI are worried that computers will become more autonomous and no longer need an administrator. People embedded in computing cannot see how this would happen, as computers as they stand still require someone to perform the administrative function, and we are not moving towards administrative autonomy.

Can we build computer systems that are administratively autonomous? Administration can be seen as a resource allocation problem, with an explicit administrator serving the same role as a dictator in a command economy. An alternative computer architecture is presented that relies on a market-based allocation of resources to programs, driven by human feedback. This architecture, if realized, would allow programs to experiment with new programs in the machine and would lead to a more efficient, adaptive computer that doesn't need an explicit administrator. Instead it would be trained by a human.
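
As a flavour of the idea only (this is my own toy Python illustration, not the design from the book; the names, the bidding rule, and the feedback rule are all invented), programs bid for a resource budget and human feedback tops up or drains their balances:

    # Toy sketch of market-based resource allocation driven by human feedback.
    # Not the book's design; the bidding and feedback rules are illustrative.
    class Program:
        def __init__(self, name, balance=100.0):
            self.name = name
            self.balance = balance

        def bid(self):
            return 0.1 * self.balance   # naive policy: bid 10% of current balance

    def allocate(programs, total_cpu_ms):
        """Split a CPU budget in proportion to each program's bid."""
        bids = {p.name: p.bid() for p in programs}
        total = sum(bids.values()) or 1.0
        for p in programs:
            p.balance -= bids[p.name]   # programs pay for their slice
        return {name: total_cpu_ms * b / total for name, b in bids.items()}

    def human_feedback(program, reward):
        program.balance += reward       # praise tops up a balance, complaints drain it

    programs = [Program("indexer"), Program("spam_filter")]
    print(allocate(programs, total_cpu_ms=1000))
    human_feedback(programs[1], reward=50.0)   # the user liked the spam filter
    print(allocate(programs, total_cpu_ms=1000))

The toy skips everything interesting (how programs earn, spawn new programs, and are priced), which is presumably where the real work lies.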

However, making computers more autonomous can either lead to more autonomy for each of us, with the computers helping us, or to computers becoming completely autonomous with us at their mercy. Ensuring the correct level of autonomy in the relationship between computers and people should be a top priority.

The question of more autonomy for humans is also a tricky one. On the one hand it would allow us to explore the stars and safeguard us from corrupt powers. On the other hand, more autonomy for humans might lead to more wars and existential risks, due to the increase in the destructive power of individuals and the decrease in interdependence.

Autonomy is currently ill-defined. It is not an all-or-nothing affair. During this discussion, what we mean by autonomy will be broken down, so that we have a better way of discussing it and of charting our path to the future.

Comment author: madhatter 01 July 2017 02:11:32AM 0 points [-]

Any value in working on a website with resources on the necessary prerequisites for AI safety research? The best books and papers to read, etc. And maybe an overview of the key problems and results? Perhaps later that could lead to an ebook or online course.

Comment author: whpearson 01 July 2017 09:44:21AM 0 points [-]

Did you see this thread on making an online course? It is probably a place to coordinate this sort of thing.

Comment author: whpearson 30 June 2017 07:29:49AM 3 points [-]

I think that Bayes with resource constraints looks a lot like toolbox-ism.

Take, for example, the problem of reversing MD5 hashes. You could do Bayesian updates on your probabilities of which original string maps to which hash, but computationally there is no uncertainty, so there is no point storing probabilities and you strip them out. Or you could just download a rainbow table, use that, and not have to compute the hashes yourself at all.
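
To make the "no uncertainty" point concrete, a minimal Python sketch using a plain precomputed lookup (a real rainbow table is a space-efficient version of the same precomputation; the candidate strings are made up):

    import hashlib

    # Precompute hash -> original string once over a candidate set.
    candidates = ["password", "letmein", "hunter2", "correcthorsebatterystaple"]
    table = {hashlib.md5(s.encode()).hexdigest(): s for s in candidates}

    # Reversal is then a lookup, not an inference problem: no probabilities needed.
    target = hashlib.md5(b"hunter2").hexdigest()
    print(table.get(target, "not in table"))   # -> hunter2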

Or consider computing the joint probability of two sets of data that come in at different times. Say you always get event A at time T, but what happened to B at T can come in immediately, or days to weeks later. You might not be able to store all the information about A if it arrives quickly (e.g. network interface activity), so you have to choose which data to drop. Perhaps you age it out after a certain time, or drop things that are usual, but either way you can no longer do a proper Bayesian update when that B comes in. Bayes doesn't say what you should drop. You should drop the unimportant stuff, but figuring out what is unimportant a priori seems implausible.
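
A toy Python sketch of the dilemma (the buffer, the age limit, and the eviction rule are all my own illustrative choices): A-events are kept in an age-limited buffer, so a late B may arrive after the A it should be joined with has already been dropped.

    from collections import deque

    MAX_AGE = 60.0      # seconds to keep an A-event around (arbitrary choice)
    a_buffer = deque()  # (timestamp, a_event) pairs

    def record_a(timestamp, a_event):
        a_buffer.append((timestamp, a_event))
        # Age out old events so the buffer stays bounded; Bayes itself
        # gives no guidance on which events are safe to drop.
        while a_buffer and timestamp - a_buffer[0][0] > MAX_AGE:
            a_buffer.popleft()

    def join_late_b(timestamp, b_event):
        """Try to pair a late-arriving B with the A recorded at the same time."""
        for t, a_event in a_buffer:
            if t == timestamp:
                return (a_event, b_event)  # joint observation available
        return None                        # A already evicted: no proper update possible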

So I still use Bayes on occasion, when it makes sense, and I still think AI could be highly risky. But I think I'm a toolbox-ist.

Comment author: turchin 20 June 2017 09:03:51PM 0 points [-]

If it makes sense to continue adding letters to different risks, l-risks could be identified, that is, risks that kill all life on Earth. The main difference for us humans is that there is zero chance of a new civilisation arising on Earth in that case.

But the term y-risks is free. What could it be?

Comment author: whpearson 20 June 2017 10:56:03PM 1 point [-]

The risk that we think about risks too much and never do anything interesting?

Comment author: whpearson 20 June 2017 10:50:23PM 3 points [-]

Interesting to see another future philosophy.

I think my own rough future philosophy is making sure that the future has an increase in autonomy for humanity. I think it transforms into S-risk reduction, assuming that autonomous people will choose to reduce their suffering and their potential future suffering if they can. It also transforms the tricky philosophical question of defining suffering into the tricky philosophical question of defining autonomy, which might be the preferable trade.

I think I prefer the autonomy increase because I do not have to try and predict the emotional reaction of humans/agents to events. People could claim immense suffering from seeing me wearing bright/clashing clothing, but if I leave them and their physical environment alone I'm not decreasing their autonomy. It also suggests positive things to do (give directly) rather than just the avoidance of low-autonomy outcomes.

There is however tension between the individual increase in autonomy and the increase in autonomy of the society.

Comment author: Thomas 20 June 2017 06:25:12AM 0 points [-]

Almost every flying machine innovator was quite public about his goal. And there were a lot of them. Still, a dark horse won.

Here, the situation is quite similar, except that a dark horse victory is not very likely.

If Google is unable to improve its Deep/Alpha product line into an effective AGI machine in less than 10 years, they are either utterly incompetent (which they aren't) or this NN paradigm isn't strong enough. Which sounds unlikely, too.

Others have an opportunity window less than 10 years wide.

I am not too excited about the CPU/RAM requirements of this NN/ML style of racing. But it might be just good enough.

Comment author: whpearson 20 June 2017 08:07:48AM 0 points [-]

I think NNs are strong enough for ML; I just think that ML is the wrong paradigm. It is at best a partial answer: it does not capture a class of things that humans do that I think is important.

Mathematically, ML is trying to find a function from input to output. There are things we do in our language processing that do not fall into that frame. A couple of examples:

  1. Attentional phrases: "This is important, pay attention" means that you should devote more mental energy to processing/learning whatever is happening around you. To learn to process this kind of phrase, you would have to be able to map input to some form of attention control. This form of attention control has not been explored in ML; it is assumed that if data is being presented to the algorithm, it is important data. (A toy sketch of this appears after the list.)

  2. Language about language: "The word for word in French is mot" changes not only the internal state, but also the mapping of input to internal state (and the mapping of input to the mapping of input to internal state). Processing it and phrases like it would allow you to process the phrase "le mot à mot en allemand est wort". It is akin to compiling a new compiler.
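
To make the first example slightly more concrete, here is a toy Python sketch of what "input mapping to attention control" might look like (entirely my own framing, not an existing ML mechanism; the cue phrase and multiplier are invented):

    # A cue phrase changes how strongly subsequent input is learned from,
    # i.e. the input controls future learning rather than producing an output.
    ATTENTION_CUES = {"this is important, pay attention": 3.0}

    def learn(sentence, rate):
        print("learning from %r at rate %.2f" % (sentence, rate))

    def process_stream(sentences, base_learning_rate=0.1):
        multiplier = 1.0
        for sentence in sentences:
            cue = ATTENTION_CUES.get(sentence.lower().strip(" ,."))
            if cue is not None:
                multiplier = cue   # the sentence adjusts future learning; it isn't "data"
                continue
            learn(sentence, base_learning_rate * multiplier)

    process_stream(["the sky is blue",
                    "This is important, pay attention.",
                    "the stove is hot"])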

You could maybe approximate both of these tasks with a crazy hotchpotch of ML systems. But I think that way is a blind alley.

Learning both of these abilities would involve some ML. However, language is weird and we have not scratched the surface of how it interacts with learning.

I'd put some money on AGI being pretty different to current ML.

Comment author: turchin 19 June 2017 10:08:58PM 0 points [-]

Sometimes I think that there are fewer people who explicitly work on universal AGI than people who work on AI safety.

Comment author: whpearson 19 June 2017 10:56:20PM *  0 points [-]

I've got an article brewing on the incentives for people not to work on AGI.

Company incentives:

  1. They are making plenty of money with normal AI/dumb computers; no need to go fancy.
  2. AGI is hard to monetise in the way companies are used to. There is no need for an upgrade cycle: the system maintains and upgrades itself. No expensive training is required either; it trains itself to understand its users. Sell a person an AGI and you never sell them software again, versus the SaaS model.
  3. For internal software, companies optimise for simple systems that people can understand and that many people can maintain. There is a high activation energy required to go from simple software that people maintain to a complex system that can maintain itself.
  4. Legal minefield. Who is responsible for an AGI's actions, the company or the user? This is solved if it can be sold as intelligence augmentation, shipped in a very raw state with little knowledge, and trained/given more responsibility by the user.

Programmer incentives:

  1. Programmers don't want to program themselves out of a job.
  2. Programmers also optimize for simple things that they can maintain/understand.

I'm guessing that if it ever stops being easy to make money as a software company, the other incentives might get overridden.

Comment author: turchin 19 June 2017 11:20:59AM 1 point [-]

How many teams are working on AGI in the world now? Do we have a list? (I asked already on facebook, but maybe I could get more input here.) https://www.facebook.com/groups/aisafety/permalink/849566641874118/

Comment author: whpearson 19 June 2017 09:40:07PM 0 points [-]

I would say not many at all! They might be working on something they call AGI, but I think we need a change in viewpoint before we can start making progress towards the important general aspect of it.

I think the closest people are the transfer learning people; they are at least trying something different. I think we need to solve the resource allocation problem first, then we can layer ML/language inside it. Nothing is truly general: general intelligences can devote resources to solving different problems at different times, and can get knowledge of how to solve problems from other general intelligences.

Comment author: whpearson 16 June 2017 09:43:43AM *  4 points [-]

Any chance it could be called AGI Safety instead of AI safety? I think that getting us to consistently use that terminology would help people to know that we are worrying about something greater than current deep learning systems and other narrow AI (although investigating safety in these systems is a good stepping stone to the AGI work).

I'll help out how I can. I think these sorts of meta approaches are a great idea!
