
To the speed section, you might want to add examples of parallel learning: parallelized learning of robot arm manipulation, or parallel playing of Atari games. Both are (much) faster in terms of wall-clock time and can also be more sample- and resource-efficient (A3C, with its multiple independent agents, can actually be more sample-efficient than DQN, and it doesn't need to waste a great deal of RAM and computation on the experience replay buffer).
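
For concreteness, here is a minimal, purely illustrative sketch of the structural idea behind that kind of parallel learning. This is not the actual A3C code; the environment, reward, and update rule below are made-up placeholders. The point is only the structure: several worker processes each interact with their own copy of an environment and apply asynchronous updates to shared parameters, with no replay buffer anywhere.

```python
# Toy sketch of parallel, asynchronous learning (placeholder environment and
# update rule, not a real RL algorithm): several workers act and learn at the
# same time, writing into shared parameters instead of filling a replay buffer.
import multiprocessing as mp
import random

N_WORKERS = 4            # assumed number of parallel actor-learners
STEPS_PER_WORKER = 2000
STEP_SIZE = 0.05
TARGET = 0.7             # hidden optimum the toy "environment" rewards

def reward(candidate):
    """Toy environment: higher reward the closer the candidate is to TARGET."""
    return -(candidate - TARGET) ** 2

def worker(theta, lock, seed):
    rng = random.Random(seed)
    for _ in range(STEPS_PER_WORKER):
        # Each worker perturbs the current shared parameter and evaluates it
        # in its own copy of the environment (this is the parallel part).
        candidate = theta.value + rng.gauss(0.0, 0.1)
        if reward(candidate) > reward(theta.value):
            # Asynchronous, lock-protected update to the shared parameter;
            # nothing is stored for later replay.
            with lock:
                theta.value += STEP_SIZE * (candidate - theta.value)

if __name__ == "__main__":
    theta = mp.Value("d", 0.0)   # shared parameters (a single scalar here)
    lock = mp.Lock()
    procs = [mp.Process(target=worker, args=(theta, lock, s)) for s in range(N_WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("learned parameter:", round(theta.value, 3))  # ends up near TARGET
```

Wall-clock time scales with the number of workers acting at once, and memory stays flat because no experience has to be kept around for replay.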

The premise this article starts with is wrong. The argument goes that AIs can't take over the world because they can't predict things much better than humans can; or, conversely, that they will be able to take over because they can predict much better than humans.

Well, so what if they can predict the future better? That's certainly one possible advantage of AI, but it's far from the only one. My greatest fear/hope for AI is that it will be able to design technology much better than humans can. Humans didn't evolve to be engineers or computer programmers; it's really just an accident that we are capable of it at all. Humans have such a hard time designing complex systems, keeping track of so many different things in our heads, and so on. Already these jobs are restricted to unusually intelligent people.

I think there are many possible optimizations to the mind that would improve performance at these kinds of tasks. There are rare humans who are very good at them, which shows that human brains aren't anywhere near the peak. An AI optimized for such tasks will be able to design technologies we can't even dream of. We could theoretically build nanotechnology today, but there are so many interacting parts and complexities that humans are simply unable to manage it. The internet runs on so much buggy software that it could probably be pwned in a weekend by a sufficiently powerful programming AI.

And the same is perhaps true of designing better AI algorithms: an AI optimized for AI research would be much better at it than humans are.

Well so what if they can predict the future better? That's certainly one possible advantage of AI, but it's far from the only one. My greatest fear/hope of AI is that it will be able to design technology much better than humans.

The way I think of it, designing technology is a special case of prediction. E.g. to design a steam engine, you need to be able to predict how steam behaves in different conditions and whether, given some candidate design, the pressure from the steam will be transformed into useful work or not.

[-] gjm · 7y · 80

designing technology is a special case of prediction

It's possible to be very good at prediction but still rather bad at design. Suppose you have a black box that does physics simulations with perfect accuracy. Then you can predict exactly what will happen if you build any given thing, by asking the black box. But it won't, of itself, give you ideas about what things to ask it about, or understanding of why it produces the results it does beyond "that's how the physics works out".

(To be good at design you do, I think, need to be pretty good at prediction.)
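
To make that gap concrete, here is a toy sketch: even with a perfect predictor in hand, design still needs a separate process that proposes candidates and decides which questions to put to the predictor. (The "simulator" below is just a stand-in objective function made up for illustration, not a real physics engine.)

```python
# Illustrative only: a perfectly accurate "predictor" (simulate) plus a
# separate search process that generates candidate designs. The predictor
# answers "what happens if I build this?", but deciding which designs to
# ask about has to come from somewhere else.
import random

def simulate(design):
    """Stand-in for the black-box physics oracle: scores how well a design performs.
    (An arbitrary made-up objective, not real physics.)"""
    pressure, valve = design
    return -(pressure - 3.2) ** 2 - (valve - 0.8) ** 2

def propose_candidate(rng):
    """The 'design' side: something has to generate designs worth asking about."""
    return (rng.uniform(0, 10), rng.uniform(0, 1))

rng = random.Random(0)
best_design, best_score = None, float("-inf")
for _ in range(10_000):
    candidate = propose_candidate(rng)
    score = simulate(candidate)      # the prediction step: perfectly accurate
    if score > best_score:
        best_design, best_score = candidate, score

print("best design found:", best_design)
```

Random search suffices here only because the design space is two numbers; the harder the candidate-generation problem gets, the less a perfect predictor alone buys you.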

I remember a paper from an AI conference just a couple of weeks ago. Researchers asked an AI to design a circuit board, and it included a few components that weren't connected to the circuit at all, yet they still assisted in the circuit's functioning.

[-] gjm · 7y · 30

I think you're thinking of this. Would you like to be more explicit about its application here?

(To be good at design you do, I think, need to be pretty good at prediction.)

Then this whole endeavour is doomed, because part of the point of designing AGI is that we don't know what it'll do.

[-] gjm · 7y · 40

Those who speak of FAI generally understand by it that we should be able to predict various things about what an AI will do: e.g., that it will not bring about a future in which all that we hold dear is destroyed.

Clearly that's difficult. It may be impossible. But your objection seems to apply equally to designing a chess-playing program with the intention that it will play much better chess than its creator, which is a thing that has been done successfully many times.

You can design things based on a priori prediction, but you don't have to in many cases...you can also use trial and error instead.

But AI taking over isn't the negative outcome we are trying to avoid...we are trying to avoid takeover by AIs that are badly misaligned with our values. What's the problem with an AI that runs complex technology in accordance with our values, better than us?

No it's not necessarily a negative outcome. I think it could go both ways, which is why I said it was "my greatest fear/hope".

If you're in the field of Computer Science and you've got a PhD in ML/AI, you can expect hefty (and soon to be heftier) salaries. I think it is certainly possible.