
A voice tells me that we're out of time. The future of the world will now be decided at DeepMind, or by some other group at their level.

You should probably stop listening to random voices.


More seriously, do you want to make a concrete bet on something?

How much are you willing to lose?

Let's say $100, but the amount is largely symbolic. The function of the bet is to clarify what, specifically, you are worried about. I am happy to bet less -- whatever is comfortable.

Wake up! In three days, that AI went from knowing nothing to comprehensively beating an earlier AI that had been trained on a distillation of the best human experience. Do you think there's a force in the world that can stand against that kind of strategic intelligence?
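
And the "knowing nothing" part is literal: per the paper, the training loop never sees a human game, only its own. Here is a minimal sketch of the shape of that loop, with every name an illustrative placeholder -- this is not DeepMind's code, just the algorithm's skeleton:

```python
import random

# Schematic of the tabula-rasa self-play loop described in the AlphaGo Zero
# paper. Every function below is a stand-in (the real system uses a deep
# residual network, MCTS with ~1,600 simulations per move, and large-scale
# parallel self-play); only the overall shape is faithful.

def initial_network():
    # Stand-in for a policy+value network with randomly initialised weights.
    return {"weights": [random.random() for _ in range(10)]}

def mcts_move(network, state):
    # Stand-in for Monte Carlo tree search guided by the network's policy
    # prior and value estimate; here it just picks a random legal move.
    move = random.choice(state["legal_moves"])
    search_policy = {move: 1.0}  # stand-in for the MCTS visit-count distribution
    return move, search_policy

def play_one_game(network):
    # Self-play: the same network plays both sides; record training examples.
    state = {"legal_moves": list(range(361)), "moves_played": 0}
    examples = []
    while state["moves_played"] < 10:          # stand-in for "until game over"
        move, search_policy = mcts_move(network, state)
        examples.append((dict(state), search_policy))
        state["moves_played"] += 1
    outcome = random.choice([+1, -1])          # stand-in for the game result
    # Label every recorded position with the eventual outcome.
    return [(s, p, outcome) for (s, p) in examples]

def train_step(network, examples):
    # Stand-in for a gradient step pushing the network's policy toward the
    # MCTS search policies and its value estimate toward the game outcomes.
    return network

network = initial_network()
for iteration in range(3):                     # the real run: ~3 days of this
    examples = play_one_game(network)
    network = train_step(network, examples)
```

There is no human data anywhere in that loop; the network bootstraps entirely from games against itself.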

So, a concrete bet then? What specifically are you worried about? In the form of a falsifiable claim, please.


edit: I am trying to make you feel better, the real way. The empiricist way.

A brief reply.

Strategy is nothing without knowledge of the terrain.

Knowledge of the terrain might be hard to get reliably

Therefore there might be some window of time between AGI being developed and it being able to reliably acquire that knowledge. If the people who develop it are friendly, they might decide to distribute it to other people to make it harder for any one project to take off.

Knowledge of the terrain might be hard to get reliably

Knowing that the world is made of atoms should take an AI a long way.

If the people who develop [AGI] are friendly, they might decide to distribute it to other people to make it harder for any one project to take off.

I hold to the classic definition of friendly AI: an AI with friendly values, which retains (or even improves) those values as it surpasses human intelligence and otherwise self-modifies. As far as I'm concerned, AlphaGo Zero demonstrates that raw problem-solving ability has crossed a dangerous threshold. We need to know what sort of "values" and "laws" should govern the choices of intelligent agents with such power.

... and there is only one choice I'd expect them to make -- in other words, no actual decision at all.

If anyone wants more details, I have extensive discussion & excerpts from the paper & DM QAs at https://www.reddit.com/r/reinforcementlearning/comments/778vbk/mastering_the_game_of_go_without_human_knowledge/

Interesting that ResNets still seem to be state of the art. I was expecting them to have been replaced by something more heterogeneous by now. But I might be overrating the usefulness of discrete composition because it's easy to understand.
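
To make "discrete composition" concrete: the paper's network is essentially a tower of identical residual blocks, each learning a small correction on top of an identity skip. A rough PyTorch sketch of one such block -- channel count and depth here are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    # One residual block: conv -> BN -> ReLU -> conv -> BN, plus an identity
    # skip connection, then a final ReLU. The network is a stack of these.
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The skip connection: each block only learns a residual correction,
        # which is what makes deep homogeneous stacks trainable.
        return F.relu(out + x)

# Discrete composition: the whole tower is the same block repeated.
# Sizes are illustrative; the paper's tower is wider and deeper.
tower = nn.Sequential(*[ResidualBlock(64) for _ in range(8)])
x = torch.randn(1, 64, 19, 19)   # e.g. a batch of 19x19 board feature planes
y = tower(x)                     # same shape out: (1, 64, 19, 19)
```

The homogeneity is the point: nothing in the block is Go-specific beyond the input feature planes, which is part of why the same stack keeps turning up across domains.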