10 comments

The buffering issues seem to affect only video not audio.

Nick Bostrom's part starts at 2:15.

Indeed, it does what it says on the tin. It's interesting to notice how they phrase AI risk in terms of CBRN and especially how they use robotics and drones as a springboard.

They will organize an AI master class later this year in the Netherlands.

What does that mean?

I guess something like this about AI risk. It was mentioned briefly in the video, and I thought it might interest somebody (as did this one).

It is interesting. Something like that seems like it would be extremely high-impact.

I can't believe how awareness has shot up. I thought it would remain a phenomenon in popular culture; now the global elite is taking it seriously and is being educated in various levels of detail about the problem.

To be fair, it seems that recently almost anyone can speak before some kind of UN panel.

Which is good. The last thing I want is for the UN to meddle with AI. So if it is just another UN panel, I don't have to worry.

Nick Bostrom seems to think it's useful to alert the UN to AI risk. The UN is currently the best institution we have for handling international cooperation.

I'm toning back my response. Nick had said previously that he didn't think policy action could help the field of AI now (except perhaps through more funding), but that it could help prevent other existential risks. Still, there must be some reason why he gave this talk.

Added a YouTube link to the OP.