
Oh, this is much more complete, thanks.

Wow, it's surreal to hear Obama talking about Bostrom, Foom, and biological x-risk.

Joi Ito said several things that are unpleasant but are probably believed by most people, and so I am glad for the reminder.

JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Yes, you would expect non-white, older women who are less comfortable talking to computers to be better suited to dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!

ITO: [Temple Grandin] says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today. [...] Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.

I should probably get a good daily reminder that most people would not, in fact, want their kid to be as smart, impactful, and successful in life as Einstein, and would prefer "normal", not-too-much-above-average kids.

Both of those Ito remarks referenced supposedly widespread perspectives, but personally, I have almost never encountered these perspectives before.

Time to Godwin myself:

1930's Germany: The problem with relativity is that it's developed by Jews. We need an ethnically pure physics.

2010's USA : The problem with AI is that it's developed by white men. We need an ethnically diverse compsci.

The White House also released a PDF with concrete recommendations: http://barnoldlaw.blogspot.ru/2016/10/intelligence.html

Some interesting lines:

Recommendation 13: The Federal government should prioritize basic and long-term AI research. The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R&D, with a particular emphasis on basic research and long-term, high-risk research initiatives. Because basic and long-term research especially are areas where the private sector is not likely to invest, Federal investments will be important for R&D in these areas.

Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.

Wow, very surprising! 13 sounds very MIRI-ish.

Then there could be an algorithm that said, “Go penetrate the nuclear codes and figure out how to launch some missiles.” If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems. I think my directive to my national security team is, don’t worry as much yet about machines taking over the world. Worry about the capacity of either nonstate actors or hostile actors to penetrate systems, and in that sense it is not conceptually different than a lot of the cybersecurity work we’re doing.

Please tell me this isn't an actual possibility. Surely nuclear launch must rely on multi-factor authentication with one-time pads and code phrases in sealed, physical envelopes. A brain the size of a planet could not break a one-time pad. I know a superhuman AI could probably hack the net, but please tell me that nuclear missiles are not connected to the internet.
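To spell out why no amount of intelligence helps against a one-time pad: the ciphertext is equally consistent with every possible plaintext of the same length, so there is nothing for even a superintelligence to exploit. Here's a minimal Python sketch of that property (purely illustrative, with made-up messages; it says nothing about how actual launch-control systems are built):

```python
# One-time pad: XOR the message with a truly random, never-reused pad.
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    assert len(pad) == len(data), "pad must be as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"LAUNCH CODE 0000"            # hypothetical 16-byte message
pad = secrets.token_bytes(len(message))  # random pad, used exactly once
ciphertext = otp_xor(message, pad)

# The holder of the pad recovers the message.
assert otp_xor(ciphertext, pad) == message

# For ANY guessed plaintext of the same length, there exists a pad that
# "explains" the ciphertext equally well -- so brute force learns nothing.
guess = b"DO NOTHING TODAY"
fake_pad = bytes(c ^ g for c, g in zip(ciphertext, guess))
assert otp_xor(ciphertext, fake_pad) == guess
```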

But... Obama must know the capacity of America's nuclear security. The best reason I can think of for him to raise this possibility is to confuse America's enemies into thinking that the nuclear weapons are not properly secured, so that they will attack the nuclear launch codes, which are actually secure, rather than attempting a more low-tech attack like another September 11.

I think the best reason for him to raise that possibility is to give a clear analogy. Nukes are undoubtedly air-gapped from the net, and there's no chance anyone with the capacity to penetrate them would think otherwise. It's just an easy-to-grasp way for him to present it to the public.

Well, security isn't really about the attack vectors you are aware of (trying to guess the one-time pad); it's about keeping an eye out for corner cases you are not yet aware of. An extremely sophisticated software system would be more likely to try avenues like causing a diplomatic crisis, manipulating people who have access to the codes, or directly observing the authentication data via specialized hardware.

Also, yes, he was probably speaking informally / inaccurately.

tl;dr Obama doesn't really know what he's talking about but tries to use talking points to make sense of the new project.