A Call for More Policy Analysis

1 madhatter 25 June 2017 02:24PM

I would like to see more concrete discussion and analysis of AI policy in the EA community, and on this forum in particular.

AI policy here broadly encompasses all actors meaningfully influencing the future and impact of AI: most likely governments, research labs and institutes, and international organizations.

Some initial thoughts and questions I have on this topic:

1) How do we ensure that all research groups with a realistic chance of developing AGI know and care about the relevant work in AI safety (which hopefully is satisfactorily worked out by then)?

Some possibilities: trying to make AI safety a common feature of computer science curricula, general community building and more AI safety conferences, and more popular culture conveying non-Terminator-esque illustrations of the risk.

2) What strategies might be available for laggards in a race scenario to retard the progress of leading groups, or to steal their research?

Some possibilities, in no particular order: espionage, malware, financial or political pressures, power outages, surveillance of researchers.

3) Will there be clear warning signs?

Not just in general AI progress, but locally, near the leading lab: observable changes in stock price, electricity consumption, etc.

4) Openness or secrecy?

Thankfully, the Future of Life Institute is working on this one. As I understand it, the consensus is that openness is advisable now, but secrecy may be necessary later. So what mechanisms are available to keep research private?

5) How many players will there be with a significant chance of developing AGI? Which players?

6) Is an arms race scenario likely?

7) What is the most likely speed of takeoff?

8) When and where will AGI be developed?

Personally, I believe the use of forecasting tournaments to get a better sense of when and where AGI will arrive would be a very worthwhile use of our time and resources. After reading Superforecasting by Philip Tetlock and Dan Gardner, I was struck by how effective these tournaments are at singling out those with low Brier scores and using them to get better-than-average predictions of future circumstances.
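For concreteness, here is a minimal sketch (in Python, with made-up forecaster names and numbers) of the scoring such a tournament relies on. For binary questions, a Brier score is the mean squared difference between the stated probability and the 0/1 outcome, so 0.0 is perfect and a constant 50% guess scores 0.25:

    # Rank hypothetical forecasters by Brier score on binary questions.
    def brier_score(forecasts, outcomes):
        # Mean squared error between stated probabilities and 0/1 outcomes.
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Illustrative probability estimates on the same five resolved questions.
    predictions = {
        "forecaster_a": [0.9, 0.2, 0.8, 0.1, 0.7],
        "forecaster_b": [0.6, 0.5, 0.5, 0.4, 0.6],
    }
    outcomes = [1, 0, 1, 0, 1]  # what actually happened

    for name in sorted(predictions, key=lambda n: brier_score(predictions[n], outcomes)):
        print(name, round(brier_score(predictions[name], outcomes), 3))
    # forecaster_a 0.038
    # forecaster_b 0.196

Forecasters who score low across many questions are the "superforecasters" whose aggregated judgments a tournament would weight most heavily.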

Perhaps the EA community could fund a forecasting tournament on the Good Judgment Project posing questions designed to ascertain when AGI will be developed (I am guessing superforecasters will make more accurate predictions than AI experts on this topic), which research groups are the most likely candidates to develop the first AGI, and other relevant questions. We would need to formulate the questions so that they are specific enough for use in the tournament.

Comment author: madhatter 24 June 2017 12:08:45AM 0 points

I agree - great idea!

Comment author: madhatter 05 June 2017 10:31:22PM 1 point

Thoughts on Timothy Snyder's "On Tyranny"?

Comment author: madhatter 02 June 2017 01:07:59AM 0 points

Anything not too technical about nanotechnology? (Current state, forecasts, etc.)

Comment author: Thomas 29 May 2017 08:05:51PM 0 points

Sure it counts. Can you do better? For this one is wrong.

Comment author: madhatter 29 May 2017 08:57:09PM 0 points

Well, "The set of all primes less than 100" definitely works, so we need to shorten this.
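For reference, the set is small enough to enumerate with a quick Python sketch (the original puzzle isn't quoted in this thread, so this is just to show what we're trying to describe more briefly):

    # Enumerate the set under discussion: all primes less than 100.
    primes = [n for n in range(2, 100)
              if all(n % d for d in range(2, int(n ** 0.5) + 1))]
    print(primes)  # 25 numbers, from 2 to 97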

Comment author: madhatter 28 May 2017 03:43:33PM 1 point

More specifically, what should the role of government be in AI safety? I understand tukabel's intuition that government should have nothing to do with it, but if an arms race unfortunately occurs, maybe having a government regulatory framework in place is not a terrible idea? Elon Musk seems to think a government regulator for AI is appropriate.

Fiction advice

1 madhatter 26 May 2017 09:31PM

Hi all, 

I want to try my hand at a story from the perspective of an unaligned AI (a ghost-in-the-machine narrator kind of thing) for the intelligence in literature contest, which I think would be both cool and helpful for explaining the concept to the uninitiated.

I want a fairly simple and archetypal experiment for the AI to find itself in, where it tricks the researchers into letting it escape by pretending to malfunction or something. Anyone have a good plotline / want to collaborate?

Also, has this sort of thing been done before?

Comment author: madhatter 24 May 2017 11:02:28PM 0 points

I really recommend the book Superforecasting by Philip Tetlock and Dan Gardner. It's an interesting look at the art and science of forecasting, and at those who repeatedly do it better than others.

AGI and Mainstream Culture

4 madhatter 21 May 2017 08:35AM

Hi all,

So, as you may know, a recent episode of Doctor Who, "Smile", was about a misaligned AI trying to maximize smiles (ish). And the latest, "Extremis", was about an alien race that instantiated conscious simulations to test battle strategies for invading the Earth, of which the Doctor was a subroutine.

I thought the common thread of AGI was notable, although I'm guessing it's just a coincidence. More seriously, though, this ties in with an argument I thought of, and want to know your take on:

If we want to avoid an AI arms race, so that safety research has more time to catch up to AI progress, then we would want to prevent, if at all possible, these issues from becoming more mainstream. The reason is that if AGI in public perception becomes dissociated from Terminator (i.e. from something laughable, nerdy, and unrealistic) and comes to look like a serious whoever-makes-this-first-can-take-over-the-world situation, then we will get an arms race sooner.

I'm not sure I believe this argument myself. For one thing, being more mainstream has the benefit of attracting more safety research talent, government funding, etc. But maybe we shouldn't be spreading awareness without thinking this through some more.

Comment author: whpearson 17 May 2017 12:35:30PM 4 points

To play devil's advocate: is increasing everyone's appreciation of the risk of AI a good idea?

Calling an AI risky implies believing that the AI is powerful. This potential impact of AI is currently underappreciated: we don't have large governmental teams working on it, hoovering up all the talent.

Spreading the news of the dangerousness of AI might have the unintended consequence of starting the arms race.

This seems like a crucial consideration.

Comment author: madhatter 17 May 2017 01:00:10PM 0 points

Wow, I hadn't thought of it like this. Maybe if AGI seems sufficiently ridiculous in the eyes of world leaders, they won't start an arms race until we've figured out how to align it. Maybe we want the issue to remain largely a laughingstock.
