
Comment author: WalterL 17 July 2017 11:16:30PM 1 point [-]

The reliable verification methods are a dream, of course, but the 'forbidden from sharing this information with non-members' is even more fanciful.

Comment author: madhatter 17 July 2017 11:18:42PM *  0 points [-]

Is there no way to actually delete a comment? :)

Comment author: madhatter 17 July 2017 10:52:48PM *  0 points [-]

Never mind, this was stupid.

Comment author: madhatter 12 July 2017 10:51:37PM 0 points [-]

Where does the term at the top of page three of this paper, following "a team's chance of winning increases by", come from?

https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf

In response to Mini map of s-risks
Comment author: madhatter 08 July 2017 03:23:10PM 2 points [-]

Will it be feasible in the next decade or so to do real research into how to make sure AI systems don't instantiate anything with a non-negligible level of sentience?

Comment author: madhatter 07 July 2017 06:13:18AM 0 points [-]

Two random questions.

1) What is the chance of AGI happening first in Russia? Is Russia a laggard in AI compared to the US and China?

2) Is there a connection between fuzzy logic and the logical uncertainty of interest to MIRI, or not really?

Comment author: madhatter 01 July 2017 02:11:32AM 0 points [-]

Any value in working on a website with resources on the necessary prerequisites for AI safety research? The best books and papers to read, etc. And maybe an overview of the key problems and results? Perhaps later that could lead to an ebook or online course.

A Call for More Policy Analysis

1 madhatter 25 June 2017 02:24PM

I would like to see more concrete discussion and analysis of AI policy in the EA community, and on this forum in particular.

AI policy would broadly encompass all actors meaningfully influencing the future and impact of AI: most likely governments, research labs and institutes, and international organizations.

Some initial thoughts and questions I have on this topic:

1) How do we ensure that all research groups with a realistic chance of developing AGI know and care about the relevant work in AI safety (which will hopefully have been satisfactorily worked out by then)?

Some possibilities: trying to make AI safety a common feature of computer science curricula, general community building and more AI safety conferences, and more popular culture conveying non-Terminator-esque illustrations of the risk.

2) What strategies might be available for laggards in a race scenario to retard the progress of leading groups, or to steal their research?

Some possibilities, in no particular order: espionage, malware, financial or political pressure, power outages, and surveillance of researchers.

3) Will there be clear warning signs?

Not just in general AI progress, but locally, near the leading lab: observable changes in stock price, electricity consumption, etc.

4) Openness or secrecy?

Thankfully the Future of Life Institute is working on this one. As I understand it, the consensus is that openness is advisable now, but secrecy may be necessary later. So what mechanisms are available to keep research private?

5) How many players will there be with a significant chance of developing AGI? Which players?

6) Is an arms race scenario likely?

7) What is the most likely speed of takeoff?

8) When and where will AGI be developed?

Personally, I believe that using forecasting tournaments to get a better sense of when and where AGI will arrive would be a very worthwhile use of our time and resources. After reading Superforecasting by Dan Gardner and Philip Tetlock, I was struck by how effective these tournaments are at singling out forecasters with low Brier scores and using them to obtain better-than-average predictions of future circumstances.

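For readers unfamiliar with the metric: a Brier score is simply the mean squared error between a forecaster's stated probabilities and the binary outcomes that actually occurred, so lower is better. A minimal sketch in Python, with numbers invented purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities (0 to 1) and
    binary outcomes (0 or 1). Lower is better: 0.0 is a perfect record,
    and always answering 0.5 scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who leaned the right way on three yes/no questions:
print(brier_score([0.8, 0.3, 0.9], [1, 0, 1]))  # ~0.047
```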

Perhaps the EA community could fund a forecasting tournament on the Good Judgment Project, posing questions aimed at ascertaining when AGI will be developed (I am guessing superforecasters will make more accurate predictions than AI experts on this topic), which research groups are the most likely candidates to develop the first AGI, and other relevant questions. We would need to formulate the questions so that they are specific enough for use in the tournament.

Comment author: madhatter 24 June 2017 12:08:45AM 0 points [-]

I agree - great idea!

Comment author: madhatter 05 June 2017 10:31:22PM 1 point [-]

Thoughts on Timothy Snyder's "On Tyranny"?
