Daniel_Burfoot comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions

Post author: MichaelGR 11 November 2009 03:00AM

Comment author: Daniel_Burfoot 11 November 2009 04:14:15PM 9 points

Let E(t) be the set of historical information available up until some time t, where t is a date (e.g. 1934). Let p(A|E(t)) be your estimate of the probability an optimally rational Bayesian agent would assign to the event A = "Self-improving artificial general intelligence is discovered before 2100", given that set of historical information.

Consider the function p(t) = p(A|E(t)). Presumably, as t approaches 2009, p(t) approaches your own current estimate of p(A).

Describe the function p(t) since about 1900. What events - research discoveries, economic trends, technological developments, sci-fi novel publications, etc. - caused the largest changes in p(t)? Is it strictly increasing, or does it fluctuate substantially? Did the publication of any impossibility proofs (e.g. the No Free Lunch theorems) cause sharp decreases in p(t)? Can you point to any specific research results that increased p(t)? What about the "AI winter" and related setbacks?
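
For concreteness, here is a minimal Python sketch of the construction being asked about: p(t) tracked as a running log-odds total that jumps at dated evidence shocks. Every date and weight below is a hypothetical placeholder chosen purely for illustration; nothing in the thread assigns these values.

    import math

    def logit(p):
        """Convert a probability to log-odds."""
        return math.log(p / (1.0 - p))

    def sigmoid(x):
        """Convert log-odds back to a probability."""
        return 1.0 / (1.0 + math.exp(-x))

    # Hypothetical prior log-odds for A at t = 1900.
    PRIOR_1900 = logit(0.01)

    # (year, log-odds shift) pairs - all weights are invented for illustration.
    # Positive shifts push p(t) up; negative shifts (an AI winter, say) push it down.
    EVENT_SHOCKS = [
        (1936, +0.5),  # e.g. Turing's "On Computable Numbers"
        (1956, +0.4),  # e.g. the Dartmouth workshop
        (1974, -0.6),  # e.g. the first AI winter
        (1997, +0.3),  # e.g. Deep Blue defeats Kasparov
    ]

    def p(t):
        """p(A | E(t)): fold in every evidence shock dated at or before t."""
        log_odds = PRIOR_1900 + sum(s for year, s in EVENT_SHOCKS if year <= t)
        return sigmoid(log_odds)

    for year in (1900, 1940, 1960, 1980, 2009):
        print(f"p({year}) = {p(year):.3f}")

Log-odds are used here so that roughly independent pieces of evidence combine additively, which makes the "largest changes in p(t)" question read off directly as the largest individual shocks.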

Comment author: Peter_de_Blanc 12 November 2009 02:21:47AM 3 points

I don't think this question behaves the way you want it to. Why not ask what a smart human would predict?

Comment author: MichaelVassar 13 November 2009 05:16:17AM 2 points

I'd guess that WWII, and particularly the Holocaust, set it back rather a lot. How likely were they in 1934, though? Possibly quite likely.