James_Miller comments on AALWA: Ask any LessWronger anything - Less Wrong

28 Post author: Will_Newsome 12 January 2014 02:18AM




Comment author: James_Miller 12 January 2014 06:37:08PM 9 points [-]

When do you estimate that MIRI will start writing the code for a friendly AI?

Comment author: JoshuaFox 12 January 2014 07:06:53PM *  9 points [-]

Median estimate for when they'll start working on a serious code project (i.e., not just toy code to illustrate theorems) is 2017.

This will not necessarily be development of friendly AI -- maybe a component of friendly AI, maybe something else. (I have no strong estimate of what that other thing would be; a simulated-world sandbox is just one example.)

Everything I say above (and elsewhere) is my opinion, not MIRI's. Median estimate for when they'll start working on friendly AI, if they get started with that before the Singularity, and if their direction doesn't shift away from their apparent current long-term plans to do so: 2025.

Comment author: Eliezer_Yudkowsky 13 January 2014 10:59:05AM 10 points [-]

This is not a MIRI official estimate and you really should have disclaimed that.

Comment author: JoshuaFox 13 January 2014 12:33:00PM 1 point [-]

OK, I will edit this one as well to say that.

Comment author: Lumifer 12 January 2014 07:34:32PM 3 points [-]

What are the error bars around these estimates?

Comment author: JoshuaFox 12 January 2014 07:41:51PM 4 points [-]

The first estimate: 50% probability between 2015 and 2020.

The second estimate: 50% probability between 2020 and 2035. (again, taking into account all the conditioning factors).

Comment author: Lumifer 13 January 2014 03:25:08AM 4 points [-]

Um.

2017

50% probability between 2015 and 2020.

The distribution is asymmetric for obvious reasons. The probability for 2014 is pretty close to zero. This means that there is a 50% probability that a serious code project will start after 2020.

This is inconsistent with 2017 being a median estimate.
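Lumifer's arithmetic can be sketched as follows. (A minimal illustration of the consistency check only; the numbers are the ones stated in the thread, not anyone's actual probability model.)

```python
# Stated estimates: P(start < 2015) is close to zero, and
# P(2015 <= start <= 2020) = 0.5. Probabilities must sum to 1,
# so the remaining mass falls after 2020.
p_before_2015 = 0.0
p_2015_to_2020 = 0.5
p_after_2020 = 1.0 - p_before_2015 - p_2015_to_2020

print(p_after_2020)  # 0.5

# For 2017 to be the median, P(start <= 2017) would have to be 0.5,
# which under the assumptions above requires essentially all of the
# 2015-2020 mass to fall by 2017, leaving ~0 probability for 2017-2020.
```

With half the probability mass after 2020, the median sits near 2020, not 2017, unless the distribution has the odd shape described in the final comment.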

Comment author: [deleted] 13 January 2014 05:26:43PM 1 point [-]

Unless he thinks it's very unlikely the project will start between 2017 and 2020 for some reason.

Comment author: JoshuaFox 13 January 2014 09:10:34AM 1 point [-]

Good point. I'll have to re-think that estimate and improve it.

Comment author: Furcas 14 January 2014 10:54:31PM *  2 points [-]

If some rich individual were to donate 100 million USD to MIRI today, how would you revise your estimate (if at all)?

Comment author: Tenoke 13 January 2014 07:26:51AM 3 points [-]

Median estimate for when they'll start working on friendly AI, if they get started with that before the Singularity, and if their direction doesn't shift away from their apparent current long-term plans to do so: 2025.

We're so screwed, aren't we?

Comment author: JoshuaFox 13 January 2014 08:53:11AM *  4 points [-]

Yes, but not because of MIRI. Along with FHI, they are doing more than anyone to improve our odds. As to whether writing code or any other strategy is the right one--I don't know, but I trust MIRI more than anyone to get that right.

Comment author: Tenoke 13 January 2014 09:10:53AM *  0 points [-]

Yes, but not because of MIRI.

Oh yes, I know that. It just says a lot that our best shot is still decades away from achieving its goal.

Along with FHI, they are doing more than anyone to improve our odds.

Which, to be fair, isn't saying much.

Comment author: Calvin 13 January 2014 09:20:46AM 0 points [-]

Seeing as we are talking about speculative dangers coming from a speculative technology that has yet to be developed, it seems pretty understandable.

I am pretty sure that as soon as the first AGIs arrive on the market, people will start to take the possible dangers more seriously.

Comment author: Tenoke 13 January 2014 09:38:30AM 1 point [-]

I am pretty sure that as soon as the first AGIs arrive on the market, people will start to take the possible dangers more seriously.

And at that point we will quite likely be much closer to having an AGI that will foom than to having an AI that won't kill us, and it will be too late.

Comment author: Calvin 13 January 2014 11:08:06AM 1 point [-]

I know it is a local trope that death and destruction are the apparent and necessary logical conclusion of creating an intelligent machine capable of self-improvement and goal modification, but I certainly don't share those sentiments.

How do you estimate the probability that AGIs won't take over the world (people who constructed them may use them for that purpose, but that is a different story), and would instead be used as simple tools and advisors, in the same boring, old-fashioned, and safe way that 100% of our current technology is used?

I am not saying that MIRI or FAI research is pointless, or anything like that. I just want to point out that they posture as if they were saving the world from imminent destruction, while it is nowhere near certain whether said danger is real.

Comment author: Tenoke 13 January 2014 11:19:31AM *  3 points [-]

How do you estimate the probability that AGIs won't take over the world (people who constructed them may use them for that purpose, but that is a different story), and would instead be used as simple tools and advisors, in the same boring, old-fashioned, and safe way that 100% of our current technology is used?

1%? I believe that it is nearly impossible to use a foomed AI in a safe manner without explicitly trying to do so. That's kind of why I am worried about the threat of any uFAI developed before it is proven that we can develop a Friendly one, and without using whatever that proof entails.

Anyway,

...would instead be used as simple tools and advisors, in the same boring, old-fashioned, and safe way that 100% of our current technology is used?

I wasn't aware that we use 100% of our current technology in a safe way.

Comment author: hairyfigment 14 January 2014 02:44:22AM 0 points [-]

You may have a different picture of current technology than I do, or you may be extrapolating different aspects. We're already letting software optimize the external world directly, with slightly worrying results. You don't get from here to strictly and consistently limited Oracle AI without someone screaming loudly about risks. In addition, Oracle AI has its own problems (tell me if the LW search function doesn't make this clear).

Some critics appear to argue that the direction of current tech will automatically produce CEV. But today's programs aim to maximize a behavior, such as disgorging money. I don't know in detail how Google filters its search results, but I suspect they want to make you feel more comfortable with the links they show you, thus increasing clicks or purchases from sometimes unusually dishonest ads. They don't try to give you whatever information a smarter, better-informed you would want your current self to have. Extrapolating today's Google far enough doesn't give you a Friendly AI; it gives you the makings of a textbook dystopia.

Comment author: djm 15 January 2014 01:39:26AM 0 points [-]

Can you elaborate on the types of toy code that you (or others) have tried for illustrating theorems?

Comment author: JoshuaFox 15 January 2014 08:26:27AM 1 point [-]

I have not tried any.

Over the years, I have seen a few online comments about toy programs written by MIRI people, e.g., this, search for "Haskell". But I don't know anything more about these programs than those brief reports.