Eliezer_Yudkowsky comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong

Post author: AnnaSalamon 01 December 2009 01:42AM




Comment author: Eliezer_Yudkowsky 02 December 2009 04:34:54AM 4 points [-]

For a task that is estimated to be so dangerous and so world changing would it not behoove SIAI to be the first to make FAI?

That's my end of the problem.

Also if FAI is the primary goal here then it seems to me that one should be looking not at LessWrong but at gathering people from places like Google, Intel, IBM, and DARPA

Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate.

Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets.

Comment author: alexflint 02 December 2009 12:33:07PM 5 points [-]

Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate.

I'm not sure the olympiads are such a uniquely optimal selector. For sure there were lots of superstars at the IOI, but now that I'm doing a PhD I realise that many of those small-scale problem-solving skills don't necessarily transfer to broader-scale AI research (putting together a body of work, seeing analogies between different theories, predicting which research direction will be most fruitful). Equally, I met a ton of superstars working at Google, and I mean deeply brilliant superstars, not just well-trained professional coders. Google is trying to attract much the same crowd as SIAI, but they have a ton more resources, so insofar as it's possible it makes sense to try to recruit people from Google.

Comment author: AnnaSalamon 02 December 2009 07:27:05PM 4 points [-]

It would be nice if we could get both groups (international olympiads and Google) reading relevant articles, and thinking about rationality and existential risk. Any thoughts here, alexflint or others?

Comment author: alexflint 02 December 2009 09:37:24PM 6 points [-]

Well, for the olympiads, each country runs a training camp leading up to the actual olympiad, and they'd probably be more than happy to have someone from SIAI give a guest lecture. These kids would easily pick up the whole problem from a half-hour talk.

Google also has guest speakers, and someone from SIAI could certainly go along and give a talk. It's a much more difficult nut to crack, though: Google has a somewhat insular culture, and they're constantly dealing with overblown hype, so many may tune out as soon as something that sounds too "futuristic" comes up.

What do you think?

Comment author: AnnaSalamon 02 December 2009 09:44:09PM *  3 points [-]

Yes, those seem worth doing.

Re: the national olympiad training camps, my guess is that it is easier to talk if an alumnus of the program recommends us. We know alumni of the US math olympiad camp, and the US computing olympiad camp, but to my knowledge we don't know alumni from any of the other countries or from other subjects. Do you have connections there, Alex? Anyone else?

Comment author: Kevin 07 March 2010 09:08:09AM *  2 points [-]

What about reaching out to people who scored very highly when taking the SATs as 7th graders? Duke sells the names and info of the test-takers to those who can provide "a unique educational opportunity."

http://www.tip.duke.edu/talent_searches/faqs/grade_7.html#release

Comment author: alexflint 03 December 2009 08:51:02AM 1 point [-]

Sure, but only in Australia I'm afraid :). If there's anyone from SIAI in that part of the world then I'm happy to put them in contact.

Comment author: Jack 02 December 2009 01:08:06PM 2 points [-]

Thinking about this point is leading me to conclude that Google is substantially more likely than SIAI to develop a General AI before anyone else. Gintelligence anyone?

Comment author: alexflint 02 December 2009 05:10:43PM 1 point [-]

Well, I don't think Google is working on GAI explicitly (though I wouldn't know), and I think they're not working on it for much the same reason that most research labs aren't working on it: it's difficult, risky research, outside the mainstream dogma, and most people don't put very much thought into the implications.

Comment author: Jack 02 December 2009 07:04:49PM *  4 points [-]

I think the conjunction of (1) the probability that Google decides to start working on it, (2) the probability that Google can put together a team that could develop an AGI, and (3) the probability that that team succeeds might be higher than the probability of (2) and (3) for SIAI/Eliezer.

(1) is pretty high because Google gets its pick of the most talented young programmers and gives them a remarkable amount of freedom to pursue their own interests. Especially if interest in AI increases, it wouldn't be surprising if a lot of people with an interest in AGI ended up working there. I bet a fair number already do.

(2) and (3) are high because of Google's resources, their brand/reputation, and the fact that they've shown they are capable of completing and deploying innovative code and business ideas.

All of the above is said with very low confidence.

Of course Gintelligence might include censoring the internet for the Chinese government as part of its goal architecture and we'd all be screwed.

Edit: I knew this would get downvoted :-)... or not.

Comment author: wedrifid 03 December 2009 03:05:51AM 1 point [-]

Edit: I knew this would get downvoted :-)

I voted up. I think you may be mistaken but you are looking at relevant calculations.

Of course Gintelligence might include censoring the internet for the Chinese government as part of its goal architecture and we'd all be screwed.

Nice.

Comment author: alexflint 02 December 2009 09:00:02PM 0 points [-]

Fair point. I actually rate (1) quite low, just because there are so few people who think of AGI as an immediate problem to be solved. Tenured professors, for example, have a very high degree of freedom, yet very few of them choose to pursue AGI in comparison to the manpower dedicated to other AI fields. Amongst Googlers there is presumably also only a very small fraction of folks potentially willing to tackle AGI head-on.

Comment author: Vladimir_Nesov 02 December 2009 11:27:06AM 2 points [-]

Only if you can expect to manage to get a supply of these folks. On the absolute scale, assuming that a level of ability X is absolutely necessary to make meaningful progress (where X is relative to the current human population) seems as arbitrary as assuming that human intelligence is exactly the greatest level of intelligence theoretically possible. FAI still has a lot of low-hanging fruit, simply because the problem was never seriously considered in this framing.

Comment author: mormon2 02 December 2009 07:07:52AM 5 points [-]

"That's my end of the problem."

Ok, so where are you in the process? Where is the math for TDT? Where is the updated version of LOGI?

"Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate."

So tell me, have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever worked at a research organization with millions or billions of dollars to throw at R&D? If not, how can you be so sure?

"Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets."

If that's the case, why does Ben Goertzel have a company working on AGI, the very problem you're trying to solve? Why does he actually have a design and some portions implemented, while you do not have any portions implemented? What about all the other AGI work being done, like LIDA, SOAR, and whatever Peter Voss calls his AGI project? Are all of those just misguided, since I would imagine they hire the people who work on those projects?

Just an aside for some posters above this post who have been talking about Java as the superior choice to C++: what planet do you come from? Java is slower than C++ because of all the overheads of running the code. You are much better off with C++ or Ct or some other language without all the overheads, especially since one can use OpenCL or CUDA to take advantage of the GPU for more computing power.

Comment author: Eliezer_Yudkowsky 02 December 2009 07:42:30AM *  10 points [-]

Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.

I shall also be quite surprised if Goertzel's or Voss's project yields AGI. Code is easy. Code that is actually generally intelligent is hard. Step One is knowing which code to write. It's futile to go on to Step Two until finishing Step One. If anyone tries to tell you otherwise, bear in mind that the advice to rush ahead and write code has told quite a lot of people that they don't in fact know which code to write, but has not actually produced anyone who does know which code to write. I know I can't sit down and write an FAI at this time; I don't need to spend five years writing code in order to collapse my pride.

The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive.

Comment author: mormon2 03 December 2009 02:25:14AM 2 points [-]

Ok, opinions on the relative merits of the AGI projects aside, you did not answer my first question, the one I am actually most interested in: where is the technical work? I was looking for some detail as to what part of Step One you are working on. So if TDT is important to your FAI, how is the math coming? Are you updating LOGI, or are you discarding it and starting over?

"The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive."

Ok, this being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena and is part of the rift in physics. Of course, these people have nothing to replace GR with, so the fact that you can argue that GR is not completely right is a bit pointless until you have something to replace it with, GR not being totally wrong. That being said, how is your dismissal of the rest of AGI any better than that?

It's easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say all these other AGI projects won't work. Even if that is the case, it raises the question: where are your contributions, your code, your published papers, and so on? Without your formal work being out for public review, is it really fair to claim that all the current AGI projects are essentially wrong-headed?

"So tell me have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever work at a research organization with millions or billions of dollars to throw at R&D? If not how can you be so sure?"

So I take it from the fact that you didn't answer the question that you have in fact not worked for Intel or DARPA etc. That being said, I think a measure of humility is in order before you categorically dismiss them as minor players in FAI. Sorry if that sounds harsh, but there it is (I prefer to be blunt because it leaves no room for interpretation).

Comment author: Nick_Tarleton 03 December 2009 03:13:17AM 4 points [-]

Without your formal work being out for public review, is it really fair to claim that all the current AGI projects are essentially wrong-headed?

Truth-seeking is not about fairness.

Comment author: wedrifid 03 December 2009 02:47:10AM 3 points [-]

Sorry if that sounds harsh, but there it is (I prefer to be blunt because it leaves no room for interpretation).

Really, we get it. We don't have automated signatures on this system, but we can all pretend that this is included in yours. All this serves to do is create a jarring discord between the quality of your claims and your presumption of status.

Comment author: Vladimir_Nesov 03 December 2009 09:26:13AM 1 point [-]

It's easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say all these other AGI projects won't work.

The hypothesis is that yes, they won't work as steps towards FAI. Worse, they might actually backfire. And FAI progress is not as "impressive". What do you expect should be done, given this conclusion? Continue running toward the abyss, just for the sake of preserving the appearance of productivity?

Comment author: wedrifid 03 December 2009 02:48:04AM *  1 point [-]

Ok, this being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena and is part of the rift in physics. Of course, these people have nothing to replace GR with, so the fact that you can argue that GR is not completely right is a bit pointless until you have something to replace it with, GR not being totally wrong. That being said, how is your dismissal of the rest of AGI any better than that?

For this analogy to hold there would need to be an existing complete theory of AGI.

(There would also need to be something in the theory or proposed application analogous to "hey! We should make a black hole just outside our solar system because black holes are like way cool and powerful and stuff!")

Ok, opinions on the relative merits of the AGI projects aside, you did not answer my first question, the one I am actually most interested in: where is the technical work? I was looking for some detail as to what part of Step One you are working on. So if TDT is important to your FAI, how is the math coming? Are you updating LOGI, or are you discarding it and starting over?

These are good questions. Particularly the TDT one. Even if the answer happened to be "not that important".

Comment author: Eliezer_Yudkowsky 03 December 2009 05:34:22AM 3 points [-]

I was working on something related to TDT this summer, can't be more specific than that. If I get any of the remaining problems in TDT nailed down beyond what was already presented, and it's not classified, I'll let y'all know. Writing up the math I've already mentioned with impressive Greek symbols so it can be published is lower priority than the rationality book.

LOGI's out the window, of course, as anyone who's read the arc of LW could very easily guess.

Comment author: anonym 03 December 2009 04:20:40PM 11 points [-]

Writing up the math I've already mentioned with impressive Greek symbols so it can be published is lower priority than the rationality book.

I'm curious to know your reasoning behind this, if you can share it.

It seems to me that the publication of some high-quality technical papers would increase the chances of attracting and keeping the attention of one-in-a-million people like this much more than a rationality book would.

Comment author: wedrifid 03 December 2009 06:00:24AM 0 points [-]

Thanks for the update. Hopefully one of the kids you invite to visit has a knack for translating into impressive and you can delegate.

Comment author: [deleted] 05 December 2009 07:46:41AM 1 point [-]

Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.

No? I've been thinking of both problems as essentially problems of rationality. Once you have a sufficiently rational system, you have a Friendliness-capable, proto-intelligent system.

And it happens that I have a copy of "Do the Right Thing: Studies in Limited Rationality", but I'm not reading it, even though I feel like it will solve my entire problem perfectly. I wonder why this is.

Comment author: [deleted] 03 December 2009 03:20:15AM 3 points [-]

. . . Java is slower than C++ because of all the overheads of running the code. . . .

A fast programming language is the last thing we need. Literally: when you're trying to create a Friendly AI, compiling it and optimizing it and stuff is probably the very last step.

(Yes, I did try to phrase the latter half of that in such a way to make the former half seem true, for the sake of rhetoric.)

Comment author: Vladimir_Nesov 02 December 2009 11:32:17AM 3 points [-]

If that's the case why does Ben Goertzel have a company working on AGI the very problem your trying to solve? Why does he actually have design and some portions implemented and you do not have any portions implemented?

He is solving a wrong problem (i.e. he is working towards destroying the world), but that's completely tangential.

Comment author: timtyler 09 December 2009 02:59:33PM 1 point [-]

It seems like FUD. I doubt Ben Goertzel is working towards destroying the world. It seems much more likely that the whole idea is a paranoid hallucination.

Comment author: DanArmak 03 December 2009 11:41:47AM 1 point [-]

Java is slower than C++ because of all the overheads of running the code.

Those damnable overheads. Assembly language FTW!

</sarcasm>

Comment author: wedrifid 03 December 2009 02:57:07AM 0 points [-]

Just an aside for some posters above this post who have been talking about Java as the superior choice to C++: what planet do you come from? Java is slower than C++ because of all the overheads of running the code.

A world in which a segfault in an FAI could end it.

Comment author: Nick_Tarleton 03 December 2009 03:05:19AM *  2 points [-]

I hope any FAI would be formally verified to a far greater extent than any existing JVM.

Comment author: timtyler 09 December 2009 03:04:06PM 0 points [-]

Formal verification is typically only used in highly safety-critical systems, and often isn't used even there. If you look at the main applications for intelligent systems, not terribly many are safety-critical, and the chances of being able to do much in the way of formal verification at a high level seem pretty minimal anyway.

Comment deleted 02 December 2009 08:07:47AM [-]
Comment author: Eliezer_Yudkowsky 02 December 2009 09:13:23AM 2 points [-]

Oh, hell yeah. Anna's side can recruit them, no problem. And I'm certainly not saying that no one who works at these organizations could make the cut for the Final Programmers. Just that you can't hire Final Programmers at random from anywhere, not even Google.