Eliezer_Yudkowsky comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (264)
"That's my end of the problem."
Ok, so where are you in the process? Where is the math for TDT? Where is the updated version of LOGI?
"Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate."
So tell me, have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever worked at a research organization with millions or billions of dollars to throw at R&D? If not, how can you be so sure?
"Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets."
If that's the case, why does Ben Goertzel have a company working on AGI, the very problem you're trying to solve? Why does he actually have a design, with some portions implemented, while you have no portions implemented? What about all the other AGI work being done, like LIDA, SOAR, and whatever Peter Voss calls his AGI project? Are all of those just misguided, since I would imagine they hire the people who work on those projects?
Just an aside for some posters above this post who have been talking about Java as the superior choice to C++: what planet do you come from? Java is slower than C++ because of all the overhead of running the code. You are much better off with C++ or Ct or some other language like that without all the overhead, especially since one can use OpenCL or CUDA to take advantage of the GPU for more computing power.
Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.
I shall also be quite surprised if Goertzel's or Voss's project yields AGI. Code is easy. Code that is actually generally intelligent is hard. Step One is knowing which code to write. It's futile to go on to Step Two until finishing Step One. If anyone tries to tell you otherwise, bear in mind that the advice to rush ahead and write code has told quite a lot of people that they don't in fact know which code to write, but has not actually produced anyone who does know which code to write. I know I can't sit down and write an FAI at this time; I don't need to spend five years writing code in order to collapse my pride.
The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive.
OK, opinions on the relative merits of the AGI projects mentioned aside, you did not answer my first question, the one I am actually most interested in: where is the technical work? I was looking for some detail as to which part of Step One you are working on. So if TDT is important to your FAI, how is the math coming? Are you updating LOGI, or are you discarding it and starting over?
"The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive."
OK, this being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena, a movement that is part of the rift in physics. Of course these people have nothing to replace GR with, so the fact that you can argue GR is not completely right is a bit pointless until you have something to replace it with, GR not being totally wrong. That being said, how is your dismissal of the rest of AGI any better than that?
It's easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say all these other AGI projects won't work. Even if that is the case, it raises the question: where are your contributions, your code, your published papers, etc.? Without your formal work being out for public review, is it really fair to claim that all the current AGI projects are essentially wrong-headed?
"So tell me have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever work at a research organization with millions or billions of dollars to throw at R&D? If not how can you be so sure?"
So I take it from the fact that you didn't answer the question that you have in fact not worked for Intel or DARPA, etc. That being said, I think a measure of humility is in order before you categorically dismiss them as being minor players in FAI. Sorry if that sounds harsh, but there it is (I prefer to be blunt because it leaves no room for interpretation).
Truth-seeking is not about fairness.
Really, we get it. We don't have automated signatures on this system but we can all pretend that this is included in yours. All this serves is to create a jarring discord between the quality of your claims and your presumption of status.
The hypothesis is that yes, they won't work as steps toward FAI. Worse, they might actually backfire. And FAI progress is not as "impressive". What do you expect should be done, given this conclusion? Continue running toward the abyss, just for the sake of preserving the appearance of productivity?
For this analogy to hold there would need to be an existing complete theory of AGI.
(There would also need to be something in the theory or proposed application analogous to "hey! We should make a black hole just outside our solar system because black holes are like way cool and powerful and stuff!")
These are good questions. Particularly the TDT one. Even if the answer happened to be "not that important".
I was working on something related to TDT this summer, can't be more specific than that. If I get any of the remaining problems in TDT nailed down beyond what was already presented, and it's not classified, I'll let y'all know. Writing up the math I've already mentioned with impressive Greek symbols so it can be published is lower priority than the rationality book.
LOGI's out the window, of course, as anyone who's read the arc of LW could very easily guess.
I'm curious to know your reasoning behind this, if you can share it.
It seems to me that the publication of some high-quality technical papers would increase the chances of attracting and keeping the attention of one-in-a-million people like this much more than a rationality book would.
Thanks for the update. Hopefully one of the kids you invite to visit has a knack for translating into impressive Greek symbols, and you can delegate.
No? I've been thinking of both problems as essentially problems of rationality. Once you have a sufficiently rational system, you have a Friendliness-capable, proto-intelligent system.
And it happens that I have a copy of "Do the Right Thing: Studies in Limited Rationality", but I'm not reading it, even though I feel like it will solve my entire problem perfectly. I wonder why this is.