Comment author: Roko 17 December 2009 09:42:22AM 3 points [-]

Mormon2:

How would you act if you were Eliezer?

Bear in mind that you could either work directly on the problem, or you could try to cause others to work on it. If you think that you could cause an average of 10 smart people to work on the problem for every 6 months you spend writing/blogging, how much of your life would you spend writing/blogging versus direct work on FAI?

Comment author: mormon2 18 December 2009 01:53:01AM 0 points [-]

"How would you act if you were Eliezer?"

If I made claims of having a TDT, I would post the math. I would publish papers. I would be sure I had accomplishments to back up the authority with which I speak. I would not spend a single second blogging about rationality. If I used a blog, it would be to discuss the current status of my AI work and to have a select group of intelligent people who could read and comment on it. If I thought FAI was that important, I would spend as much time as possible finding the best people possible to work with, and would never resort to a blog to try to attract the right sort of people (I cite LW as evidence of the failure of blogging to attract the right people).

Oh, and for the record, I would never start a non-profit to do FAI research. I would also do away with the Singularity Summit and replace it with more AGI conferences. And I would do away with most of SIAI's programs and replace them, and the money they cost, with researchers and scientists, along with some devoted angel funders.

Comment author: Eliezer_Yudkowsky 13 December 2009 11:44:14AM 4 points [-]

Because Less Wrong is about human rationality, not the Singularity Institute, and not me.

Comment author: mormon2 13 December 2009 04:55:04PM 5 points [-]

I am going to respond to the overall direction of your responses.

That is feeble, and for those who don't understand why, let me explain.

Eliezer works for SIAI, a non-profit where his pay depends on donations. Many people on LW are interested in SIAI; some even donate to it, and others potentially could. When your pay depends on convincing people that your work is worthwhile, it is always worth justifying what you are doing. This becomes even more important when it looks like you're distracted from what you are being paid to do. (If you ever work with a VC and their money, you'll know what I mean.)

When it comes to ensuring that SIAI continues to pay you, especially when you are its FAI researcher, justifying why you are writing a book on rationality that in no way solves FAI becomes extremely important.

EY, ask yourself this: what percentage of the people who are interested in SIAI and donate are interested in FAI? Then ask what percentage are interested in rationality, with no clear plan of how that gets to FAI. If the answer to the first is greater than the second, then you have a big problem, because one could interpret the use of your time writing this book on rationality as wasting donated money, unless there is a clear reason why rationality books get you to FAI.

P.S. If you want to educate people to help you out, as someone speculated, you'd be better off teaching them computer science and mathematics.

Remember, my post drew no conclusions; so, for Yvain: I have cast no stones, I merely ask questions.

A question of rationality

4 mormon2 13 December 2009 02:37AM

Thank You for Your Participation

I would like to thank you all for your unwitting and unwilling participation in my little social experiment. If I do say so myself, you all performed as I had hoped. I found some of the responses interesting; many of them were goofy. I was honestly hoping that a budding rationalist community like this one would have stopped this experiment midway, but I thank you all for not being that rational. I really did appreciate all the mormon2-bashing; it was quite amusing, and some of the attempts to discredit me were humorous, though unsuccessful. As for the questions I asked, I was curious about the answers, though I did not expect to get any, nor do I really need them, since I have a good idea of what the answers are just from simple deductive reasoning. I really do hope EY is working on FAI and is actually able to do it, though I certainly will not stake my hopes or money on it.

Lest there be any suspicion: I am being sincere here.

 

Response

Because I can, I am going to make one final response to this thread I started:

Since none of you understand what I am doing, I will spell it out for you. My posts are formatted, written, and styled intentionally to elicit the response I desire. The point is to give you easy ways to avoid answering my questions (things like the tone of the post, spelling, grammar, my being "hostile" (not really), etc.). I just wanted to see if anyone here could actually look past that, specifically EY, and post some honest answers to the questions (real answers, again, from EY, not pawns on LW). Obviously this was too much to ask, since the general responses, not completely but for the most part, were cop-outs. I am well aware that EY probably would never answer any challenge to what he thinks; people like EY typically won't (I have dealt with many people like EY). I think the responses here speak volumes about LW and the people who post here. (If you can't look past the way content is posted, then you are going to have a hard time in life, since not everyone is going to meet your standards for how they speak or write.) You guys may not be trying to form a cult, but the way you respond to a post like this screams cultish, with even some circle-jerk mentality mixed in.

 

Post

I would like to float an argument and a series of questions. Now, before you vote me down, please do me the courtesy of reading the post. I am also aware that some, and maybe even many, of you think that I am a troll just out to bash SIAI and Eliezer; that is in fact not my intent. This group is supposed to be about improving rationality, so let's improve our rationality.

SIAI has the goal of raising awareness of the dangers of AI, as well as trying to create its own FAI solution to the problem. This task has fallen to Eliezer as the paid researcher working on FAI. What I would like to point out is a bit of a disconnect between what SIAI is supposed to be doing and what EY is doing.

According to EY, FAI is an extremely important problem with global implications that must be solved. It is both a hard math problem and a problem that needs to be solved first, by people who take FAI seriously. To that end, SIAI was started, with EY as an AI researcher there.

Until about 2006, EY was working on papers like CEV and on designs for FAI, which he has now discarded as being, for the most part, wrong. He then went through a long period of blogging on Overcoming Bias and LessWrong, and is now working on a book on rationality as his stated main focus. If this is accurate, I would ask how it makes sense coming from someone who has made such a big deal about FAI, its importance, and being the first to make AI and ensure it is FAI. If FAI is so important, then where does a book on rationality fit? Does that even play into SIAI's chief goals? SIAI spends huge amounts of time talking about the risks and rewards of FAI, and the person who is supposed to be making the FAI is writing a book on rationality instead of solving FAI. How does this square with being paid to research FAI? How can one justify EY's reasons for not publishing the math of TDT, coming from someone who is committed to FAI? If one is committed to solving that hard a problem, then I would think that publishing one's ideas on it would be a primary goal, to advance the cause of FAI.

If this doesn't make sense, then I would ask: how rational is it to spend time helping SIAI if they are not focused on FAI? Can one justify giving to an organization like that, when the chief FAI researcher is distracted by writing a book on rationality instead of solving the myriad hard math problems that need to be solved for FAI? If this somehow makes sense, then can one also conclude that FAI is not nearly as important as it has been made out to be, since the champion of FAI feels comfortable taking a break from solving the problem to write a book on rationality (in other words, the world really isn't at stake)?

Am I off base? If this group is devoted to rationality, then everyone should be subject to rational analysis.

Comment author: Eliezer_Yudkowsky 02 December 2009 07:42:30AM *  10 points [-]

Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.

I shall also be quite surprised if Goertzel's or Voss's project yields AGI. Code is easy. Code that is actually generally intelligent is hard. Step One is knowing which code to write. It's futile to go on to Step Two until finishing Step One. If anyone tries to tell you otherwise, bear in mind that the advice to rush ahead and write code has told quite a lot of people that they don't in fact know which code to write, but has not actually produced anyone who does know which code to write. I know I can't sit down and write an FAI at this time; I don't need to spend five years writing code in order to collapse my pride.

The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive.

Comment author: mormon2 03 December 2009 02:25:14AM 2 points [-]

Ok, opinions on the relative merits of the AGI projects mentioned aside, you did not answer my first question, the one I am actually most interested in the answer to, which is: where is the technical work? I was looking for some detail as to what part of Step One you are working on. So, if TDT is important to your FAI, how is the math coming? Are you updating LOGI, or are you discarding it and starting over?

"The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once your "mysterious answer to mysterious question" detector is initialized and switched on, and so on - so I consider my work of explanation in that area to be pretty much done. Anything left is public relations, taking an existing explanation and making it more persuasive."

Ok, this being said, where is your design? This reminds me of a movement in physics that wants to discard GR because it fails to explain some phenomena and is part of the rift in physics. Of course, these people have nothing to replace GR with, so the fact that you can argue that GR is not completely right is a bit pointless until you have something to replace it with, GR not being totally wrong. That being said, how is your dismissal of the rest of AGI any better than that?

It's easy enough to sit back, with no formal theories or in-progress AGI code out for public review, and say all these other AGI projects won't work. Even if that is the case, it raises the question: where are your contributions, your code, your published papers, etc.? Without your formal work being out for public review, is it really fair to state that all the current AGI projects are essentially wrong-headed?

"So tell me have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever work at a research organization with millions or billions of dollars to throw at R&D? If not how can you be so sure?"

So I take it, from the fact that you didn't answer the question, that you have in fact not worked for Intel or DARPA etc. That being said, I think a measure of humility is in order before you categorically dismiss them as minor players in FAI. Sorry if that sounds harsh, but there it is (I prefer to be blunt because it leaves no room for interpretation).

Comment author: Eliezer_Yudkowsky 02 December 2009 04:34:54AM 4 points [-]

For a task that is estimated to be so dangerous and so world-changing, would it not behoove SIAI to be the first to make FAI?

That's my end of the problem.

Also, if FAI is the primary goal here, then it seems to me that one should be looking not at LessWrong but at gathering people from places like Google, Intel, IBM, and DARPA

Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate.

Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets.

Comment author: mormon2 02 December 2009 07:07:52AM 5 points [-]

"That's my end of the problem."

Ok, so where are you in the process? Where is the math for TDT? Where is the updated version of LOGI?

"Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate."

So tell me, have you worked with anyone from DARPA (I have worked with DARPA) or Intel? Have you ever worked at a research organization with millions or billions of dollars to throw at R&D? If not, how can you be so sure?

"Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets."

If that's the case, why does Ben Goertzel have a company working on AGI, the very problem you're trying to solve? Why does he actually have a design and some portions implemented, while you do not have any portions implemented? What about all the other AGI work being done, like LIDA, SOAR, and whatever Peter Voss calls his AGI project? Are all of those just misguided, even though I would imagine they hire the people who work on those projects?

Just an aside for some posters above this post who have been talking about Java as the superior choice to C++: what planet do you come from? Java is slower than C++ because of all the overheads of running the code. You are much better off with C++, or Ct, or some other language like that without all the overheads, especially since one can use OpenCL or CUDA to take advantage of the GPU for more computing power.
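For concreteness, here is a minimal CUDA sketch of the kind of GPU offload being referred to: a toy vector add, illustrative only (the kernel, names, and sizes are arbitrary choices for the example, not code from any project discussed in this thread):

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // Each GPU thread adds one pair of elements.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                      // one million elements
        const size_t bytes = n * sizeof(float);
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

        float *da, *db, *dc;                        // device (GPU) buffers
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha.data(), bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb.data(), bytes, cudaMemcpyHostToDevice);

        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n); // one thread per element

        cudaMemcpy(hc.data(), dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);               // expect 3.0

        cudaFree(da);
        cudaFree(db);
        cudaFree(dc);
        return 0;
    }

Compiled with nvcc, the additions run as roughly a million parallel GPU threads; the equivalent loop in plain Java or C++ runs serially on the CPU unless explicitly parallelized.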

Comment author: mormon2 01 December 2009 06:06:38PM 8 points [-]

Is it just me, or does this seem a bit backwards? SIAI is trying to make FAI, yet so much of its time is spent on the risks and benefits of this FAI that doesn't exist. For a task that is estimated to be so dangerous and so world-changing, would it not behoove SIAI to be the first to make FAI? If this be the case, then I am a bit confused as to the strategy SIAI is employing to accomplish the goal of FAI.

Also, if FAI is the primary goal here, then it seems to me that one should be looking not at LessWrong but at gathering people from places like Google, Intel, IBM, and DARPA... Why would you choose to pull from a predominantly amateur talent pool like LW (sorry to say, but there it is)?

Comment author: ABranco 24 November 2009 12:58:09AM *  1 point [-]

Well, ok, success might be a personal measure, so by all means only Eliezer could properly say whether Eliezer is successful. (Or at least, this is what should matter.)

Having said that, my saying he's successful was driven (biased?) by my personal standards. A positive Wikipedia article (positive not in the sense of a biased article, but in the sense that the impact described is positive; and how many people are on Wikipedia with a picture and 10 footnotes? But never mind, this is a polemical variable, so let's not split hairs here) and founding something like SIAI and LessWrong deserve my respect, and quite some awe given his 'formal education'.

Comment author: mormon2 25 November 2009 03:04:13AM 3 points [-]

I am going to take a shortcut and respond to both posts:

komponisto: Interesting, because I would define success in terms of the goals you set for yourself, or that others have set for you, and how well you have met those goals.

In terms of respect, I would question the claim, not necessarily within SIAI or within this community, but within the larger community of experts in the AI field. How many people really know who he is? How many people who need to know (because, even if he won't admit it, EY will need help from academia and industry to make FAI) know him and, more importantly, respect his opinion?

ABranco: I would not say success is a personal measure; I would say that in many ways it's defined by the culture. For example, in America I think it's fair to say that many would associate wealth and possessions with success. This may or may not be right, but it cannot be ignored.

I think your last point is on the right track, with EY starting SIAI and LessWrong despite his lack of formal education. Though one could argue about the relative significance, or the level of success, those two things indicate.

Comment author: ABranco 18 November 2009 07:28:27PM 13 points [-]

You've achieved a high level of success as a self-learner, without the aid of formal education.

Would this extrapolate into a recommendation of a path every fast-learning autodidact should follow — meaning: is it a better choice?

If not, in which scenarios would not going after formal education be more advisable for someone? (Feel free to add as many caveats and 'ifs' as necessary.)

Comment author: mormon2 23 November 2009 07:37:34PM 4 points [-]

"You've achieved a high level of success as a self-learner, without the aid of formal education."

How do you define a high level of success?

Comment author: Johnicholas 20 November 2009 05:45:05AM *  0 points [-]

Okay, here is how I think it might work - I am not a quantum computer programmer, so take my ideas with lots of salt.

Imagine a completely classical world. Imagine a bit (e.g. a coin faceup or facedown) inside of a container (e.g. a cup). Imagine a fundamental physical operation something like shaking the cup. If you don't know whether the bit is faceup or facedown, then you might imagine that inside the cup are two superimposed worlds. That is, you can imagine that the world is one possibility thin where you are, and then bubbles out to be two possibilities thin inside the cup.

When you shake the cup and then open the cup, one way to describe what happens is that the superimposed worlds "collapse", nondeterministically, into one of the possibilities. This is something like popping the bubble. Another way to describe what happens is that the bubble expands through you, splitting you, and one of the copies sees one of the possibilities, and the other copy sees the other possibility.

We can model entanglement in this classical world - imagine taking the cup-coin combination and passing it through a duplicator. You still don't know whether the coin is heads or tails, but "collapsing" one will also "magically" collapse the other.

This is all well and good, you say - but it isn't quantum computation.

My understanding here is fuzzier. As I understand it, there's an additional "imaginary" dimension in quantum computation beyond what exists in the classical possibility-worlds we've been talking about. Sometimes the bubble, or stack of possible worlds, when viewed from the outside, can show constructive or destructive interference, as if the different worlds were transparencies that one can stack and look through.
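As a concrete illustration of that interference (standard textbook qubit math, not specific to anything in this thread): start a bit in state $\lvert 0\rangle$ and apply the Hadamard transform $H$ twice:

$$
H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad
H\lvert 0\rangle = \frac{\lvert 0\rangle + \lvert 1\rangle}{\sqrt{2}},
\qquad
H\bigl(H\lvert 0\rangle\bigr) = \tfrac{1}{2}\bigl(\lvert 0\rangle + \lvert 1\rangle\bigr) + \tfrac{1}{2}\bigl(\lvert 0\rangle - \lvert 1\rangle\bigr) = \lvert 0\rangle.
$$

A classical coin "shaken" twice would still be 50/50, but here the two $\lvert 1\rangle$ amplitudes carry opposite signs and cancel exactly; that minus sign, which has no analogue in ordinary probabilities, is the extra dimension doing the work (in general the amplitudes are complex, not just signed).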

To the QC novices - does that make sense? To the QC experts - is that even roughly true?

Comment author: mormon2 20 November 2009 08:36:18AM 5 points [-]

I recommend some reading. Start with http://en.wikipedia.org/wiki/Quantum_computer and then, if you want more detail, look at http://arxiv.org/pdf/quant-ph/9812037v1; the math isn't too difficult if you are familiar with the math involved in QM (vectors, matrices, etc.). I also skimmed http://www.fxpal.com/publications/FXPAL-PR-07-396.pdf, which seems worth a read.

As to the author of the post to whom you're responding: what is your level of knowledge of quantum computing and quantum mechanics? By this I mean, is your reading on the topic confined to Scientific American and what Eliezer has written, or have you read, for example, Bohm on quantum theory?

Comment author: Vladimir_Nesov 18 November 2009 02:42:33PM *  3 points [-]

In what contexts is the action you mention worth performing? Why are "critics" a relevant concern? In my perception, normal technical science doesn't progress by criticism; it works by improving on some existing work and forgetting the rest. New developments allow us to see some old publications as uninteresting or wrong.

Comment author: mormon2 18 November 2009 04:46:59PM 3 points [-]

"In what contexts is the action you mention worth performing?"

If the paper were endorsed by the top minds who support the singularity; ideally, if it were written by them. Take, for example, Ray Kurzweil: whether you agree with him or not, he is a big voice for the singularity.

"Why are "critics" a relevant concern?"

Because technical science moves forward through peer review and the proving and disproving of hypotheses. Critics help prevent the circle-jerk phenomenon in science, assuming their critiques are well thought out, and outside review can sometimes see fatal flaws in ideas that are not caught by those who work in the field.

"In my perception, normal technical science doesn't progress by criticism, it works by improving on some of existing work and forgetting the rest. New developments allow to see some old publications as uninteresting or wrong."

Have you ever published in a peer-reviewed journal? If not, I will ignore the last portion of your post; if so, perhaps you could expound on it a bit more.
