Comment author: Vladimir_Nesov 18 November 2009 02:42:33PM *  3 points [-]

In what contexts is the action you mention worth performing? Why are "critics" a relevant concern? In my perception, normal technical science doesn't progress by criticism; it works by improving on some of the existing work and forgetting the rest. New developments allow us to see some old publications as uninteresting or wrong.

Comment author: mormon2 18 November 2009 04:46:59PM 3 points [-]

"In what contexts is the action you mention worth performing?"

If the paper were endorsed by the top minds who support the singularity, and ideally written by them. Take Ray Kurzweil, for example: whether you agree with him or not, he is a big voice for the singularity.

"Why are "critics" a relevant concern?"

Because technical science moves forward through peer review and the proving and disproving of hypotheses. Critics help prevent the circle-jerk phenomenon in science, assuming their critiques are well thought out, because outside review can sometimes see fatal flaws in ideas that are not necessarily caught by those who work in the field.

"In my perception, normal technical science doesn't progress by criticism; it works by improving on some of the existing work and forgetting the rest. New developments allow us to see some old publications as uninteresting or wrong."

Have you ever published in a peer-reviewed journal? If not, I will ignore the last portion of your post; if so, perhaps you could expound on it a bit more.

Comment author: wedrifid 15 November 2009 10:50:09AM *  0 points [-]

"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it."

"Why? If you expect to make FAI you will undoubtedly need people in the academic communities' help; unless you plan to do this whole project by yourself or with purely amateur help. ..."

That 'probably not even then' part is significant.

"That being said your best hope is to convince others that the cause is worthwhile and if that be the case you are looking at the professional and academic AI community."

Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied '1' and probably more than '0' too.

Comment author: mormon2 15 November 2009 04:59:01PM 3 points [-]

"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it.

Why? If you expect to make FAI you will undoubtedly need people in the academic communities' help; unless you plan to do this whole project by yourself or with purely amateur help. ..."

"That 'probably not even then' part is significant."

My implication was that the idea that he can create FAI completely outside the academic or professional world is ridiculous when you're speaking from an organization like SIAI, which does not have the people or money to get the job done. In fact, SIAI doesn't have enough money to pay for the computing hardware to make human-level AI.

"Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied '1' and probably more than '0' too."

If he doesn't agree with it now, I am sure he will when he runs into the problem of not having the money to build his AI, or not having enough time in the day to solve the problems that will be associated with constructing the AI. That is not even mentioning the fact that when you close yourself off to outside influence that much, you often end up with ideas riddled with problems that someone on the outside would have pointed out if they had looked at the idea.

If you have never taken an idea from idea to product, this can be hard to understand.

Comment author: Eliezer_Yudkowsky 14 November 2009 01:19:06AM 6 points [-]

You're being slightly silly. I simply don't expect them to pay any attention to me one way or another. As it stands, if e.g. Horvitz showed up and asked questions, I'd immediately direct him to http://singinst.org/AIRisk.pdf (the chapter I did for Bostrom), and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries. Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it.

FYI, I've talked with Peter Norvig a bit. He was mostly interested in the CEV / FAI-spec part of the problem - I don't think we discussed hard takeoffs much per se. I certainly wouldn't have brushed him off if he'd started asking!

Comment author: mormon2 15 November 2009 03:50:15AM 8 points [-]

"and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries."

Why? No one in the academic community would spend that much time reading all that blog material for answers that would be best given in concise form in a published academic paper. So why not spend the time? Unless you think you are that much of an expert in the field as to not need the academic community. If that be the case, where are your publications, where are your credentials, and where is the proof of this expertise (expert being a term that is applied based on actual knowledge and accomplishments)?

"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it."

Why? If you expect to make FAI you will undoubtedly need people in the academic communities' help; unless you plan to do this whole project by yourself or with purely amateur help. I think you would admit that in its current form SIAI has a 0 probability of creating FAI first. That being said your best hope is to convince others that the cause is worthwhile and if that be the case you are looking at the professional and academic AI community.

I am sorry, but I prefer to be blunt... that way there is no mistaking meanings.

Comment author: Alicorn 14 November 2009 02:37:11AM *  3 points [-]

What do you think "intelligence" is?

Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof), but that intelligence can sometimes exist in their absence; or do you claim something stronger?

Comment author: mormon2 14 November 2009 06:06:42PM 1 point [-]

"Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof)"

Couldn't have said it better myself. The only addition would be that IQ is an insufficient measure, although it can be useful when combined with accomplishment.

Comment author: Alicorn 14 November 2009 02:19:36AM 3 points [-]

"If you want to claim you're smart you have to have accomplishments that back it up right?"

I think you have confused "smart" with "accomplished", or perhaps "possessed of a suitably impressive resumé".

Comment author: mormon2 14 November 2009 02:24:39AM *  2 points [-]

No, because I don't believe in using IQ as a measure of intelligence (having taken an IQ test), and I think accomplishments are a better measure (quality over quantity, obviously). If you have a better measure, then fine.

Comment author: alyssavance 13 November 2009 07:23:49PM 2 points [-]

Of course startups sometimes lose; they certainly aren't invincible. But startups out-competing companies that are dozens or hundreds of times larger does happen with some regularity. Eg. Google in 1998.

"If you ever get out into the world you will find plenty of people who will make you feel like your dumb and that make EYs intellect look infantile."

(citation needed)

Comment author: mormon2 14 November 2009 01:56:06AM 0 points [-]

Ok, here are some people:

Nick Bostrom (http://www.nickbostrom.com/cv.pdf). Stephen Wolfram (published his first particle physics paper at 16, I think, and invented one of the most successful math programs ever, if not the most successful, and in my opinion the best ever). A couple of people whose names I won't mention, since I doubt you'd know them, from Johns Hopkins Applied Physics Lab, where I did some work. Etc.

I say this because these people have made numerous significant contributions to their fields of study. I mean real technical contributions that move the field forward, not just terms and vague to-be-solved problems.

My analysis of EY is based on having worked in AI and knowing people in AI, none of whom talk about their importance in the field as much as EY does, with as few papers and breakthroughs as EY has. If you want to claim you're smart you have to have accomplishments that back it up right? Where are EY's publications? Where is the math for his TDT? The world's hardest math problem is unlikely to be solved by someone who needs to hire someone with more depth in the field of math. (Both statements can be referenced to EY.)

Sorry this is harsh but there it is.

Comment author: alyssavance 13 November 2009 12:10:07AM 3 points [-]

"It's quite hard for one or a few people to be significantly more successfully innovative than usual, and the rest of the world is much, much bigger than SIAI."

I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren't crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it's the fate of the entire planet instead of a few million dollars for personal use.

Comment author: mormon2 13 November 2009 05:53:45PM 0 points [-]

"I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren't crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it's the fate of the entire planet instead of a few million dollars for personal use."

I don't think you really understand this. As a small startup that was recently edged out by a large corporation in a narrow field of innovation, and having been in business for many years, I can tell you that this sort of thing you're describing happens often.

As for your last statement, I am sorry, but you have not met that many intelligent people if you believe this. If you ever get out into the world you will find plenty of people who will make you feel like you're dumb and that make EY's intellect look infantile.

I might be more inclined to agree if EY would post some worked out TDT problems with the associated math. hint...hint...

Comment author: Vladimir_Nesov 11 November 2009 02:35:21PM *  6 points [-]

Which areas of science or angles of analysis currently seem relevant to the FAI problem, and which of those you've studied seem irrelevant? What about those that fall on the "AI" side of things? Fundamental math? Physics?

Comment author: mormon2 11 November 2009 05:22:28PM 3 points [-]

I think we can take a good guess on the last part of this question as to what he will say: Bayes' Theorem, statistics, basic probability theory, mathematical logic, and decision theory.

But why ask the question, given this statement made by EY: "Since you don't require all those other fields, I would like SIAI's second Research Fellow to have more mathematical breadth and depth than myself." (http://singinst.org/aboutus/opportunities/research-fellow)

My point is he has answered this question before...

I add to this my own question, though it is really more of a request: to see EY demonstrate TDT with some worked-out math, on a whiteboard or some such, in the video.
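For concreteness, the kind of worked-out math being requested might start with something as simple as the expected-value arithmetic in Newcomb's problem, a standard test case for TDT. This is only an illustrative sketch; the predictor accuracy and payoff figures below are my own assumed numbers, not anything from EY's writings:

```python
# Newcomb's problem, expected-value sketch (all numbers are illustrative
# assumptions): box A visibly holds $1,000; box B holds $1,000,000 iff a
# predictor with accuracy p foresaw that the agent would take only box B.
def expected_values(p, box_a=1_000, box_b=1_000_000):
    # One-boxer: receives box B's contents only when the predictor
    # correctly foresaw one-boxing (probability p).
    ev_one_box = p * box_b
    # Two-boxer: always keeps box A's $1,000, and gets box B's million
    # only when the predictor mispredicted (probability 1 - p).
    ev_two_box = box_a + (1 - p) * box_b
    return ev_one_box, ev_two_box

one, two = expected_values(p=0.9)
# Setting the two expressions equal gives the break-even accuracy
# p = (1 + box_a / box_b) / 2, i.e. roughly 0.5005 for these payoffs,
# above which one-boxing wins in expectation.
```

Under these assumed numbers a 90%-accurate predictor gives one-boxing an expected $900,000 versus about $101,000 for two-boxing; the substance of TDT is the argument for why this expectation, rather than causal dominance, should govern the choice.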

Comment author: mormon2 03 November 2009 12:16:24AM 16 points [-]

I was wondering if Eliezer could post some details on his current progress towards the problem of FAI? Specifically details as to where he is in the process of designing and building FAI. Also maybe some detailed technical work on TDT would be cool.

Comment author: [deleted] 02 November 2009 10:34:39AM 5 points [-]

So, I'm having one of those I-don't-want-to-go-to-school moments again. I'm in my first year at a university, and, as often happens, I feel like it's not worth my time.

As far as math goes, I feel like I could learn all the facts my classes teach on Wikipedia in a tenth of the time--though procedural knowledge is another matter, of course. I have had the occasional fun chat with a professor, but the lecture was never it.

As far as other subjects go, I think forces conspired to make me not succeed. I had a single non-math class, though it was twice the length of a normal class and officially two classes. It was about ancient Greece and Rome, and we had to read things like Works and Days and the Iliad. Afterwards, we were supposed to write a paper about depictions of society in the two works or something. I never wrote the paper, and I dropped the class.

Is school worth it for the learning? How about for the little piece of paper I get at the end?

In response to comment by [deleted] on Open Thread: November 2009
Comment author: mormon2 02 November 2009 03:56:29PM 4 points [-]

This is going to sound horrible but here goes:

In my experience, school's value depends on how smart you are. For example, if you can teach yourself math, you can often test out of classes. If you're really smart, you may be able to get out of everything but grad school. Depending on what you want to do, you may or may not need grad school.

Do you have a preferred career path? If so have you tried getting into it without further schooling? The other question is what have you done outside of school? Have you started any businesses or published papers?

With a little more detail I think the question can be better answered.
