All of mormon2's Comments + Replies

mormon2-30

"How would you act if you were Eliezer?"

If I made claims of having a TDT, I would post the math. I would publish papers. I would make sure I had accomplishments to back up the authority with which I speak. I would not spend a single second blogging about rationality. If I used a blog, it would be to discuss the current status of my AI work and to have a select group of intelligent people who could read and comment on it. If I thought FAI was that important, I would spend as much time as possible finding the best people possible to work with and ... (read more)

2Roko
A good rationalist exercise is to try to predict what those who do not adopt your position would say in response to your arguments. What criticisms do you think I will make of the statement?
4Mitchell_Porter
I can see reasons for proceeding indirectly. Eliezer is 30. He thinks his powers may decline after age 40. It's said that it takes 10 years to become expert in a subject. So if solving the problems of FAI requires modes of thought which do not come naturally, writing his book on rationality now is his one chance to find and train people appropriately. It is also possible that he makes mistakes. Eliezer and SIAI are inadequately supported and have always been inadequately supported. People do make mistakes under such conditions. If you wish to see how seriously the mainstream of AI takes the problem of Friendliness, just search the recent announcements from MIT, about a renewed AI research effort, for the part where they talk about safety issues. I have a suggestion: Offer to donate to SIAI if Eliezer can give you a satisfactory answer. (The terms of such a deal may need to be negotiated first.)
0[anonymous]
Couldn't help yourself. The remainder is a reasonable answer.
1Dustin
If that were true, that still is not the same as his stated reason. You don't have enough information to state such a thing in such a conclusive manner.
0Dustin
Disregarding the fact that deleting a top-level post is as easy as deleting a comment... how do you know this is his reason?
1Eliezer Yudkowsky
I can delete this post as easily as I can delete a comment; the same effort is involved either way. If you request it, I would be quite happy to provide experimental evidence to this effect.
-1wedrifid
Voted down for misuse of the word 'exemplar'. Close, it would show up as '1 comment below threshold', an exemplar of success for the karma system.
3Dustin
You're that sure that at this point in time you have all the information you'd ever need to make that decision?

Why? Because I ask questions whose honest answers you don't like?

The questions are fine. I think it's the repetitiveness, obvious hostility, and poor grammar and spelling that get on people's nerves.

mormon220

I am going to respond to the general overall direction of your responses.

That is feeble, and for those who don't understand why, let me explain.

Eliezer works for SIAI, a non-profit where his pay depends on donations. Many people on LW are interested in SIAI; some even donate to SIAI, and others potentially could donate. When your pay depends on convincing people that your work is worthwhile, it is always worth justifying what you are doing. This becomes even more important when it looks like you're distracted from what you are being paid to do. (If... (read more)

2Kaj_Sotala
Rationality is the art of not screwing up - seeing what is there instead of what you want to see, or are evolutionarily susceptible to seeing. When working on a task that may have (literally) earth-shattering consequences, there may not be a skill that's more important. Getting people educated about rationality is of prime importance for FAI.
4Zack_M_Davis
Even on the margin? There are already lots of standard textbooks and curricula for mathematics and computer science, whereas I'm not aware of anything else that fills the function of Less Wrong.
3Eliezer Yudkowsky
If you were previously a donor to SIAI, I'll be happy to answer you elsewhere. If not, I am not interested in what you think SIAI donors think. Given your other behavior, I'm also not interested in any statements on your part that you might donate if only circumstances were X. Experience tells me better.

As for mormon1... also coincidental.

Bullshit. Note, if the names aren't evidence enough, the same misspelling of "namby-pamby" here and here.

I propose banning.

mormon210

How am I a troll? Did I not make a valid point? Have I not made other valid points? You may disagree with how I say something, but that in no way labels me a troll.

The intention of my comment was to find out what the hope for EY's FAI goals here is based on. I was trying to make the point, with the zero, zilch idea, that the faith in EY making FAI is essentially blind faith.

7wedrifid
I am not sure who here has faith in EY making FAI. In fact, I don't even recall EY claiming a high probability of such a success.
7Zack_M_Davis
I'm not so sure. You don't seem to be being downvoted for criticizing Eliezer's strategy or sparse publication record: you got upvoted earlier, as did CronoDAS for making similar points. But the hostile and belligerent tone of many of your comments does come off as kind of, well, trollish. Incidentally, I can't help but notice that the subject and style of your writing are remarkably similar to those of DS3618. Is that just a coincidence?
mormon2-20

"As a curiosity, having one defector in a group who is visibly socially penalized is actually a positive influence on those who witness it (as distinct from having a significant minority, which is a negative influence.) I expect this to be particularly the case when the troll is unable to invoke a similarly childish response."

Wow, I say one negative thing and all of a sudden I am a troll.

Let's consider the argument behind my comment:

Premises: Has EY ever constructed AI of any form: FAI, AGI, or narrow AI? Does EY have any degrees in any relevant fie... (read more)

9wedrifid
Nobody has done the first two (fortunately). I am not sure if he has created a narrow AI. I have; it took me a few years to realise that the whole subfield I was working in was utter bullshit. I don't disrespect anyone else for reaching the same conclusion. He can borrow mine. I don't need to make any paper planes any time soon, and I have found ways to earn cash without earning the approval of any HR guys. No. He probably lacks the humility. Apart from that, probably yes, if you gave him a year. There are experts in FAI? I would like to see some of those. Not the algorithm-rich ones (that'd be a bad sign indeed), but the math ones certainly. I'm not sure I would be comfortable with your definition of 'rich' either. No. No. Both relevant.
3wedrifid
Because LaTeX has already been done. Zero, zilch, none, and zip are not probabilities, but the one I would assign is rather low. (Here is where 'shut up and do the impossible' fits in.) PS: Is it acceptable to respond to trolls when the post is voted up to (2 - my vote)?
mormon2-20

Thank you, that's all I wanted to know. You don't have any math for TDT. TDT is just an idea, and that's it, just like the rest of your AI work. It's nothing more than nambi-pambi philosophical mumbo-jumbo... Well, I will spend my time reading people who have a chance of creating AGI or FAI, and it's not you...

To sum up, you have nothing but some ideas for FAI, no theory, no math, and the best defense you have is that you don't care about the academic community. The other key one is that you are the only person smart enough to make and understand FAI. This delusion is ... (read more)

The problem is that even if nothing "impressive" is available at SIAI, there is no other source where something is. Nada. The only way to improve this situation is to work on the problem. Criticism would be constructive if you suggested a method of improving the situation, e.g. organizing a new team that is more likely to get to FAI than SIAI. Merely arguing about status won't help to solve the problem.

You keep ignoring the distinction between AGI and FAI, which doesn't add sanity to this conversation. You may disagree that there is ... (read more)

mormon200

Ok, opinions on the relative merits of the AGI projects mentioned aside, you did not answer my first question, the question I am actually most interested in the answer to, which is: where is the technical work? I was looking for some detail as to what part of step one you are working on. So if TDT is important to your FAI, then how is the math coming? Are you updating LOGI, or are you discarding it and doing it all over?

"The arc of Less Wrong read start to finish should be sufficient for an intelligent person to discard existing AGI projects - once... (read more)

3Vladimir_Nesov
The hypothesis is that yes, they won't work as steps towards FAI. Worse, they might actually backfire. And FAI progress is not as "impressive". What do you expect should be done, given this conclusion? Continue running to the abyss, just for the sake of preserving appearance of productivity?
3Nick_Tarleton
Truth-seeking is not about fairness.
0wedrifid
For this analogy to hold there would need to be an existing complete theory of AGI. (There would also need to be something in the theory or proposed application analogous to "hey! We should make a black hole just outside our solar system because black holes are like way cool and powerful and stuff!") These are good questions. Particularly the TDT one. Even if the answer happened to be "not that important".
5wedrifid
Really, we get it. We don't have automated signatures on this system but we can all pretend that this is included in yours. All this serves is to create a jarring discord between the quality of your claims and your presumption of status.
mormon240

"That's my end of the problem."

Ok, so where are you in the process? Where is the math for TDT? Where is the updated version of LOGI?

"Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate."

So tell me, have you worked with anyone from DARPA (I have worked with DARPA... (read more)

1DanArmak
Those damnable overheads. Assembly language FTW!
3[anonymous]
A fast programming language is the last thing we need. Literally--when you're trying to create a Friendly AI, compiling it and optimizing it and stuff is probably the very last step. (Yes, I did try to phrase the latter half of that in such a way to make the former half seem true, for the sake of rhetoric.)
0wedrifid
A world in which a segfault in an FAI could end it.
3Vladimir_Nesov
He is solving a wrong problem (i.e. he is working towards destroying the world), but that's completely tangential.

Goertzel, Voss, and similar folks are not working on the FAI problem. They're working on the AGI problem. Contrary to what Goertzel, Voss, and similar folks find most convenient to believe, these two problems are not on the same planet or even in the same galaxy.

I shall also be quite surprised if Goertzel's or Voss's project yields AGI. Code is easy. Code that is actually generally intelligent is hard. Step One is knowing which code to write. It's futile to go on to Step Two until finishing Step One. If anyone tries to tell you otherwise, bear in mi... (read more)

mormon280

Is it just me, or does this seem a bit backwards? SIAI is trying to make FAI, yet so much of its time is spent on the risks and benefits of this FAI that doesn't exist. For a task that is estimated to be so dangerous and so world-changing, would it not behoove SIAI to be the first to make FAI? If that is the case, then I am a bit confused as to the strategy SIAI is employing to accomplish the goal of FAI.

Also, if FAI is the primary goal here, then it seems to me that one should be looking not at Less Wrong but at gathering people from places like Google, Intel, IBM, and DARPA... Why would you choose to pull from a predominantly amateur talent pool like LW (sorry to say it, but there it is)?

4Eliezer Yudkowsky
That's my end of the problem. Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. The professional skills I need don't exist, and so I look for proof of relevant talent and learning rate. Most people who consider this problem do not realize the degree to which it is sheerly impossible to put up a job ad for the skills you need. LW itself is probably as close as it gets.
5Roko
Speak for yourself ;-)
8Tyrrell_McAllister
I think that you answered your own question. One way to develop FAI is to attract talented people such as those at Google, etc. One way to draw such people is to convince them that FAI is worth their time. One way to convince them that FAI is worth their time is to lay out strong arguments for the risks and benefits of FAI.
mormon230

I am going to take a shortcut and respond to both posts:

komponisto: Interesting, because I would define success in terms of the goals you set for yourself, or that others have set for you, and how well you have met those goals.

In terms of respect, I would question the claim, not within SIAI or within this community necessarily, but within the larger community of experts in the AI field. How many people really know who he is? How many people who need to know (because, even if he won't admit it, EY will need help from academia and industry to make FAI) know him and m... (read more)

mormon240

"You've achieved a high level of success as a self-learner, without the aid of formal education."

How do you define a high level of success?

3komponisto
He has a job where he is respected, gets to pursue his own interests, and doesn't have anybody looking over his shoulder on a daily basis (or any short-timescale mandatory duties at all that I can detect). That's pretty much the trifecta, IMHO.
2ABranco
Well, ok, success might be a personal measure, so by all means only Eliezer could properly say whether Eliezer is successful. (Or at least, this is what should matter.) Having said that, my saying he's successful was driven (biased?) by my personal standards. A positive Wikipedia article (positive not in the sense of a biased article, but in the sense that the impact described is positive; how many people are in Wikipedia with a picture and 10 footnotes? — but never mind, this is a polemic variable, so let's not split hairs here) and founding something like SIAI and LessWrong deserve my respect, and quite some awe given his 'formal education'.
mormon260

I recommend some reading. Start with http://en.wikipedia.org/wiki/Quantum_computer, and then if you want more detail look at http://arxiv.org/pdf/quant-ph/9812037v1. The math isn't too difficult if you are familiar with the math involved in QM: things like vectors and matrices, etc. I skimmed http://www.fxpal.com/publications/FXPAL-PR-07-396.pdf; it seems worth a read.
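A minimal sketch of the sort of vector-and-matrix math meant here (an illustrative example, not taken from the linked papers): a single qubit is a unit vector in C^2, and a gate is a unitary matrix acting on it, e.g. the Hadamard gate H.

% A qubit is a unit vector in C^2; a quantum gate is a unitary matrix acting on it.
\[
|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad
H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad
H|0\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{|0\rangle + |1\rangle}{\sqrt{2}}.
\]

Quantum algorithms are, at bottom, long sequences of such matrix multiplications chosen so that the amplitudes of wrong answers cancel out.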

As to the author of the post to whom you're responding: what is your level of knowledge of quantum computing and quantum mechanics? By this I mean, is your reading on the topic confined ... (read more)

0pre
Vague grasp of what the maths is supposed to do, without ever having actually worked through most of it. More than just SA and Eliezer, but mostly pretty much around that level. The trouble with the explore-and-prune way of describing these things is that it automatically makes people fall into speculation on what's doing the choosing, how maybe 'consciousness' is picking the 'best' of the results and shaping the universe. I understand enough to know it ain't that, and that the maths tells us the probabilities of the outcomes; there's no 3rd party 'picking' the one most advantageous to 'em. But it's hard to get people to understand that without a good intuitive picture of what's really going on; it just seemed to me that the problem was probably the 'collapse-like' system which everyone seems to fall back on when trying to produce this intuitive picture. Personally I should probably work through the maths at some point. It's on the list. The list is long, though, and I have a goddamned job, so I never seem to get proper time for stuff. Not sure that having done that would help to convince people who certainly won't be working through the numbers that there's no special consciousness effect going on, though.
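A minimal worked example of that point (an illustrative sketch, not taken from the comment above): for a general single-qubit state, the outcome probabilities are fixed entirely by the amplitudes, leaving nothing for any outside 'chooser' to decide.

% Born rule: outcome probabilities are the squared magnitudes of the amplitudes.
\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \qquad
P(0) = |\alpha|^2, \quad P(1) = |\beta|^2.
\]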
mormon230

"In what contexts is the action you mention worth performing?"

If the paper were endorsed by the top minds who support the singularity. Ideally, if it were written by them. So, for example, Ray Kurzweil: whether you agree with him or not, he is a big voice for the singularity.

"Why are "critics" a relevant concern?"

Because technical science moves forward through peer review and the proving and disproving of hypotheses. Critics help prevent the circle-jerk phenomenon in science, assuming their critiques are well thought out. Becaus... (read more)

1Vladimir_Nesov
The actual experience of publishing a paper hardly adds anything that can't be understood without doing so. Peer review is not about "critics" responding to endorsement by well-known figures; it's quality control (with whatever failings it may carry), and not a point where written-up public criticisms originate. Science builds on what's published, not on what gets rejected by peer review, and what's published can be read by all.
mormon230

"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it.

Why? If you expect to make FAI, you will undoubtedly need the academic community's help, unless you plan to do this whole project by yourself or with purely amateur help. ...

"That 'probably not even then' part is significant."

My implication was that the idea that he can create FAI completely outside the academic... (read more)

9Eliezer Yudkowsky
And so the utter difference of working assumptions is revealed.
0wedrifid
I have. I've also failed to take other ideas to products and so agree with that part of your position, just not the argument as it relates to context.
mormon2100

"and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries."

Why? No one in the academic community would spend that much time reading all that blog material for answers that would be best given in concise form in a published academic paper. So why not spend the time? Unless you think you are that much of an expert in the field as to not need the academic community. If that is the case, where are your publications, where are your credentials, where is the proof of this expertise (expe... (read more)

2Alicorn
No.
0wedrifid
That 'probably not even then' part is significant. Now that is an interesting question. To what extent would Eliezer say that conclusion followed? Certainly less than the implied '1' and probably more than '0' too.
mormon210

"Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof)"

Couldn't have said it better myself. The only addition would be that IQ is an insufficient measure, although it can be useful when combined with accomplishment.

mormon220

No, because I don't believe in using IQ as a measure of intelligence (having taken an IQ test), and I think accomplishments are a better measure (quality over quantity, obviously). If you have a better measure, then fine.

0wedrifid
I once came third in a marathon. How smart am I? If I increased my mileage to the level required for me to come first, would that make me smarter? Does the same apply when I'm trying to walk in 40 years? ETA: I thought I cancelled this one. Never mind, I stand by my point. Achievement is the best predictor of future achievement. It isn't a particularly good measure of intelligence. Achievement shows far more about what kind of things someone is inclined to achieve (and signal), and how well they are able to motivate themselves, than it does about intelligence (see, for example, every second page here). Accomplishments are better measures than IQ, but they are not a measure of intelligence at all.
3Alicorn
What do you think "intelligence" is? Do you think that accomplishments, when present, are fairly accurate proof of intelligence (and that you are skeptical of claims thereto without said proof), but that intelligence can sometimes exist in their absence; or do you claim something stronger?
mormon200

Ok, here are some people:

Nick Bostrom (http://www.nickbostrom.com/cv.pdf); Stephen Wolfram (published his first particle physics paper at 16, I think, and invented one of, if not the, most successful math programs ever, and in my opinion the best ever); a couple of people from Johns Hopkins Applied Physics Lab, where I did some work, whose names I won't mention since I doubt you'd know them; etc.

I say this because these people have numerous significant contributions to their fields of study. I mean real technical contributions that move the field forward, not just terms... (read more)

0alyssavance
I agree that both Bostrom and Wolfram are very smart, but this does not a convincing case make. Even someone at the 99.9999th percentile of intelligence will have about 6,800 people who are as smart as or smarter than they are.
3Alicorn
I think you have confused "smart" with "accomplished", or perhaps "possessed of a suitably impressive resumé".
mormon200

"I would heavily dispute this. Startups with 1-5 people routinely out-compete the rest of the world in narrow domains. Eg., Reddit was built and run by only four people, and they weren't crushed by Google, which has 20,000 employees. Eliezer is also much smarter than most startup founders, and he cares a lot more too, since it's the fate of the entire planet instead of a few million dollars for personal use."

I don't think you really understand this; having recently been edged out by a large corporation in a narrow field of innovation, as a small ... (read more)

2alyssavance
Of course startups sometimes lose; they certainly aren't invincible. But startups out-competing companies that are dozens or hundreds of times larger does happen with some regularity. Eg. Google in 1998. "If you ever get out into the world you will find plenty of people who will make you feel like your dumb and that make EYs intellect look infantile." (citation needed)
mormon230

I think we can take a good guess, on the last part of this question, at what he will say: Bayes' Theorem, Statistics, basic Probability Theory, Mathematical Logic, and Decision Theory.

But why ask the question, given this statement made by EY: "Since you don't require all those other fields, I would like SIAI's second Research Fellow to have more mathematical breadth and depth than myself." (http://singinst.org/aboutus/opportunities/research-fellow)

My point is he has answered this question before...

I add to this my own question; actually, it is more of a ... (read more)

mormon260

Ok, I am going to reply to both soreff and Thomas:

Particle physics isn't about making technology, at least at the moment. Particle physics is concerned with understanding the fundamental elements of our world. As for the details of the relevance of particle physics, I won't waste the time explaining. Obviously neither of you has any real experience in the field. So this concludes the comments I am going to make on this topic, until someone with real physics knowledge decides to comment.

mormon2160

I was wondering if Eliezer could post some details on his current progress towards the problem of FAI, specifically where he is in the process of designing and building FAI. Also, maybe some detailed technical work on TDT would be cool.

8cousin_it
This email by Eliezer from 2006 addresses your question about FAI. I'm extremely skeptical that he has accomplished or will accomplish anything at all in that direction, but if he does, we shouldn't expect the intermediate results to be openly published, because half of a friendly AI is a complete unfriendly AI.
mormon260

What? Who voted this up?

"It is also quite possible that the Higgs boson will come out and it will be utterly useless, as most of those particles are."

So understanding the sub-atomic level for things like nano-scale technology is, in your books, a complete waste of time? Understanding the universe, I can only assume, is also a waste of time, since the discovery of the Higgs boson is, in your books, essentially meaningless in all probability.

"You can't do a thing with them and they don't tell you very much. Of course, the euphoria will be massive.&q... (read more)

2Thomas
No atto-tech in sight, no use for already discovered particles, and you are telling me how valuable the Higgs boson will be. Not only you, but the whole CERN-affiliated community and most of the media. I remain skeptical, if you don't mind.
mormon240

This is going to sound horrible but here goes:

In my experience, a school's value depends on how smart you are. For example, if you can teach yourself math, you can often test out of classes. If you're really smart, you may be able to get out of everything but grad school. Depending on what you want to do, you may or may not need grad school.

Do you have a preferred career path? If so, have you tried getting into it without further schooling? The other question is: what have you done outside of school? Have you started any businesses or published papers?

With a little more detail I think the question can be better answered.

mormon290

I apologize if this is blunt or already addressed, but it seems to me that the voting system here has a large user-base problem. It seems to me that the karma system has become nothing more than a popularity indicator.

It seems to me that many here vote up or down based on some gut-level agreement or disagreement with the comment or post. For example, it is very troubling that some single-line comments of agreement that should, in my opinion, have 0 karma end up with massive amounts, while comments that may be in opposition to the popular beliefs here are voted ... (read more)

1RobinZ
I think you're on to something - many commenters (myself included) probably vote based more on agreement or disagreement than on anything else, and this necessarily reinforces the groupthink. If we wanted to fix it, the way to go would be to define standard rules for upvoting and downvoting which reduced the impact of opinion. It cannot be eliminated entirely - if someone says something stupid, for example, that should not be rewarded - but a set of clear guidelines could change the karma meter from a popularity score into a filter sorting out the material worth paying attention to. I think a well-thought-out proposal of such a method could make a reasonable top-level post.
mormon240

True, but the Blue Brain project is still very interesting, and it is providing, and hopefully will continue to provide, interesting results. Whether you agree with his theory or not, the technical side of what they are doing is very interesting.

mormon240

"Articles should be legible to the audience. You can't just throw in a position written in terms that require special knowledge not possessed by the readers. It may be interesting, but then the goal should be exposition, showing importance and encouraging study."

I both agree and disagree with this statement. I agree that a post should be written for the audience. I disagree in that I think people here spend a lot of time talking about QM, and if they do not have the knowledge to understand this post, then they should not be talking about QM. T... (read more)

4Vladimir_Nesov
Maybe they shouldn't (but not because they can't understand this post).
-1Mitchell_Porter
As I've been saying, I mean pseudo-Leibnizian monads (pseudo because unlike Leibniz's, they can interact), not computer-science monads.
mormon250

Am I the only one who is reminded of game theory reading this post? In fact, it basically sounds like: given a set of agents engaged in competitive behavior, how does "information" (however you define it; I think others are right to ask for clarification) affect the likely outcome? Though I am confused by the overly simple military examples; I wonder if one could find a simpler system to use. I am also confused about what general principles you want to find with this system of derived inequalities.

mormon220

"TDT is very much a partial solution, a solution-fragment rather than anything complete. After all, if you had the complete decision process, you could run it as an AI, and I'd be coding it up right now."

I must nitpick here:

First, you say TDT is an unfinished solution, but from all the stuff that you have posted, there is no evidence that TDT is anything more than a vague idea; is this the case? If not, could you post some math and example problems for TDT?

Second, I hope this was said in haste and not in complete seriousness, that if TDT were complete you... (read more)