My friend, hearing me recount tales of LessWrong, recently asked me if I thought it was simply a coincidence that so many LessWrong rationality nerds cared so much about creating Friendly AI. "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?"

Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.

Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.

[-]gwern12y540

Let's take the outside view for a second. After all, if you want to save the planet from AIs, you have to do a lot of thinking! You have to learn all sorts of stuff and prove it and just generally solve a lot of eye-crossing philosophy problems which just read like slippery bullshit. But if you want to save the planet from asteroids, you can conveniently do the whole thing without ever leaving your own field and applying all the existing engineering and astronomy techniques. Why, you even found a justification for NASA continuing to exist (and larding out pork all over the country) and better yet, for the nuclear weapons program to be funded even more (after all, what do you think you'll be doing when the Shuttle gets there?).

Obviously, this isn't any sort of proof that anti-asteroid programs are worthless self-interested rent-seeking government pork.

But it sure does seem suspicious that continuing business as usual to the tune of billions can save the entire species from certain doom.

8aaronsw12y
Yes, I agree that if a politician or government official tells you the most effective thing you can do to prevent asteroids from destroying the planet is "keep NASA at current funding levels and increase funding for nuclear weapons research" then you should be very suspicious.
[-]gwern12y350

I think you're missing the point; I actually do think NASA is one of the best organizations to handle anti-asteroid missions and nukes are a vital tool since the more gradual techniques may well take more time than we have.

Your application of cynicism proves everything, and so proves nothing. Every strategy can be - rightly - pointed out to benefit some group and disadvantage some other group.

The only time this wouldn't apply is if someone claimed a particular risk was higher than estimated but was doing absolutely nothing about it whatsoever, and so couldn't benefit from attempts to address it. And in that case, one would be vastly more justified in discounting them, because they themselves don't seem to actually believe it, rather than believing them because this particular use of Outside View doesn't penalize them.

(Or to put it another more philosophical way: what sort of agent believes that X is a valuable problem to work on, and also doesn't believe that whatever Y approach he is taking is the best approach for him to be taking? One can of course believe that there are better approaches for other people - 'if I were a mathematical genius, I could be making more progress on FAI t... (read more)

My suspicion isn't because the recommended strategy has some benefits, it's because it has no costs. It would not be surprising if an asteroid-prevention plan used NASA and nukes. It would be surprising if it didn't require us to do anything particularly hard. What's suspicious about SIAI is how often their strategic goals happen to be exactly the things you might suspect the people involved would enjoy doing anyway (e.g. writing blog posts promoting their ideas) instead of difficult things at which they might conspicuously fail.

7MichaelVassar12y
FHI, for what it's worth, does say that simulation shutdown is underestimated but doesn't suggest doing anything.
4pleeppleep12y
To be fair though, a lot of us would learn the tricky philosophy stuff anyway just because it seems interesting. It is pretty possible that our obsession with FAI stems partially from the fact that the steps needed to solve such a problem appeal to us. Not to say that FAI isn't EXTREMELY important by its own merits, but there are a number of existential risks that pose relatively similar threat levels that we don't talk about night and day.
6MichaelVassar12y
My actual take is that UFAI is actually a much larger threat than other existential risks, but also that working on FAI is fairly obviously the chosen path, not on EV grounds, but on the grounds of matching our skills and interests.
2IlyaShpitser12y
"But it sure does seem suspicious that continuing business as usual can save the entire species from certain doom." Doesn't this sentence apply here? What exactly is this community doing that's so unusual (other than giving EY money)? ---------------------------------------- The frame of "saving humanity from certain doom" seems to serve little point other than a cynical way of getting certain varieties of young people excited.
4MichaelVassar12y
As far as I can tell, SI long ago started avoiding that frame because the frame had deleterious effects, but if we wanted to excite anyone, it was ourselves, not other young people.
1gwern12y
Exploring many unusual and controversial ideas? Certainly we get criticized for focusing on things like FAI often enough; it should at least be true!
3IlyaShpitser12y
Saying that you save the world by exploring many unusual and controversial ideas is like saying you save the world by eating ice cream and playing video games.
2Simon Fischer12y
Isn't "exploring many unusual and controversial ideas" what scientists usually do? (Ok, maybe sometimes good scientist do it...) Don't you think that science could contribute to saving the world?
4IlyaShpitser12y
What I am saying is "exploring unusual and controversial ideas" is the fun part of science (along with a whole lot of drudgery). You don't get points for doing fun things you would rather be doing anyways.

Actually, I think you get points for doing things that work, whether they are fun or not.

-1TimS12y
Some of the potentially useful soft sciences research is controversial. But essentially no hard sciences research is both (a) controversial and (b) likely to contribute massive improvement in human well-being. Even something like researching the next generation of nuclear power plants is controversial only in the sense that all funding of basic research is "controversial."
-6Decius12y
0[anonymous]12y
The enormous problem with philosophy problems is this. Philosophy fails a lot, historically. Fails terribly.
[-]Emile12y280

I agree with the gist of this (Robin Hanson expressed similar worries), though it's a bit of a caricature. For example:

people who really like to spend their time arguing about ideas on the Internet have managed to persuade themselves that they can save the entire species from certain doom just by arguing about ideas on the Internet

... is a bit unfair; I don't think most SIAI folk consider "arguing about ideas on the Internet" to be of much help except for recruitment, raising funds, and occasionally solving specific technical problems (like some decision theory stuff). It's just that the "arguing about ideas on the Internet" is a bit more prominent because, well, it's on the Internet :)

Eliezer, specifically, doesn't seem to do much arguing on the internet, though he did do a good deal of explaining his ideas on the Internet, which more thinkers should do. And I don't think many of us folks who chat about interesting things on LessWrong are under any illusion that doing so is Helping Save Mankind From Impending Doom.

2aaronsw12y
Yes, "arguing about ideas on the Internet" is a shorthand for avoiding confrontations with reality (including avoiding difficult engineering problems, avoiding experimental tests of your ideas, etc.).
-3[anonymous]12y
May I refer you to AIXI, a potential design for GAI that was fleshed out mathematically by these AI researchers to the point where they could prove it would kill off everyone? If that isn't engineering, then what is programming (writing math that computers understand)?

that was, by these AI researchers, fleshed out mathematically

This was Hutter, Schmidhuber, and so forth. Not anyone at SI.

fleshed out mathematically to the point where they could prove it would kill off everyone?

No one has offered a proof of what real-world embedded AIXI implementations would do. The informal argument that AIXI would accept a "delusion box" to give itself maximal sensory reward was made by Eliezer a while ago, and convinced the AIXI originators. But the first (trivial) formal proofs related to that were made by some other researchers (I think former students of the AIXI originators) and presented at AGI-11.

BTW, I believe Carl is talking about Ring & Orseau's Delusion, Survival, and Intelligent Agents.

4CarlShulman12y
Yes, thanks.
1Alexandros12y
So if I read correctly, someone at SI (Eliezer, even) had an original insight into cutting-edge AGI research, one strong enough to be accepted by other cutting-edge AGI researchers, and instead of publishing a proof of it, which was trivial, simply gave it away and some students finally proved it? Or were the discoveries independent? Because if it's the first, SI let a huge, track-record-building accomplishment slip through its hands. A paper like that alone would do a lot to answer Holden's criticism.
7CarlShulman12y
I'm not sure. If they were connected, it was probably by way of the grapevine via the Schmidhuber/Hutter labs. Meh, people wouldn't have called it huge, and it isn't, particularly. It would have signaled some positive things, but not much.
5timtyler12y
Surely Hutter was aware of this issue back in 2003: http://www.hutter1.net/ai/aixigentle.pdf

Aaron, I currently place you in the category of "unconstructive critic of SI" (there are constructive critics). Unlike some unconstructive critics, I think you're capable of more, but I'm finding it a little hard to pin down what your criticisms are, even though you've now made three top-level posts and every one of them has contained some criticism of SI or Eliezer for not being fully rational.

Something else that they have in common is that none of them just says "SI is doing this wrong". The current post says "Here is my cynical explanation for why SI is doing this thing that I say is wrong". (Robin Hanson sometimes does this - introduces a new idea, then jumps to "cynical" conclusions about humanity because they haven't already thought of the idea and adopted it - and it's very annoying.) The other two posts introduce the criticisms in the guise of offering general advice on how to be rational: "Here is a rationality mistake that people make; by coincidence, my major example involves the founder of the rationality website where I'm posting this advice."

I suggest, first of all, that if your objective on this site is to give advi... (read more)

Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.

For what it's worth, I would personally be much happier if I didn't have to worry about FAI and could just do stuff that I found the most enjoyable. I also don't think that the work I do for SI has a very high chance of actually saving the world, though it's better than doing nothing.

I do consider the Singularity Institute a great employer, though, and it provided me a source of income at a time when I was desperately starting to need one. But that happened long after I'd already developed an interest in these matters.

Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

What is the "outside view" on how much of an existential risk asteroids are? You know, the one you get when you look at how often asteroid impacts at or near the level that can cause mass extinctions happen? Answer: very damn low.

"The Outside View" isn't just a slogan you can chant to automatically win an argument. Despite the observational evidence from common usage the phrase doesn't mean "Wow! You guys who disagree with me are nerds. Sophisticated people think like I do. If you want to be cool you should agree with me to". No, you actually have to look at... (read more)

7DaFranker12y
@aaronsw: I'd like to reinforce this point. If it isn't hard work, please point us all at whatever solution any random mathematician and/or programmer could come up with on how to concretely implement Löb's Theorem within an AI to self-prove that a modification will not cause systematic breakdown or change the AI's behavior in an unexpected (most likely fatal to the human race, if you randomize through all conceptspace for possible eventualities, which is very much the best guess we have at the current state of research) manner. I've yet to see any example of such an application to a level anywhere near this complex in any field of physics, computing or philosophy.

Or maybe you could, instead, prove that there exists Method X that is optimal for the future of the human race which guarantees that for all possible subsets of "future humans", there exists no possible subsets which contain any human matching the condition "sufficiently irrational yet competent to build the most dangerous form of AI possible".

I mean, I for one find all this stuff about provability theory way too complicated. Please show us the easy-work stay-in-bed version, if you're so sure that that's all there is to it. You must have a lot of evidence to be this confident. All I've seen so far is "I'm being skeptic, also I might have evidence that I'm not telling you, so X is wrong and Y must be true!"
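As a rough reminder of what is being asked for here (the standard statement of the theorem, not DaFranker's own formulation): in the modal notation of provability logic, Löb's Theorem says

    \[ \Box\,(\Box P \rightarrow P) \;\rightarrow\; \Box P \]

that is, if a formal system can prove "if P is provable then P holds", it can already prove P outright. The usual gloss on why this bites a self-modifying agent: an agent that tries to prove "anything my successor's proof system proves is true" (the schema □P → P for arbitrary P) would, by the theorem, thereby prove every P, so naive self-trust in one's own proofs is ruled out.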

Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further.

This is an unsubstantiated assertion presented in the form of something that should be conclusive. This is bizarre, since the SIAI position on Tool AI is not a particularly weak point in the SIAI position and the referenced conversation doesn't indicate any withdrawal from reality.

The actual people at SIAI are much less prone to this than the community.

When I was living in San Francisco, people would regularly discuss various experiments that they were running on themselves, or skills that they were practicing. If I tried to assert something without concrete examples or predictions, people would be skeptical.

OK, I believe we have more than enough information to consider him identified now:

  • Dmytry
  • private_messaging
  • JaneQ
  • Comment
  • Shrink
  • All_work_and_no_play

Those are the currently known sockpuppets of Dmytry. This one warrants no further benefit of the doubt. It is a known troll wilfully abusing the system. To put it mildly, this is something I would prefer not to see encouraged.

[-]gwern12y140

I agree. Dmytry was OK; private_messaging was borderline, but he did admit to it and I'm loath to support the banning of a critical person who is above the level of profanity and does occasionally make good points; JaneQ was unacceptable, but starting Comment after JaneQ was found out is even more unacceptable. Especially when none of the accounts were banned in the first place! (Were this Wikipedia, I don't think anyone would have any doubts about how to deal with an editor abusing multiple socks.)

9wedrifid12y
Absolutely, and he also stopped using Dmytry. My sockpuppet aversion doesn't necessarily have a problem with abandoning one identity (for reasons such as the identity being humiliated) and working to establish a new one. Private_messaging earned a "Do Not Feed!" tag itself through consistent trolling but that's a whole different issue to sockpuppet abuse. And even used in the same argument as his other account, with them supporting each other!
5Kawoomba12y
What does it matter what his motives are, ulterior (trolling) as they may be, as long as he raises salient points and/or provides at least thought-provoking insights with an acceptable ratio? If I were to try to construct some repertoire model of him (e.g. signalling intellectual superiority by contradicting the alphas, seems like a standard contrarian mindset), it might be a good match. But frankly: why care? His points should stand or fall on their own merit, regardless of why he chose to make them.

He raised some excellent points regarding e.g. Solomonoff induction that I've yet to see answered (e.g. accepting simple models with assumed noise over complex models with assumed less noise, given the enormously punishing discounting for length that may only work out in theoretical complexity class calculations and Monte Carlo approximations with a trivial solution), and while this is a CS-dominated audience, additional math proficiency should be highly sought after -- especially for contrarians, since it makes their criticisms that much more valuable. Is he a consistent fountain of wisdom? No. Is anyone?

I will not defend sockpuppet abuse here, though; that's a different issue and one I can get behind. Don't take this comment personally; the sentiment was spawned from when he just had 2 known accounts but was already met with high levels of "do not feed!", and your comment just now seemed as good a place as any to voice it.
6Wei Dai12y
Can you link to the original post or comment? Your restatement of whatever he wrote is not making much sense to me.
2Kawoomba12y
Well, there is definitely some sort of a Will Newsome-like projection technique going on, i.e. his comments - those that are on topic - are sometimes sufficiently opaque that the insight is generated by the reader filling in the gaps meaningfully. The example I used was somewhat implicit in this comment: The universal prior discount for length is so severe (a description just 20 bits longer = 2^20 discounting, and what can you even say with 20 bits?) that this quote from Shane Legg's paper comes as little surprise: If the hypotheses allowed for some margin of error when checking for the shortest programs (and they should when applied across a map-territory divide), it might very well stop at such a crackpot program that assumes all the mismatch may just be errors in the sense data.

How well does that argument hold up to challenges? I'm not sure; I haven't thought AIXI through sufficiently when taking into account the map-territory divide. But it sure is worthy of further consideration, which it did not get.

Here are some other comments that come to mind: This comment of his, which I interpreted to essentially refer to what I explained in my answering comment. There's a variation of that point in this comment, third paragraph. He also linked to this marvelous presentation by Marcus Hutter in another comment, which (the presentation) unfortunately did not get the attention it clearly deserves. There are comments I don't quite understand on first reading, but which clearly go into the actual meat of the topic, which is a good direction.

My perspective is this: As long as he provides posts like those over a period of just a few weeks, I do not care about his destructive attitude, or his interspersed troll comments. That which can be killed by truth should be; this aphorism still holds true for me when substituting "meaningful argument" for "truth". Those deserve answers, not ignoring, regardless of their source.
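To spell out the bookkeeping behind that length penalty (the standard Solomonoff setup, not anything specific to his comment): the universal prior weights each program p for a fixed universal machine U by 2^(-length(p)), so the prior probability of a string x is

    \[ M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)} \]

A hypothesis whose shortest description is 20 bits longer therefore starts with a handicap of 2^20 (about a million to one), and one a megabit longer starts 2^1,000,000 behind; whatever extra predictive accuracy the longer model buys has to claw back that factor through its likelihoods before it can dominate the posterior.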
4Wei Dai12y
It looks to me like you're reading your own interpretation into what he wrote, because the sentence he wrote before "You end up with" is clearly talking about another issue. I can give my views on both if you're interested.

On the issue private_messaging raises, I think it's a serious philosophical problem, but not necessarily a practical one (as he claims), assuming Solomonoff Induction could be made practical in the first place, because the hypothetical AI could quickly update away even a factor of 2^1000 when it turns on its senses, before it has a chance to make any important wrong decisions. private_messaging seems to have strong intuitions that it will be a practical problem, but he tends to be overconfident in many areas so I don't trust that too much.

On the issue you raised, a hypothesis of "simple model + random errors" must still match the past history perfectly to not be discarded, and the exact errors would have to be part of the hypothesis (i.e., program) and therefore count towards its length.

I defended private_messaging/Dmytry before for similar reasons, but the problem is that it's often not fun to argue with him. I do engage with him sometimes if I think I can draw out some additional insights or get him to clarify something, but now I tend not to respond just to correct something that I think is wrong.
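To put a toy number on how quickly a 2^1000 factor gets updated away (illustrative figures only, nothing from the exchange above): as long as the disfavored-but-correct hypothesis predicts each incoming sense bit much better than its rivals, it gains on the order of a bit of log-odds per observation.

    import math

    # Toy illustration (made-up numbers) of how fast a 2^1000 prior handicap
    # is updated away once observations start discriminating between hypotheses.
    #
    # H_good: the "right" hypothesis, starting a factor of 2^1000 behind.
    # H_bad:  the prior-favored hypothesis, which treats each incoming sense
    #         bit as a coin flip, while H_good predicts it with near-certainty.

    PRIOR_LOG2_DEFICIT = 1000
    P_GOOD_PER_BIT = 0.999
    P_BAD_PER_BIT = 0.5

    def log2_odds_good_vs_bad(n_observed_bits: int) -> float:
        per_bit_gain = math.log2(P_GOOD_PER_BIT) - math.log2(P_BAD_PER_BIT)
        return -PRIOR_LOG2_DEFICIT + n_observed_bits * per_bit_gain

    for n in (0, 500, 1003, 2000):
        print(n, round(log2_odds_good_vs_bad(n), 1))
    # Roughly one bit of log-odds per observation: about a thousand observed
    # bits -- a sliver of any realistic sensory stream -- erase the 2^1000 head start.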
1private_messaging12y
Are you picturing an AI that has a simulated multiverse from the big bang up inside a single universe, and then just uses camera sense data to very rapidly pick the right universe? Well, yes, that will dispose of a 2^1000 prior very easily. Something that is instead e.g. modelling humans using a minimum amount of guessing without knowing what's inside their heads, and which can't really run any reductionist simulations at the level of quarks to predict its camera data, can have real trouble getting right the fine details of its grand unified theory of everything, and most closely approximates a crackpot scientist. Furthermore, having to include a non-reductionist model of humans, it may even end up religious (feeding stuff into the human mind model to build its theory of everything by intelligent design). How it would work under any form of a practical bound (e.g. forbidding zillions upon zillions of quark-level simulations of everything from the big bang to now to occur within an AI, which seems to me like a very conservative bound) is a highly complicated open problem.

edit: and the very strong intuition I have is that you can't just dismiss this sort of stuff out of hand. So many ways it can fail. So few ways it can work great. And no rigour whatsoever in the speculations here.
2Wei Dai12y
I certainly don't disagree when you put it like that, but I think the convention around here is when we say "SI/AIXI will do X" we are usually referring to the theoretical (uncomputable) construct, not predicting that an actual future AI inspired by SI/AIXI will do X (in part because we do recognize the difficulty of this latter problem). The reason for saying "SI/AIXI will do X" may for example be to point out how even a simple theoretical model can behave in potentially dangerous ways that its designer didn't expect, or just to better understand what it might mean to be ideally rational.
2Vladimir_Nesov12y
Solomonoff induction never ignores observations.
2Kawoomba12y
One liners, eh? It's not so much ignoring observations as testing models that allow for your sense data to be subject to both Gaussian noise and systematic errors, i.e. explaining part of the observations as sensory fuzziness. In such a case, an overly simple model that posits e.g. some systematic error in its sensors may have an advantage over an actually correct albeit more complex model, due to the way that the length penalty for the Universal Prior rapidly accumulates. Imagine AIXI coming to the conclusion that the string it is watching is in fact partly output by a random string generator that intermittently takes over. If the competing (but potentially correct) model that works without such a random string generator needs just a megabit more space to specify, do the math. I'll still have to think upon it further. It's just not something to be dismissed out of hand, and just one of several highly relevant tangents (since it pertains to real-world applicability; if it's a design byproduct it might well translate to any Monte Carlo or assorted formulations). It might well turn out to be a non-issue.
0roystgnr12y
Does AIXI admit the possibility of random string generators? IIRC it only allows deterministic programs, so if it sees patterns a simple model can't match, then it's forced to update the model with "but there are exceptions: bit N is 1, and bit N+1 is 1, and bit N+2 is 0... etc" to account for the error. In other words, the size of the "simple model" then grows to be the size of the deterministic part plus the size of the error correction part. And in that case, even a megabyte of additional complexity in a model would stop effectively ruling out that complex model just as soon as more than a couple megabytes of simple-model-incompatible data had been seen.
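To put rough numbers on that trade-off (toy figures; the only number taken from the comment is the megabyte of extra complexity, and the sketch assumes a minimal one bit of explicit correction per mispredicted bit):

    # Toy check of the description-length bookkeeping above: under a
    # 2^-length prior over deterministic programs, a "simple" model that
    # mispredicts data must carry an explicit error-correction appendix,
    # so its effective length grows with the incompatible data seen.

    SIMPLE_CORE_BITS = 1_000                 # short deterministic core (made-up size)
    COMPLEX_EXTRA_BITS = 8 * 1_000_000       # "a megabyte of additional complexity"
    BITS_PER_EXCEPTION = 1                   # minimal cost per mispredicted bit (assumption)

    def simple_model_length(n_incompatible_bits: int) -> int:
        """Deterministic core plus the growing list of exceptions."""
        return SIMPLE_CORE_BITS + BITS_PER_EXCEPTION * n_incompatible_bits

    def complex_model_length() -> int:
        return SIMPLE_CORE_BITS + COMPLEX_EXTRA_BITS

    def complex_model_favored(n_incompatible_bits: int) -> bool:
        """Shorter total description gets the larger 2^-length prior weight."""
        return complex_model_length() < simple_model_length(n_incompatible_bits)

    for megabytes_seen in (0.5, 1.0, 2.0):
        n = int(megabytes_seen * 8 * 1_000_000)
        print(megabytes_seen, "MB incompatible ->", complex_model_favored(n))
    # The crossover arrives once the mispredicted data outweighs the extra
    # megabyte of model complexity -- the prior penalty is linear in bits.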
0[anonymous]12y
Nesov is right.
0FeepingCreature12y
IANAE, but doesn't AIXI work based on prediction instead of explanation? An algorithm that attempts to "explain away" sense data will be unable to predict the next sequence of the AI's input, and will be discarded.
1Kawoomba12y
If your agent operates in an environment such that your sense data contains errors, or such that the world that spawns that sense data isn't deterministic, at least not on a level that your sense data can pick up - both of which cannot be avoided - then perfect predictability is out of the question anyways. The problem then shifts to "how much error or fuzziness of the sense data or the underlying world is allowed", at which point there's a trade-off between "short and enormously more preferred model that predicts more errors/fuzziness" versus "longer and enormously less preferred model that predicts fewer errors/fuzziness". This is, as far as I know, not an often-discussed topic, at least not around here, probably because people haven't yet hooked up any computable version of AIXI with sensors that are relevantly imperfect and that are probing a truly probabilistic environment. Those concerns do not really apply to learning PAC-Man.
6Vladimir_Nesov12y
The fallacy of gray.
1Kawoomba12y
An uncharitable reading, notice the "consistent" and referring to an acceptable ratio of (implied) signal/noise in the very first sentence. Also, this may be biased, but I value relevant comments on algorithmic information theory particularly highly, and they are a rare enough commodity. We probably agree on that at least.
3wedrifid12y
Exactly. I often lament that the word 'troll' contains motive as part of the meaning. I often try to avoid the word and convey "Account to which Do Not Feed needs to be applied" without making any assertion about motive. Those are hard to prove. As far as I'm concerned if it smells like a troll, has amazing regenerative powers, creates a second self when attacked and loses a limb and goes around damaging things I care about then it can be treated like a troll. I care very little whether it is trying to rampage around and destroy things---I just want to stop it.
-1[anonymous]12y
I needed clean data on how people react to various commentary here. I falsified several anti-LW hypotheses (if I think you guys are the Scientology 2.0 I want to see if I can falsify that, ok?), though at some point I was really curious to see what you do about two accounts in the same place talking in the exact same style; that was entirely unscientific, sorry about this. Furthermore, the comments were predominantly rated at >0 and not through socks rating each other up (I would want to see if the first-vote effect is strong but that would require far too much data). Sorry if there is any sort of disruption to anything. I actually have significantly more respect for you guys now, with regards to considering the commentary, and subsequently non-cultness. I needed a way to test hypotheses. That utterly requires some degree of statistical independence. I do still honestly think this FAI idea is pretty damn misguided (and potentially dangerous to boot), but I am allowing it much more benefit of the doubt.

edit: actually, can you reset the email of Dmytry to dmytryl at gmail? I may want to post an article sometime in the future (I will try to offer a balanced overview as I see it, and it will have plus points as well. Seriously.).

Also, on Eliezer: I really hate his style but like his honesty, and it's a very mixed feeling all around. I mean, it's atrocious to just go ahead and say, whoever didn't get my MWI stuff is stupid; that's the sort of stuff that evaporates out a LOT of people, and if you e.g. make some mistakes, you risk evaporating meticulous people. On the other hand, if that's what he feels, that's what he feels; to conceal it would be evil.
[-]gwern12y130

I needed clean data on how people react to various commentary here. I falsified several anti-LW hypotheses

So presumably we can expect a post soon explaining the background & procedure, giving data and perhaps predictions or hash precommitments, with an analysis of the results; all of which will also demonstrate that this is not a post hoc excuse.

edit: actually, can you reset the email of Dmytry to dmytryl at gmail ?

I can't, no. I'd guess you'd have to ask someone at Trike, and I don't know if they'd be willing to help you out...

3[anonymous]12y
Well, basically I did expect much more negative ratings, and then I'd just stop posting on those. I couldn't actually set up a proper study without a zillion socks, and that'd be serious abuse. I am currently quite sure you guys are not an Eliezer cult. You might be a bit of an idea cult, but not terribly much.

edit: Also, as you guys are not an Eliezer cult, and as he actually IS pretty damn good at talking people into silly stuff, it is also evidence he's not building a cult. re: email address, doesn't matter too much.

edit: Anyhow, I hope you do consider the content of the comments to be of benefit; actually I think you do. E.g. my comment against the idea of overcoming some biases - I finally nailed what bugs me so much about the 'overcomingbias' title and the carried-over cached concept of overcoming them.

edit: do you want me to delete all the socks? No problem either way.
2CarlShulman11y
One more: http://lesswrong.com/user/All_work_and_no_play/
3gwern11y
Agree; that's either Dmytry or someone deliberately imitating him.
2CarlShulman12y
And here's one more (judging by content, style, and similar linguistic issues): Shrink. Also posting in the same discussions as private_messaging.
0gwern12y
It certainly does sound like him, although I didn't notice any of his most obvious tells like ghmm or obsession with complexity of updating Bayesian networks.
2CarlShulman12y
"For the risk estimate per se" "The rationality and intelligence are not precisely same thing." "To clarify, the justice is not about the beliefs held by the person." "The honesty is elusive matter, " Characteristic misuse of "the." "You can choose any place better than average - physicsforums, gamedev.net, stackexchange, arstechnica observatory," Favorite forums from other accounts.
2gwern12y
Ah yes, I forgot Dmytry had tried discussing LW on the Ars forums (and claiming we endorsed terrorism, etc.; he got shut down pretty well by the other users). Yeah, how likely is it that they would both like the Ars forums...
0wedrifid12y
He did open by criticising many worlds, and in subsequent posts had an anti-LW and -SIAI chip on his shoulder that couldn't plausibly have been developed in the time the account had existed.
0wedrifid12y
Well spotted. I hadn't even noticed the Shrink account existing, much less identified it by the content. Looking at the comment history I agree it seems overwhelmingly likely.
0[anonymous]11y
Huh, I didn't see this whole conversation before. Will update appropriately.

You are using testosterone to boost performance; it also clouds social judgment severely, and inasmuch as I know it, I can use it to dismiss literally anything you say (hard to resist the temptation to, at times).

Counter incremented.

[-][anonymous]12y110

People at the upper end of the IQ spectrum get lonely for someone smarter to talk to. It's an emotional need for an intellectual peer.

3David_Gerard12y
Lots of really smart people here is something I find very attractive about LW. (Of course, smart and stupid are orthogonal, and no-one does stupid quite as amazingly as really smart people.)
0Arkanj3l12y
That doesn't privilege FAI, methinks, and seems too charitable as an after-the-fact explanation with not so much as a survey.

My cynical take on FAI is that it's a goal to organize one's life around which isn't already in the hands of professionals. I have no idea whether this is fair.

2private_messaging12y
There's also the belief that UFAI is in the hands of professionals... and that professionals miss some big picture insights that you could make without even knowing the specifics of the cognitive architecture of the AI, etc.
1jsalvatier12y
I couldn't parse "it's a goal to organize one's life around which isn't already in the hands of professionals".

For people who are looking for a big goal so that their lives make sense, FAI is a project where it's possible to stake out territory relatively easily.

3arundelo12y
"it's (a goal to organize one's life around) which isn't already in the hands of professionals" = "it's a goal around which to organize one's life that isn't already in the hands of professionals"
3Decius12y
It is a goal. It is not already in the hands of professionals. It is something around which one can organize one's life. It's hard to clear the ambiguity of whether it is one's life that isn't in the hands of professionals.

But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

IMO, a signalling explanation makes more sense. Publicly-expressed concern about moral issues signals to others what a fine fellow you are. In that context the more far-out things you care about the better. Trees, whales, simulated people, distant descendants - they all signal how much you care.

[-]torekp12y170

If I want to signal how much I care, I'll stick with puppies or local soup kitchens, thank you very much. That will get me a lot more warm fuzzies - and respect - from my neighbors and colleagues than making hay about a robot apocalypse.

Humans are adaptation-executers, not fitness maximisers - and evolved in tribes of not more than 100 or so. And they are exquisitely sensitive to status. As such, they will happily work way too hard to increase their status ranking in a small group, whether it makes sense from the outside view or not. (This may or may not follow failing to increase their status ranking in more mainstream groups.)

If you want to maximize respect from a broad, nonspecific community (e.g. neighbors and colleagues), that's a good strategy. If you want to maximize respect from a particular subculture, you could do better with a more specific strategy. For example, to impress your political allies, worry about upcoming elections. To impress members of your alumni organization, worry about the state of your sports team or the university president's competence. To impress folks on LessWrong, worry about a robot apocalypse.

6A1987dM12y
That's a fully general argument: to impress [people who care about X], worry about [X]. But it doesn't explain why for rationalists X equals a robot apocalypse as opposed to [something else].
8ModusPonies12y
My best guess is that it started because Eliezer worries about a robot apocalypse, and he's got the highest status around here. By now, a bunch of other respected community members are also worried about FAI, so it's about affiliating with a whole high-status group rather than imitating a single leader.
6Jonathan_Graehl12y
I wouldn't have listened to EY if he weren't originally talking about AI. I realize others' EY origin stories may differ (e.g. HPMOR).
2timtyler12y
Much depends on who you are trying to impress. Around here, lavishing care on cute puppies won't earn you much status or respect at all.
0ShardPhoenix12y
That raises the question of why people care about getting status from Less Wrong in the first place. There are many other more prominent internet communities.
7timtyler12y
Other types of apocalyptic phyg also acquire followers without being especially prominent. Basically the internet has a long tail - offering many special interest groups space to exist.
1DanielLC12y
Yeah, but how much respect will they get you from LessWrong?
[-]Malo12y70

Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

I think the last sentence here is a big leap. Why is this a more plausible explanation than the idea that aspiring rationalists simply find AI-risk and FAI compelling? Furthermore, since this community was founded by someone who is deeply interested in both... (read more)

Scarcely the most cynical conceivable explanation. Here, try this one:

"Yes," declaimed Deep Thought, "I said I'd have to think about it, didn't I? And it occurs to me that running a programme like this is bound to create an enormous amount of popular publicity for the whole area of philosophy in general. Everyone's going to have their own theories about what answer I'm eventually to come up with, and who better to capitalize on that media market than you yourself? So long as you can keep disagreeing with each other violently enough and sla

... (read more)
3vi21maobk9vp12y
I guess there was an implied additional limitation of being well-meaning on the conscious level.

Not sure about your sampling method, but a lot of LWers I know (in NY area) are pretty busy "doing stuff". Propensity to doing stuff does not seem to negatively correlate with FAI concerns as far as I can tell.

That said, this is a bit of a concern for me as a donor, which is why I think the recent increase in transparency and spinning off CFAR is a big positive sign: either the organization is going to be doing stuff in the FAI area (I consider verifiable research doing stuff, and I don't think you can do it all in bed) or not; it's going to be clear either way.

Keep in mind that real sock puppeteering is about making a strawman sock puppet, or a sock puppet that cleverly disagrees using an existing argument of yours but tentatively changes the view, or the like.

Sock puppets, both here and on Reddit or Wikipedia, can be used for multiple purposes, not just that.

One reason I have respect for Eliezer is HPMOR-- there's a huge amount of fan fiction, and writing something which impresses both a lot of people who like fan fiction and a lot of people who don't like fan fiction is no small achievement.

Also, it's the only story I know of which gets away with such huge shifts in emotional tone. (This may be considered a request for recommendations of other comparable works.)

Furthermore, Eliezer has done a good bit to convince people to think clearly about what they're doing, and sometimes even to make useful changes in their lives as a result.

I'm less sure that he's right about FAI, but those two alone are enough to make for respect.

7[anonymous]12y
In the context of LessWrong and FAI, Yudkowsky's fiction writing abilities are almost entirely irrelevant.
0Bruno_Coelho12y
This is a source of disagreement. "Think clearly and change behavior" is not a good slogan; it is used by numerous groups. But -- and the inferential distance here is not clear from the beginning -- there are lateral beliefs: computational epistemology, specificity, humans as imperfect machines, etc. In a broad context, even education in general could fit this phrase, especially for people with no training in gathering data.

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

If you want to do either, you have to do work, get paid, and donate to the appropriate charity. The only difference is where you donate.

I can't seem to find a charity for asteroid impact avoidance. As far as I can tell, everything going into that is done by g... (read more)

[-][anonymous]12y50

it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

What? really? I happen to have drunk the kool-aid and think this is an important problem. So it is very important to me that you tell me how it is I could solve FAI without doing any real work.

I was all prepared to vote this up from "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?" But then you had to go and be wrong - suggesting some sort of "lazy ideas only" search process that makes no sense historically, and conflating LW and SI.

5aaronsw12y
Can you point to something I said that you think is wrong? My understanding of the history (from reading an interview with Eliezer) is that Eliezer concluded the singularity was the most important thing to work on and then decided the best way to get other people to work on it was to improve their general rationality. But whether that's true or not, I don't see how that's inconsistent with the notion that Eliezer and a bunch of people similar to him are suffering from motivated reasoning. I also don't see how I conflated LW and SI. I said many LW readers worry about UFAI and that SI has taken the position that the best way to address this worry is to do philosophy.
0Manfred12y
You're right that you can interpret FAI as motivated reasoning. I guess I should have considered alternate interpretations more. Well, kinda. Eliezer concluded the singularity was the most important thing to work on and then decided the best way to work on it was to code an AI as fast as possible, with no particular regard for safety. "[...] arguing about ideas on the internet" is what I was thinking of. It's a LW-describing sentence in a non-LW-related area. Oh, and "Why rationalists worry about FAI" rather than "Why SI worries about FAI."
3aaronsw12y
Two people have been confused by the "arguing about ideas" phrase, so I changed it to "thinking about ideas".
2Manfred12y
It's more polite, and usually more accurate, to say "I sent a message I didn't want to, so I changed X to Y."
1Decius12y
Most accurate would be "feedback indicates that a message was received that I didn't intend to send, so..."
0NancyLebovitz12y
Maybe. So far as I know, averting asteroids doesn't have as good a writer to inspire people.
[-][anonymous]12y40

actually, let it be the last post here, I get dragged out any time I resolve to leave then check if anyone messaged me.

A two hour self-imposed exile! I think that beats even XiXiDu's record.

LessWrong rationality nerds cared so much about creating Friendly AI

I don't!

5[anonymous]12y
Ditto, but not really OP's point.
3aaronsw12y
Right. I tweaked the sentence to make this more clear.

btw, if you're aaronsw in my Twitter feed, welcome to LessWrong^2

What do you think of contemporary theoretical physics? That is also mostly "arguing on the Internet".

What do you think of contemporary theoretical physics? That is also mostly "arguing on the Internet".

Some of it yes. At the end of the day though, some of it does lead to real experiments, which need to pay rent. And some of it does quite well at that. Look for example at the recent discovery of the Higgs boson.

8betterthanwell12y
These theoretical physicists had to argue for several decades until they managed to argue themselves into enough money to hire the thousands of people to design, build and operate a machine that was capable of refuting, or as it turned out - supporting their well-motivated hypothesis. Not to mention that the machine necessitated inventing the world wide web, advancing experimental technologies, data processing, and fields too numerous to mention by orders of magnitude compared to what was available at the time. Perhaps today's theoretical programmers working on some form of General Artificial Intelligence find themselves faced with comparable challenges.

I don't know how things must have looked at the time; perhaps people were wildly optimistic with respect to the expected mass of the scalar boson(s) of the (now) Standard Model of physics, but in hindsight, it seems pretty safe to say that the Higgs boson must have been quite impossible for humanity to experimentally detect back in 1964. Irrefutable metaphysics. Just like string theory, right?

Well, thousands upon thousands of people, billions of dollars, some directly but mostly indirectly (in semiconductors, superconductors, networking, ultra high vacuum technology, etc.) somehow made the impossible... unimpossible. And as of last week, we can finally say they succeeded. It's pretty impressive, if nothing else.

Perhaps M-theory will be forever irrefutable metaphysics to mere humans, perhaps GAI. As Brian Greene put it: "You can't teach general relativity to a cat." Yet perhaps we shall see further (now) impossible discoveries made in our lifetimes.
2aaronsw12y
There's nothing wrong with arguing on the Internet. I'm merely asking whether the belief that "arguing on the Internet is the most important thing anyone can do to help people" is the result of motivated reasoning.
9David_Gerard12y
The argument I see is that donating money to SIAI is the most important thing anyone can do to help people.
5CarlShulman12y
Even if one thought SIAI was the most effective charity one could donate to at the present margin right now, or could realistically locate soon, this would not be true. For instance, if one was extremely smart and effective at CS research, then better to develop one's skills and take a crack at finding fruitful lines of research that would differentially promote good AI outcomes. Or if one was extremely good at organization and management, especially scholarly management, to create other institutions attacking the problems SIAI is working on more efficiently. A good social scientist or statistician or philosopher could go work at the FHI, or the new Cambridge center on existential risks as an academic. One could make a systematic effort to assess existential risks, GiveWell style, as some folk at the CEA are doing. There are many people whose abilities, temperament, and background differentially suit them to do X better than paying for others to do X.

Ok, if what you're saying is not "SI concludes this" but just that we don't really know what even the theoretical SI concludes, then I don't disagree with that and in fact have made similar points before. (See here and here.) I guess I give Eliezer and Luke more of a pass (i.e. don't criticize them heavily based on this) because it doesn't seem like any other proponent of algorithmic information theory (for example Schmidhuber or Hutter) realizes that Solomonoff Induction may not assign most posterior probability mass to "physics sim + locat... (read more)

But an interest in rationality pulls in expertise transferable to all manner of fields, e.g. the 2011 survey result showing 56.5% agreeing with the MWI. (I certainly hope the next survey will ask how many of those saying they agree or disagree with it can solve the Schroedinger equation for a hydrogen atom, and also how many of those expressing a position would understand the solution to the Schroedinger equation for a hydrogen atom if they saw it written out.) So acquiring meaningful knowledge of artificial intelligence is par for the course.
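For reference, the textbook result being alluded to, written out: the time-independent Schrödinger equation for the electron in a hydrogen atom is

    \[ \left( -\frac{\hbar^2}{2\mu}\nabla^2 \;-\; \frac{e^2}{4\pi\varepsilon_0 r} \right) \psi(\mathbf{r}) \;=\; E\,\psi(\mathbf{r}) \]

with bound-state energies \( E_n = -\frac{\mu e^4}{2(4\pi\varepsilon_0)^2\hbar^2 n^2} \approx -13.6\,\mathrm{eV}/n^2 \) and ground state \( \psi_{100}(r) = \frac{1}{\sqrt{\pi a_0^3}}\, e^{-r/a_0} \), where \( a_0 \) is the Bohr radius.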

[-]gwern12y100

57%, incidentally, is almost exactly equal to the results of one poll of cosmologists/physicists.

0CarlShulman12y
Come on, you can't just pick the most extreme of varied poll results without giving context.
7gwern12y
Of course I can, just like David can engage in unfair snark about what a number on a poll might mean.
0HBDfan12y
[delete]
-2David_Gerard12y
Could you please clarify what aspect you felt was unfair?
1gwern12y
Perhaps you could first unpack your implicit unstated argument from random poll number to sarcastic remarks about not being physicists, so no one winds up criticizing something you then say you didn't mean.
-1David_Gerard12y
So ask the question next survey. I do, however, strongly suspect they're expressing an opinion on something they don't actually understand - and I don't think that's an unfair assumption, given most people don't - which would imply they were only doing so because "believe in MWI" is a local trope. So which bit was unfair?
3ArisKatsaris12y
Since our certainty was given as a percentage, none of us said we agreed or disagreed with it in the survey, unless you define "agree" as certainty > 50% and "disagree" as certainty below 50%. Or are you saying that we should default to 50% in all cases we aren't scientifically qualified to answer on our own strength? That has obvious problems. That's like asking people to explain how consciousness works before they express their belief in the existence of brains, or their disbelief in the existence of ghosts.
3gwern12y
What is "actually understand" here and why does it sound like a dichotomy? Are you arguing that one cannot have any opinion about MWI based on any amount of understanding derived from popularizations (Eliezer-written or otherwise) which falls short of one being able to solve technical problems you list? Surely you don't believe that one is not allowed to hold any opinion or confidence levels without becoming a full-fledged domain expert, but that does sound like what your argument is.
1David_Gerard12y
Given that the MWI is claimed to follow by just taking the equations seriously, then I think understanding the equations in question is not an unreasonable prerequisite to having a meaningful opinion on that.
2OrphanWilde12y
Your line of argument could equally apply to quantum physicists.

I'm not sure about the community at large, but as for some people, like Eliezer, they have very good reasons for why working on FAI actually makes the most sense for them to do, and they've gone to great lengths to explain this. So if you want to limit your cynicism to "armchair rationalists" in the community, fine, but I certainly don't think this extends to the actual pros.

Also, I only thought of that when you would go on about how I must feel so humiliated by some comment of yours.

(I did not make such a claim, nor would I make one.)

[-][anonymous]12y-10

SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.

I think it is more that you can't build a genuine General Intelligence before you have solved some intractable mathematical problems, like pr... (read more)

1CarlShulman12y
Why think that you need to use a Solomonoff Induction (SI) approximation to get AGI? Do you mean to take it so loosely that any ability to do wide-ranging sequence prediction counts as a practical algorithm for SI?
-1[anonymous]12y
Well, Solomonoff induction is the general principle of Occamian priors in a mathematically simple universe. I would say that "wide-ranging sequence prediction" would mean you had already solved it with some elegant algorithm. I highly doubt something as difficult as AGI can be achieved with hacks alone.
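For a sense of what a crude, resource-bounded stand-in for Solomonoff induction looks like, here is a toy sketch (the hypothesis class and description lengths are made up for illustration; real Solomonoff induction sums over all programs for a universal machine, which is uncomputable):

    from typing import Callable, List, Tuple

    # Each toy hypothesis: (name, description length in bits, predictor).
    # predictor(history) -> probability that the next bit is 1.
    Hypothesis = Tuple[str, int, Callable[[List[int]], float]]

    HYPOTHESES: List[Hypothesis] = [
        ("all ones",    8,  lambda h: 0.999),
        ("alternating", 12, lambda h: 0.999 if (not h or h[-1] == 0) else 0.001),
        ("coin flips",  6,  lambda h: 0.5),
    ]

    def posterior(data: List[int]) -> List[Tuple[str, float]]:
        """Posterior over the toy class under a 2^-length prior (Bayes rule)."""
        weighted = []
        for name, length, predict in HYPOTHESES:
            w = 2.0 ** (-length)                        # Occamian length penalty
            history: List[int] = []
            for bit in data:
                p_one = predict(history)
                w *= p_one if bit == 1 else 1.0 - p_one  # likelihood of each observed bit
                history.append(bit)
            weighted.append((name, w))
        total = sum(w for _, w in weighted)
        return [(name, w / total) for name, w in weighted]

    print(posterior([1, 0, 1, 0, 1, 0, 1, 0]))
    # The longer "alternating" hypothesis overtakes the shorter "coin flips"
    # one once the data pays for its extra description length.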
1Dolores198412y
But, humans can't reliably do that, either, and we get by okay. I mean, it'll need to be solved at some point, but we know for sure that something at least human equivalent can exist without solving that particular problem.
0[anonymous]12y
Humans can't reliably do what? When I say information sequence prediction, I mean not some abstract and strange mathematics; I mean predicting your sensory experiences with the help of your mental world model. When you see a glass get brushed off the table, you expect to see the glass fall off the table and down onto the floor. You expect that exactly because your prior over your sensory organs includes there being a high correlation between your visual impressions and the state of the external world, and because your prior over the external world predicts things like gravity and the glass being affected thereby. From the inside it seems as if glasses fall down when brushed off the table, but that is the Mind Projection Fallacy. You only ever get information from the external world through your senses, and you only ever affect it through your motor cortex's interaction with your bio-kinetic system of muscle, bone and sinew. Human brains are one hell of a really powerful prediction engine.
0Dolores198412y
So... you just mean that in order to build AI, we're going to have to solve AI, and it's hard? I'm not sure the weakened version you're stating here is useful. We certainly don't have to actually, formally solve the SI problem in order to build AI.
0[anonymous]12y
I really doubt an AI-like hack even looks like one if you don't arrive at it by way of maths. I am saying it is statistically unlikely to get GAI without maths, and a thermodynamic miracle to get FAI without maths. However, my personal intuition is that GAI isn't as hard as, say, some of the other intractable problems we know of, like P =? NP, the Riemann Hypothesis, and other famous problems. Only uploads offer a true alternative.
[-][anonymous]12y-10

I would expect it to also attract an unusually high percentage of narcissists.

3NancyLebovitz12y
Why?
-4[anonymous]12y
Grandiosity, belief in one's own special importance, etc. Narcissists are pretty common, people capable of grand contributions are very rare, so the majority of people who think they are capable of grand contributions have got to be narcissists. Speaking of which, Yudkowsky making a friendly AI? Are you frigging kidding me? I came here through the link to the guy's quantum ramblings, which are anything but friendly.
2NancyLebovitz12y
Eliezer argues that a lot of people are more capable than they permit themselves to be, which doesn't seem very narcissistic to me.
-3[anonymous]12y
From your link: Nope. Crackpots compare themselves to Einstein because: [Albeit I do like his straight-in-your-face honesty.] It's not about choosing an 'important' problem, it's about choosing a solvable important problem, and a method of solving it, and intelligence helps, while unintelligent people just pick some idea out of science fiction or something, and can't imagine that some people can do better. Had it really been the case that choosing the right problems and approaches was a matter of luck, we would observe far fewer cases where a single individual has many important insights; the distribution of insights per person would be different.

edit: The irony here is quite intense. Surely a person who's into science fiction will have the first "cache hit" be something science fictional, and then the first "cache hit" for the solution path be something likewise science fictional. Also, a person into reading about computers will have the first "cache hit" for describing the priming be a reference to "cache".
2David_Gerard12y
Richard Hamming also makes this point.
4[anonymous]12y
Thanks. He says it much better than I could. He speaks of the importance of small problems. Speaking of which, one thing geniuses do is generate the right problems for themselves, not just choose from those already formulated. Science fiction is full of artificial minds, good and evil. It has minds improving themselves, and plenty of Frankensteins of all kinds. It doesn't have things like 'a very efficient universal algorithm that, given a mathematical description of a system and constraints, finds values for free parameters that meet the constraints', because that is not a plot device. Fiction does not have Wolfram Alpha in 2010; it has HAL in 2000. Fiction shuns the merely useful in favor of the interesting. I would be very surprised if the solution were among the fictional set. The fictional set is as good a place to look in as any, yes, but it is small.

edit: On second thought, what I mean is that it would be very bad to be either inspired or 'de-spired' by fiction to any significant extent.
2HBDfan12y
Yes: humans think in stories, but there are far, far more concepts that do not make a good story than ones that do.
-2wedrifid12y
Narcissists are usually better at seeking out situations that give them power, status and respect or at least money.
3[anonymous]12y
~1% of people through all the social classes are usually better at this, you say? I don't think so. Narcissists seek narcissistic supply. Most find it in delusions.
0wedrifid12y
No, I didn't. (Although now that you mention it, I'd comfortably say that more than 70% of people through all the social classes are usually better at this. It's kind of a fundamental human talent.)

Suppose there was a suspicion that 2-3 people with a particularly strong view just decided to pick on your account to downvote (from back when it was Dmytry)? How do you actually check that?

Or, you know, get over it. It's just karma!

1private_messaging12y
Comments are not read (and in general are poorly interpreted) when at negative karma.