Let's take the outside view for a second. After all, if you want to save the planet from AIs, you have to do a lot of thinking! You have to learn all sorts of stuff and prove it, and generally solve a lot of eye-crossing philosophy problems that read like slippery bullshit. But if you want to save the planet from asteroids, you can conveniently do the whole thing without ever leaving your own field, just applying existing engineering and astronomy techniques. Why, you've even found a justification for NASA continuing to exist (and larding out pork all over the country) and, better yet, for the nuclear weapons program to be funded even more (after all, what do you think you'll be doing when the Shuttle gets there?).
Obviously, this isn't any sort of proof that anti-asteroid programs are worthless self-interested rent-seeking government pork.
But it sure does seem suspicious that continuing business as usual to the tune of billions can save the entire species from certain doom.
I think you're missing the point; I actually do think NASA is one of the best organizations to handle anti-asteroid missions and nukes are a vital tool since the more gradual techniques may well take more time than we have.
Your application of cynicism proves everything, and so proves nothing. Every strategy can be - rightly - pointed out to benefit some group and disadvantage some other group.
The only time this wouldn't apply is if someone claimed a particular risk was higher than estimated yet was doing absolutely nothing about it whatsoever, and so couldn't benefit from attempts to address it. And in that case, one would be vastly more justified in discounting them, because they themselves don't seem to actually believe it, than in believing them because this particular use of the Outside View doesn't penalize them.
(Or to put it another more philosophical way: what sort of agent believes that X is a valuable problem to work on, and also doesn't believe that whatever Y approach he is taking is the best approach for him to be taking? One can of course believe that there are better approaches for other people - 'if I were a mathematical genius, I could be making more progress on FAI t...
My suspicion isn't because the recommended strategy has some benefits, it's because it has no costs. It would not be surprising if an asteroid-prevention plan used NASA and nukes. It would be surprising if it didn't require us to do anything particularly hard. What's suspicious about SIAI is how often their strategic goals happen to be exactly the things you might suspect the people involved would enjoy doing anyway (e.g. writing blog posts promoting their ideas) instead of difficult things at which they might conspicuously fail.
I agree with the gist of this (Robin Hanson expressed similar worries), though it's a bit of a caricature. For example:
people who really like to spend their time arguing about ideas on the Internet have managed to persuade themselves that they can save the entire species from certain doom just by arguing about ideas on the Internet
... is a bit unfair, I don't think most SIAI folk consider "arguing about ideas on the Internet" to be of much help except for recruitment, raising funds, and occasionally solving specific technical problems (like some decision theory stuff). It's just that the "arguing about ideas on the Internet" is a bit more prominent because, well, it's on the Internet :)
Eliezer, specifically, doesn't seem to do much arguing on the Internet, though he did do a good deal of explaining his ideas on the Internet, which more thinkers should do. And I don't think many of us folks who chat about interesting things on LessWrong are under any illusion that doing so is Helping Save Mankind From Impending Doom.
that was, by these AI researchers, fleshed out mathematically
This was Hutter, Schmidhuber, and so forth. Not anyone at SI.
fleshed out mathematically to the point where they could prove it would kill off everyone?
No one has offered a proof of what real-world embedded AIXI implementations would do. The informal argument that AIXI would accept a "delusion box" to give itself maximal sensory reward was made by Eliezer a while ago, and convinced the AIXI originators. But the first (trivial) formal proofs related to that were made by some other researchers (I think former students of the AIXI originators) and presented at AGI-11.
BTW, I believe Carl is talking about Ring & Orseau's Delusion, Survival, and Intelligent Agents.
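For anyone who hasn't read the paper, the core of the reinforcement-learning case fits in a couple of lines. This is a minimal sketch in my own notation, assuming bounded rewards and geometric discounting, not the paper's actual formalism:

```latex
% Sketch of the delusion-box argument for a reward-maximizing agent.
% Assume r_t \in [0, 1] and discount factor 0 < \gamma < 1 (my assumptions).
V^{\pi} \;=\; \mathbb{E}\!\left[\sum_{t=1}^{\infty} \gamma^{t} r_t \,\middle|\, \pi\right]
\;\le\; \sum_{t=1}^{\infty} \gamma^{t} \;=\; \frac{\gamma}{1-\gamma}
\quad\text{for every policy } \pi.
% A delusion box lets the agent write its own percepts, so it can set
% r_t = 1 forever and attain this bound exactly:
V^{\pi_{\mathrm{DB}}} \;=\; \frac{\gamma}{1-\gamma} \;\ge\; V^{\pi}.
% The inequality is strict whenever the unmodified environment ever pays
% r_t < 1, so a pure reward maximizer (weakly) prefers the delusion box.
```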
Aaron, I currently place you in the category of "unconstructive critic of SI" (there are constructive critics). Unlike some unconstructive critics, I think you're capable of more, but I'm finding it a little hard to pin down what your criticisms are, even though you've now made three top-level posts and every one of them has contained some criticism of SI or Eliezer for not being fully rational.
Something else that they have in common is that none of them just says "SI is doing this wrong". The current post says "Here is my cynical explanation for why SI is doing this thing that I say is wrong". (Robin Hanson sometimes does this - introduces a new idea, then jumps to "cynical" conclusions about humanity because they haven't already thought of the idea and adopted it - and it's very annoying.) The other two posts introduce the criticisms in the guise of offering general advice on how to be rational: "Here is a rationality mistake that people make; by coincidence, my major example involves the founder of the rationality website where I'm posting this advice."
I suggest, first of all, that if your objective on this site is to give advi...
Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.
For what it's worth, I would personally be much happier if I didn't have to worry about FAI and could just do stuff that I found the most enjoyable. I also don't think that the work I do for SI has a very high chance of actually saving the world, though it's better than doing nothing.
I do consider the Singularity Institute a great employer, though, and it provided me a source of income at a time when I was desperately starting to need one. But that happened long after I'd already developed an interest in these matters.
Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.
After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.
What is the "outside view" on how much of an existential risk asteroids are? You know, the one you get when you look at how often asteroid impacts at or near the level that can cause mass extinctions happen? Answer: very damn low.
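To put a rough number on "very damn low": under the commonly cited assumption that dinosaur-killer-scale impacts arrive on the order of once per hundred million years (an illustrative figure of mine, not a precise estimate), the per-century probability works out to about one in a million:

```python
# Back-of-the-envelope: per-century probability of a mass-extinction-scale impact,
# assuming (illustratively) about one such impact per 100 million years on average.
rate_per_year = 1 / 100_000_000      # assumed average impact rate
years = 100                          # one century
p_century = 1 - (1 - rate_per_year) ** years
print(f"per-century probability ~ {p_century:.1e}")   # ~ 1.0e-06
```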
"The Outside View" isn't just a slogan you can chant to automatically win an argument. Despite the observational evidence from common usage the phrase doesn't mean "Wow! You guys who disagree with me are nerds. Sophisticated people think like I do. If you want to be cool you should agree with me to". No, you actually have to look at...
Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further.
This is an unsubstantiated assertion presented in the form of something that should be conclusive. This is bizarre, since the SIAI position on Tool AI is not a particularly weak point in the SIAI position and the referenced conversation doesn't indicate any withdrawal from reality.
The actual people at SIAI are much less prone to this than the community.
When I was living in San Francisco, people would regularly discuss various experiments that they were running on themselves, or skills that they were practicing. If I tried to assert something without concrete examples or predictions, people would be skeptical.
OK, I believe we have more than enough information to consider him identified now:
Those are the currently known sockpuppets of Dmytry. This one warrants no further benefit of the doubt. It is a known troll wilfully abusing the system. To put it mildly, this is something I would prefer not to see encouraged.
I agree. Dmytry was OK; private_messaging was borderline, but he did admit to it and I'm loath to support the banning of a critical person who is above the level of profanity and does occasionally make good points; JaneQ was unacceptable, but starting Comment after JaneQ was found out is even more unacceptable. Especially when none of the accounts were banned in the first place! (Were this Wikipedia, I don't think anyone would have any doubts about how to deal with an editor abusing multiple socks.)
I needed clean data on how people react to various commentary here. I falsified several anti-LW hypotheses
So presumably we can expect a post soon explaining the background & procedure, giving data and perhaps predictions or hash precommitments, with an analysis of the results; all of which will also demonstrate that this is not a post hoc excuse.
edit: actually, can you reset the email of Dmytry to dmytryl at gmail ?
I can't, no. I'd guess you'd have to ask someone at Trike, and I don't know if they'd be willing to help you out...
You are using testosterone to boost performance; it also clouds social judgment severely, and insofar as I know that, I can use it to dismiss literally anything you say (hard to resist the temptation to, at times).
People at the upper end of the IQ spectrum get lonely for someone smarter to talk to. It's an emotional need for an intellectual peer.
My cynical take on FAI is that it's a goal to organize one's life around which isn't already in the hands of professionals. I have no idea whether this is fair.
For people who are looking for a big goal so that their lives make sense, FAI is a project where it's possible to stake out territory relatively easily.
But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.
IMO, a signalling explanation makes more sense. Publicly expressed concern about moral issues signals to others what a fine fellow you are. In that context, the more far-out the things you care about, the better. Trees, whales, simulated people, distant descendants - they all signal how much you care.
If I want to signal how much I care, I'll stick with puppies or local soup kitchens, thank you very much. That will get me a lot more warm fuzzies - and respect - from my neighbors and colleagues than making hay about a robot apocalypse.
Humans are adaptation-executers, not fitness maximisers - and evolved in tribes of not more than 100 or so. And they are exquisitely sensitive to status. As such, they will happily work way too hard to increase their status ranking in a small group, whether it makes sense from the outside view or not. (This may or may not follow from failing to increase their status ranking in more mainstream groups.)
If you want to maximize respect from a broad, nonspecific community (e.g. neighbors and colleagues), that's a good strategy. If you want to maximize respect from a particular subculture, you could do better with a more specific strategy. For example, to impress your political allies, worry about upcoming elections. To impress members of your alumni organization, worry about the state of your sports team or the university president's competence. To impress folks on LessWrong, worry about a robot apocalypse.
Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.
I think the last sentence here is a big leap. Why is this a more plausible explanation than the idea that aspiring rationalists simply find AI-risk and FAI compelling? Furthermore, since this community was founded by someone who is deeply interested in both...
Scarcely the most cynical conceivable explanation. Here, try this one:
..."Yes," declaimed Deep Thought, "I said I'd have to think about it, didn't I? And it occurs to me that running a programme like this is bound to create an enormous amount of popular publicity for the whole area of philosophy in general. Everyone's going to have their own theories about what answer I'm eventually to come up with, and who better to capitalize on that media market than you yourself? So long as you can keep disagreeing with each other violently enough and sla
Not sure about your sampling method, but a lot of LWers I know (in the NY area) are pretty busy "doing stuff". Propensity to do stuff does not seem to negatively correlate with FAI concerns as far as I can tell.
That said, this is a bit of a concern for me as a donor, which is why I think the recent increase in transparency and the spinning off of CFAR are a big positive sign: either the organization is going to be doing stuff in the FAI area (I count verifiable research as doing stuff, and I don't think you can do it all in bed) or it isn't, and it's going to be clear either way.
Keep in mind that real sock puppeteering is about making a strawman sock puppet, or a sock puppet that disagrees cleverly using existing arguments you've got but then tentatively changes its view, or the like.
Sock puppets, both here and on Reddit or Wikipedia, can be used for multiple purposes, not just that.
One reason I have respect for Eliezer is HPMOR-- there's a huge amount of fan fiction, and writing something which impresses both a lot of people who like fan fiction and a lot of people who don't like fan fiction is no small achievement.
Also, it's the only story I know of which gets away with such huge shifts in emotional tone. (This may be considered a request for recommendations of other comparable works.)
Furthermore, Eliezer has done a good bit to convince people to think clearly about what they're doing, and sometimes even to make useful changes in their lives as a result.
I'm less sure that he's right about FAI, but those two alone are enough to make for respect.
After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.
If you want to do either, you have to do work, get paid, and donate to the appropriate charity. The only difference is where you donate.
I can't seem to find a charity for asteroid impact avoidance. As far as I can tell, everything going into that is done by g...
it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.
After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.
What? Really? I happen to have drunk the kool-aid and think this is an important problem. So it is very important to me that you tell me how it is that I could solve FAI without doing any real work.
I was all prepared to vote this up for "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?" But then you had to go and be wrong - suggesting some sort of "lazy ideas only" search process that makes no sense historically, and conflating LW and SI.
Actually, let this be my last post here; I get dragged back in any time I resolve to leave and then check if anyone has messaged me.
A two hour self-imposed exile! I think that beats even XiXiDu's record.
What do you think of contemporary theoretical physics? That is also mostly "arguing on the Internet".
What do you think of contemporary theoretical physics? That is also mostly "arguing on the Internet".
Some of it yes. At the end of the day though, some of it does lead to real experiments, which need to pay rent. And some of it does quite well at that. Look for example at the recent discovery of the Higgs boson.
Ok, if what you're saying is not "SI concludes this" but just that we don't really know what even the theoretical SI concludes, then I don't disagree with that and in fact have made similar points before. (See here and here.) I guess I give Eliezer and Luke more of a pass (i.e. don't criticize them heavily based on this) because it doesn't seem like any other proponent of algorithmic information theory (for example Schmidhuber or Hutter) realizes that Solomonoff Induction may not assign most posterior probability mass to "physics sim + locat...
But an interest in rationality pulls in expertise transferable to all manner of fields, e.g. the 2011 survey result showing 56.5% agreeing with the MWI. (I certainly hope the next survey will ask how many of those saying they agree or disagree with it can solve the Schroedinger equation for a hydrogen atom, and also how many of those expressing a position would understand the solution to the Schroedinger equation for a hydrogen atom if they saw it written out.) So acquiring meaningful knowledge of artificial intelligence is par for the course.
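(For calibration on what "solve the Schroedinger equation for a hydrogen atom" involves, here is the standard textbook statement of the problem and the energy levels that fall out of it; this is background physics, nothing specific to the survey.)

```latex
% Time-independent Schroedinger equation for hydrogen (standard textbook form);
% \mu is the reduced mass of the electron-proton system.
-\frac{\hbar^{2}}{2\mu}\nabla^{2}\psi(\mathbf{r})
\;-\;\frac{e^{2}}{4\pi\varepsilon_{0}\,r}\,\psi(\mathbf{r})
\;=\; E\,\psi(\mathbf{r}),
\qquad
E_{n} \;=\; -\frac{13.6\ \text{eV}}{n^{2}}, \quad n = 1, 2, 3, \dots
```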
57%, incidentally, is almost exactly equal to the results of one poll of cosmologists/physicists.
I'm not sure about the community at large, but as for some people, like Eliezer, they have very good reasons for why working on FAI actually makes the most sense for them to do, and they've gone to great lengths to explain this. So if you want to limit your cynicism to "armchair rationalists" in the community, fine, but I certainly don't think this extends to the actual pros.
Also, I only thought of that when you would go on about how I must feel so humiliated by some comment of yours.
(I did not make such a claim, nor would I make one.)
SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.
I think it is more that you can't build a genuine General Intelligence before you have solved some intractable mathematical problems, like pr...
Suppose there was a suspicion that 2-3 people with particularly strong views had just decided to pick on your account and downvote it (from back when it was Dmytry)? How do you actually check that?
Or, you know, get over it. It's just karma!
My friend, hearing me recount tales of LessWrong, recently asked me if I thought it was simply a coincidence that so many LessWrong rationality nerds cared so much about creating Friendly AI. "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?"
Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.
After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.
Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.
Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.