
Comment author: gwern 26 May 2015 11:39:10PM *  0 points [-]

Again, the bylaws ban members of the Society for Cryobiology from practicing or endorsing cryonics; they do not mandate them to sabotage the publication of research by cryonicists. One thing does not necessarily imply the other.

Give me a break. When a professional society has declared something to be so beyond the pale that it will formally censure and expel any member who goes near it or says anything positive about it, there is going to be a chilling effect for anyone doing closely related research, and people have said as much privately.

the latter would be a gross breach of scientific ethics

Yes, it is. So? P-hacking is a gross breach of scientific ethics. Falsifying or tweaking data is a gross breach of scientific ethics. I hate to break it to you, but as a factual matter, these sorts of things happen all the time. Scientists will admit to them in anonymous surveys at high rates, and of course the more statistical sins appear in bright neon lights in any sort of meta-analysis of these topics, like publication bias (hoo boy, is that a breach of medical ethics! but it happens all the time anyway). Did you read the recent paper by Jussim et al on discrimination in psychology and how practicing psychologists are totally willing to discriminate against conservative research, and admit to it? (Or are you now going to argue that all conservative-connected theories must be pseudoscience put forth by cranks, against which this discrimination is totally fair...?) What makes you think any cryobiologists would be fairer? Peer review fails all the time. You keep ignoring my comments on this matter: discriminating against unpopular or niche ideas is routine and common; cryonics would just be yet another instance. You keep trying to put the burden of proof on my side and implying that I'm saying something shocking, but really, I'm not; the idealized picture of scientists you have in your mind is very far from the gritty reality of academic politics, tribes (sorry, I mean 'labs' or 'schools' or 'departments') of researchers, and anonymous peer review. It's unfortunate that peer review has been rather misleadingly promoted to the public as the reason science works (as opposed to testing falsifiable predictions or replication), but it's not true; peer review is not necessary and not very good.

The paper is not about cryonics; it is about cryopreservation of C. elegans

It is about using cryonics-oriented techniques to verify a claim of extreme interest to cryonicists, and of minimal interest to cryobiology in general (which does not care much about whole organisms or their neurological integrity, but about narrower more applied topics like gametes or organ transplants where neurons either don't exist or are largely irrelevant). The title alone screams cryonics to any peer reviewer competent enough to be reviewing it. Seeing the authors and their affiliation is merely the final straw.

they would have to be extremely prejudiced to reject it out of hand without considering its scientific merits.

Not really, any more than people elsewhere have to be 'extremely prejudiced' to yield considerably disparate impacts. Someone discriminating against blacks does not have to be constantly talking about how niggers are responsible for everything wrong in America.

And anyway, in the review protocol of most reputable journals, the authors can petition the editor to change the referees if they have a reasonable suspicion that they may be biased.

And how would they prove that, exactly, about the anonymous peer reviewer? If the reviewer pans the paper and trumps up some ultimately minor concerns, how does one prove the reviewer was biased? How does one prove that, given the extreme randomness and inconsistency of the peer review process, where the same paper can be judged in diametrically opposite ways? Has this ever been done? It's difficult enough merely to publish a letter to the editor criticizing some published research, or to get a retraction of papers published about completely bogus data; there's no way one is going to convince the editor that the peer reviewer has it in for one.

Yes, because frauds never happened in cryonics.

Taking money for a service not delivered is quite different from faking research.

Please see the citations about the many serious flaws which have been demonstrated in peer review. Bias is the default.

Said every crackpot on the Internet.

Bullshit! OK, I'm done. I've argued in good faith with you. The problems with peer review are well known. There are countless studies in Google Scholar demonstrating problems in the peer review process, from gender bias to country bias to publication bias to bias regarding novel findings, and so on, on top of all the ones in the Wikipedia link I gave you; your blind veneration of peer review is not based on the empirical reality. Further, the war of cryobiologists on cryonics is well known and well documented, and there is zero reason to believe that cryonics-related papers would be given a fair reception. You keep making ridiculous claims, like saying that pointing out problems with peer review is the sign of a crackpot, or arguing that a professional society banning a topic will somehow have no effect on research and does not indicate any biases in reviewing research on the topic; you demand proof of impossible things, and then completely ignore it when I point to available supporting evidence. This is not a debate; this is you burying your head in the sand and going 'La la la, reviewers are totally fair, there is no opposition to cryonics, peer review is the best thing since sliced bread, and you can't force me to believe otherwise!' Indeed, I can't. So I will stop here.

Comment author: V_V 27 May 2015 12:23:44AM *  0 points [-]

It is about using cryonics-oriented techniques to verify a claim of extreme interest to cryonicists, and of minimal interest to cryobiology in general (which does not care much about whole organisms or their neurological integrity, but about narrower more applied topics like gametes or organ transplants where neurons either don't exist or are largely irrelevant).

I suppose that if I were a neurobiologist I would find the topic of the paper very interesting. I mean, it's about cryopreservation of plastic nervous structures!
If the paper were good science and it turned out that it had been unfairly rejected by major journals, I would be quite disappointed. If it turned out that there was systemic suppression of this kind of research, I would be calling it a scandal.

So where is the evidence of all this wrongdoing? Where are all these unfairly rejected papers?

The problems with peer review are well known. There are countless studies in Google Scholar demonstrating problems in the peer review process, from gender bias to country bias to publication bias to bias regarding novel findings, and so on, on top of all the ones in the Wikipedia link I gave you; your blind veneration of peer review is not based on the empirical reality. Further, the war of cryobiologists on cryonics is well known and well documented, and there is zero reason to believe that cryonics-related papers would be given a fair reception. You keep making ridiculous claims, like saying that pointing out problems with peer review is the sign of a crackpot, or arguing that a professional society banning a topic will somehow have no effect on research and does not indicate any biases in reviewing research on the topic; you demand proof of impossible things, and then completely ignore it when I point to available supporting evidence. This is not a debate; this is you burying your head in the sand and going 'La la la, reviewers are totally fair, there is no opposition to cryonics, peer review is the best thing since sliced bread, and you can't force me to believe otherwise!' Indeed, I can't.

I think you are strawmanning my position.

I'm not claiming that the peer review system is totally fair. I can even concede that it may be biased against cryonics, such that a cryonics-related paper has to pass a higher bar to be accepted.

But your claim is much stronger than that. Your claim is that the peer-review system is so biased that it has effectively managed to systematically keep scientifically sound cryonics-related research out of all major journals.
This is an extraordinary claim. Extraordinary claims need extraordinary evidence.
This is also a claim that would be easy to verify if it were true: just produce the unfairly rejected papers along with the reviewers' comments. This is the type of claim where absence of evidence is evidence of absence.

Comment author: gwern 26 May 2015 06:09:51PM *  4 points [-]

I don't see how experiments on worms would have anything to do with it.

Now you are playing dumb. We are talking about chilling effects, and there are not that many cryobiologists (or cryonicists, for that matter). Everyone has gotten the message sent by the ban.

Oh, you're right. And in related news, global warming doesn't exist, evolution is a hoax, vaccines cause autism, etc...

What on earth are you talking about? The ban is right there in the bylaws. I don't need to misinterpret any hacked emails to talk about it or make up data like Wakefield did and the anti-vaxers do. You seem to be blinded by the phrase 'conspiracy theory'. Small groups organize all the time to promote or criticize particular theories in science, and even you admit the existence of the ban and hostility to cryonics; to paraphrase Patrick Henry, if that be conspiracy theorizing, make the most of it!

The ban's obvious rationale is that cryobiologists believe that cryonics is a pseudo-scientific practice and they don't want the reputation of their field to be tarnished by association with cryonics. You seem to claim that the ban is a matter of personal or tribal hatred

A lot of people do have very emotional reactions to cryonics, but I don't need to prove it's personal or tribal hatred, just point out that any papers to do with cryonics are not going to be treated the same. Whether the peer reviewer believes cryonics is absurd and the work must be wrong and is just searching for an excuse to reject it, or whether they personally hate Mike Darwin because he said something mean to them 40 years ago, doesn't make a difference.

As I said, this is a very serious allegation and it should be backed by evidence.

Please see the citations about the many serious flaws which have been demonstrated in peer review. Bias is the default. If you believe that peer reviews of cryonics papers will be shining exceptions, that should be backed by evidence because that would be a truly remarkable and extraordinary claim.

Dismissing the lack of scientific publications in favor of your pet position by accusing mainstream scientists of being biased is an overly general argument

That they are biased is not in question. The difference is that things like anti-vaxers have been disproven time and again, and have often been shown to be based on fraud or deception; they have no experimental evidence and are not simple extrapolations of current theories, whereas cryonics, while still unproven and highly speculative, is none of those. There's a difference between proto-science and pseudoscience.

Comment author: V_V 26 May 2015 09:00:16PM *  0 points [-]

The ban is right there in the bylaws.

Again, the bylaws ban members of the Society for Cryobiology from practicing or endorsing cryonics; they do not mandate them to sabotage the publication of research by cryonicists. One thing does not necessarily imply the other.
The former is an unusual, perhaps controversial, but IMHO understandable rule that does not constitute professional impropriety; the latter would be a gross breach of scientific ethics. If you want to claim that cryobiologists are doing the latter anyway, then you need evidence.

just point out that any papers to do with cryonics are not going to be treated the same.

The paper is not about cryonics; it is about cryopreservation of C. elegans. The only connections with cryonics are the authors' affiliation and the acknowledgment section, which would not even be visible to the referees if the journal used a double-blind review protocol.
Even assuming that the referees could guess from the content that the paper was coming from cryonicists, they would have to be extremely prejudiced to reject it out of hand without considering its scientific merits.

And anyway, in the review protocol of most reputable journals, the authors can petition the editor to change the referees if they have a reasonable suspicion that they may be biased. Am I to believe that the Society for Cryobiology has so much influence on all the major journals that the editors couldn't find any unbiased referee?

Please see the citations about the many serious flaws which have been demonstrated in peer review. Bias is the default.

Said every crackpot on the Internet.
Peer review has many flaws, but the consistent suppression of correct but unpopular scientific theories by an interest group is not one of them, as far as we know.

Again, evidence please. If what you are saying is true, there should be tons of good scientific articles from cryonicists that were rejected with flimsy excuses. Cryonicists could make them public, and scientists from contiguous sub-fields (e.g. neurobiologists, who often use cryopreservation techniques in their research) would notice. It would be a major scandal that would forever destroy the reputation of mainstream cryobiologists. It is in cryonicists' interest to destroy the reputation of their mortal enemies.
So why does this not happen? Maybe because this heap of unfairly rejected articles does not exist?

That they are biased is not in question.

Actually, it is in question. But even if you assume that they are biased, it doesn't follow that they are blacklisting cryonicists from publishing.

The difference is that things like anti-vaxers have been disproven time and again, and have often been shown to be based on fraud or deception; they have no experimental evidence and are not simple extrapolations of current theories, whereas cryonics, while still unproven and highly speculative, is none of those.

Yes, because frauds never happened in cryonics.

There's a difference between proto-science and pseudoscience.

Sure. I have a cold fusion reactor to sell you...

Comment author: gwern 26 May 2015 05:00:22PM *  5 points [-]

some sort of conspiracy by cryobiologists to prevent cryonicists from publishing...the Society for Cryobiology officially bans its members from practicing or endorsing cryonics

Yes. Some sort of conspiracy. I don't know why anyone would think that. What an odd thing to think.

it has no position about preventing people associated with cryonics organizations from publishing research.

'Comrades, good news. You are free to research and publish anything you want about capitalist economics, as long as it's negative and does not endorse or practice it. Let 100 flowers bloom!'

I would say that this is a very serious accusation of professional misconduct and you should not make it unless you can back it with evidence.

Are you arguing that despite bitter hatred and an astonishing policy outright banning cryonics, this has zero influence on the notoriously politicized, inconsistent, random, risk-averse scientific publication process, which has been amply documented to settle for lowest common denominators, punish ambitious work, and express peer reviewers' personal prejudices by discriminating against minorities, conservatives, etc.? You think that somehow cryonics papers will be an exception to all this, will get a free pass, and will be fairly and impartially evaluated by their sworn enemies?

Comment author: V_V 26 May 2015 05:57:45PM *  -2 points [-]

'Comrades, good news. You are free to research and publish anything you want about capitalist economics, as long as it's negative and does not endorse or practice it. Let 100 flowers bloom!'

The ban is only for members of the Society for Cryobiology and concerns supporting human cryopreservation. I don't see how experiments on worms would have anything to do with it.

Are you arguing that despite bitter hatred and an astonishing policy outright banning cryonics, this has zero influence on the notoriously politicized, inconsistent, random, risk-averse scientific publication process, which has been amply documented to settle for lowest common denominators, punish ambitious work, and express peer reviewers' personal prejudices by discriminating against minorities, conservatives, etc.? You think that somehow cryonics papers will be an exception to all this, will get a free pass, and will be fairly and impartially evaluated by their sworn enemies?

Oh, you're right. And in related news, global warming doesn't exist, evolution is a hoax, vaccines cause autism, etc...

The ban's obvious rationale is that cryobiologists believe that cryonics is a pseudo-scientific practice and they don't want the reputation of their field to be tarnished by association with it.

You seem to claim that the ban is a matter of personal or tribal hatred and that cryobiologists are acting like a religious cult, trying to do everything in its power to undermine the heretics, even by suppressing perfectly good cryobiology research.
As I said, this is a very serious allegation and it should be backed by evidence.

Dismissing the lack of scientific publications in favor of your pet position by accusing mainstream scientists of being biased is an overly general argument that could be used, and is in fact used, to support every crackpot theory out there.

Comment author: gwern 24 May 2015 03:33:33PM 7 points [-]
Comment author: V_V 26 May 2015 03:24:02PM *  -2 points [-]

It seems that you are trying to argue that there is some sort of conspiracy by cryobiologists to prevent cryonicists from publishing in high-impact journals.

If I understand correctly, the Society for Cryobiology officially bans its members from practicing or endorsing cryonics (defined as the cryopreservation of human corpses for the purpose of reanimation), but it has no position about preventing people associated with cryonics organizations from publishing research.

If you want to claim that cryobiologists are covertly suppressing research by cryonicists by lobbying journal editors or abusing the peer review system, I would say that this is a very serious accusation of professional misconduct and you should not make it unless you can back it with evidence.

Comment author: RobbBB 22 May 2015 08:31:03PM *  7 points [-]

Maybe they are not explicitly saying "near-optimal", but it seems to me that they are using models like Solomonoff Induction and AIXI as intuition pumps, and they are getting these beliefs about extreme intelligence from there.

I don't think anyone at MIRI arrived at worries like 'AI might be able to deceive their programmers' or 'AI might be able to design powerful pathogens' by staring at the equation for AIXI or AIXItl. AIXI is a useful idea because it's well-specified enough to let us have conversations that are more than just 'here are my vague intuitions vs. your vague intuitions'; it's math that isn't quite the right math to directly answer our questions, but at least gets us outside of our own heads, in much the same way that an empirical study can be useful even if it can't directly answer our questions.

Investigating mathematical and scientific problems that are near to the philosophical problems we care about is a good idea, when we still don't understand the philosophical problem well enough to directly formalize or test it, because it serves as a point of contact with a domain that isn't just 'more vague human intuitions'. Historically this has often been a good way to make intellectual progress, though it's important to keep in mind just how limited our results are.

AIXI is also useful because the problems we couldn't solve even if we (impossibly) had recourse to AIXI often overlap with the problems where our theoretical understanding of intelligence is especially lacking, and where we may therefore want to concentrate our early research efforts.

The idea that AI will have various 'superpowers' comes more from:

  • (a) the thought that humans often vary a lot in how much they exhibit the power (without appearing to vary all that much in hardware);

  • (b) the thought that human brains have known hardware limitations, where existing machines (and a fortiori machines 50 or 100 years down the line) can surpass humans by many orders of magnitude; and

  • (c) the thought that humans have many unnecessary software limitations, including cases where machines currently outperform humans. There's also no special reason to expect evolution's first stab at technology-capable intelligence to have stumbled on all the best possible software ideas.

A more common intuition pump is to simply note that limitations in human brains suggest speed superintelligence is possible, and it's relatively easy to imagine speed superintelligence allowing one to perform extraordinary feats without imagining other, less well-understood forms of cognitive achievement. Rates of cultural and technological progress in human societies are a better (though still very imperfect) source of data than AIXI about how much improvement intelligence makes possible.

Anyway, do you disagree that MIRI in general expects the kind of low-data, low-experimentation, prior-driven learning that I talked about to be practically possible?

This should be possible to some extent, especially when it comes to progress in mathematics. We should also distinguish software experiments from physical experiments, since it's a lot harder to keep an AI from performing the former, and the former are much easier to speed up in proportion to speed-ups in the experimenter's ability to analyze results.

I don't think there's any specific consensus view about how much progress requires waiting for results from slow experiments. I frequently hear Luke raise the possibility that slow natural processes could limit rates of self-improvement in AI, but I don't know whether he considers that a major consideration or a minor one.

Comment author: V_V 24 May 2015 09:09:59PM *  2 points [-]

I don't think anyone at MIRI arrived at worries like 'AI might be able to deceive their programmers' or 'AI might be able to design powerful pathogens' by staring at the equation for AIXI or AIXItl.

In his quantum physics sequence, where he constantly talks (rants, actually) about Solomonoff Induction, Yudkowsky writes:

A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.

Anna Salamon also mentions AIXI when discussing the feasibility of super-intelligence.

Mind you, I'm not saying that AIXI is not an interesting and possibly theoretically useful model; my objection is that MIRI people seem to have used it to set a reference class for their intuitions about super-intelligence.

Rates of cultural and technological progress in human societies are a better (though still very imperfect) source of data than AIXI about how much improvement intelligence makes possible.

Extrapolation is always an epistemically questionable endeavor.
Intelligence is intrinsically limited by how predictable the world is. Efficiency (time complexity/space complexity/energy complexity/etc.) of algorithms for any computational task is bounded. Hardware resources also have physical limits.

This doesn't mean that, given our current understanding, we can claim that human-level intelligence is an upper bound. That would most likely be false. But there is no particular reason to assume that the physically attainable bound will be enormously higher than human level. The more extreme the scenario, the less probability we should assign to it, reasonably according to a light-tailed distribution.
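Concretely, "light-tailed" here means something like the following rough sketch (the capability scale c and the constants are hand-wavy placeholders, not a worked-out model):

    \Pr(\text{attainable capability} > c) \;\le\; A\, e^{-\lambda c} \qquad \text{for some } A, \lambda > 0,

so the prior mass assigned to a scenario shrinks at least exponentially as the scenario gets more extreme, rather than only polynomially as it would under a heavy-tailed prior such as \Pr(\cdot > c) \propto c^{-\alpha}.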

This should be possible to some extent, especially when it comes to progress in mathematics.

Ok, but my point is that it has not been established that progress in mathematics will automatically grant an AI "superpowers" in the physical world.
And I'd say that even superpowers from raw cognitive power alone are questionable. Theorem proving can be sped up, but there is more to math than theorem proving.

Comment author: RobbBB 22 May 2015 07:39:19PM *  6 points [-]

Cite?

Müller and Bostrom's 2014 'Future progress in artificial intelligence: A survey of expert opinion' surveyed the 100 top-cited living authors in Microsoft Academic Search's "artificial intelligence" category, asking the question:

Define a "high-level machine intelligence" (HLMI) as one that can carry out most human professions at least as well as a typical human. [...] For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?

29 of the authors responded. Their median answer was a 10% probability of HLMI by 2024, a 50% probability of HLMI by 2050, and a 90% probability by 2070.

(This excludes how many said "never"; I can't find info on whether any of the authors gave that answer, but in pooled results that also include 141 people from surveys of a "Philosophy and Theory of AI" conference, an "Artificial General Intelligence" conference, an "Impacts and Risks of Artificial General Intelligence" conference, and members of the Greek Association for Artificial Intelligence, 1.2% of the people in the overall pool (2 / 170) said we'd "never" have a 10% chance of HLMI, 4.1% (7 / 170) said "never" for 50% probability, and 16.5% (28 / 170) said "never" for 90%.)
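For reference, a quick sketch to recompute those pooled fractions from the counts quoted above (just arithmetic; the variable names are mine):

    # Pooled "never" answers from the combined surveys (29 + 141 = 170 respondents).
    pool = 29 + 141
    never_counts = {"10% question": 2, "50% question": 7, "90% question": 28}
    for question, count in never_counts.items():
        print(f"'never' for the {question}: {count}/{pool} = {count / pool:.1%}")
    # -> 1.2%, 4.1%, and 16.5%, matching the figures above.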

In Bostrom's Superintelligence (pp. 19-20), he cites the pooled results:

The combined sample gave the following (median) estimate: 10% probability of HLMI by 2022, 50% probability by 2040, and 90% probability by 2075. [...]

These numbers should be taken with some grains of salt: sample sizes are quite small and not necessarily representative of the general expert population. They are, however, in concordance with results from other surveys.

The survey results are also in line with some recently published interviews with about two dozen researchers in AI-related fields. For example, Nils Nilsson has spent a long and productive career working on problems in search, planning, knowledge representation, and robotics; he has authored textbooks in artificial intelligence; and he has recently completed the most comprehensive history of the field written to date. When asked about arrival dates for [AI able to perform around 80% of jobs as well or better than humans perform], he offered the following opinion: 10% chance: 2030[;] 50% chance: 2050[;] 90% chance: 2100[.]

Judging from the published interview transcripts, Professor Nilsson's probability distribution appears to be quite representative of many experts in the area--though again it must be emphasized that there is a wide spread of opinion: there are practitioners who are substantially more boosterish, confidently expecting HLMI in the 2020-40 range, and others who are confident either that it will never happen or that it is indefinitely far off. In addition, some interviewees feel that the notion of a "human level" of artificial intelligence is ill-defined or misleading, or are for other reasons reluctant to go on record with a quantitative prediction.

My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates. A 10% probability of HLMI not having been developed by 2075 or even 2100 (after conditionalizing on "human scientific activity continuing without major negative disruption") seems too low.

Luke has pretty much the same view as Bostrom. I don't know as much about Eliezer's views, but the last time I talked with him about this (in 2014), he didn't expect AGI to be here in 20 years. I think a pretty widely accepted view at MIRI and FHI is Luke's: "We can’t be confident AI will come in the next 30 years, and we can’t be confident it’ll take more than 100 years, and anyone who is confident of either claim is pretending to know too much."

Comment author: V_V 22 May 2015 07:59:25PM 3 points [-]

Thanks!

Comment author: RobbBB 22 May 2015 01:00:01AM *  9 points [-]

I think that the AI-risk advocates tend to exaggerate various elements of their analysis: they probably underestimate time to human-level AI and time to super-human AI

It's worth keeping in mind that AI-risk advocates tend to be less confident that AGI is nigh than the top-cited scientists within AI are. People I know at MIRI and FHI are worried about AGI because it looks like a technology that's many decades away, but one where associated safety technologies are even more decades away.

That's consistent with the possibility that your criticism could turn out to be right. It could be that we're less wrong than others on this metric and yet still very badly wrong in absolute terms. To make a strong prediction in this area is to claim to already have a pretty good computational understanding of how general intelligence works.

Moreover, it seems that they tend to conflate super-intelligence with a sort of near-omniscience: They seem to assume that a super-intelligent agent will be a near-optimal Bayesian reasoner

Can you give an example of a statement by a MIRI researcher that is better predicted by 'X is speaking of the AI as a near-optimal Bayesian' than by 'X is speaking of the AI as an agent that's as much smarter than humans as humans are smarter than chimpanzees, but is still nowhere near optimal'? (Or 'an agent that's as much smarter than humans as humans are smarter than dogs'...) I'm not seeing why saying 'Bob the AI could be 100x more powerful than a human', for example, commits one to a view about how close Bob is to optimal.

Comment author: V_V 22 May 2015 12:33:24PM *  1 point [-]

It's worth keeping in mind that AI-risk advocates tend to be less confident that AGI is nigh than the top-cited scientists within AI are.

Cite? I think I remember Eliezer Yudkowsky and Luke Muehlhauser going for the usual "20 years from now" (said in 2009) prediction for time to AGI.
By contrast, Andrew Ng says "Maybe hundreds of years from now, maybe thousands of years from now".

Can you give an example of a statement by a MIRI researcher that is better predicted by 'X is speaking of the AI as a near-optimal Bayesian' than by 'X is speaking of the AI as an agent that's as much smarter than humans as humans are smarter than chimpanzees, but is still nowhere near optimal'?

Maybe they are not explicitly saying "near-optimal", but it seems to me that they are using models like Solomonoff Induction and AIXI as intuition pumps, and they are getting these beliefs about extreme intelligence from there.
Anyway, do you disagree that MIRI in general expects the kind of low-data, low-experimentation, prior-driven learning that I talked about to be practically possible?

Comment author: Mark_Friedenbach 22 May 2015 12:20:29AM *  8 points [-]

Maybe this is the community bias that you were talking about, the over-reliance on abstract thought rather than evidence, projected onto a hypothetical future AI.

You nailed it. (Your other points too.)

The claim [is] that a super-intelligent agent would be more efficient at understanding humans than humans would be at understanding it, giving the super-intelligent agent an edge over humans.

The problem here is that intelligence is not some linear scale, even general intelligence. We human beings are insanely optimized for social intelligence in a way that is not easy for a machine to learn to replicate, especially without detection. It is possible for a general AI to be powerful enough to provide meaningful acceleration of molecular nanotechnology and medical science research whilst being utterly befuddled by social conventions and generally how humans think, simply because it was not programmed for social intelligence.

Anyway, as much as they exaggerate the magnitude and urgency of the issue, I think that the AI-risk advocates have a point when they claim that keeping a system much more intelligent than ourselves under control would be a non-trivial problem.

There is, however, a substantial difference between a non-trivial problem and an impossible problem. Non-trivial we can work with. I solve non-trivial problems for a living. You solve a non-trivial problem by hacking at it repeatedly until it breaks into components that are themselves well understood enough to be trivial problems. It takes a lot of work, and the solution is simply to do a lot of work.

But in my experience the AI-risk advocates claim that safe / controlled UFAI is an impossibility. You can't solve an impossibility! What's more, in that frame of mind any work done towards making AGI is risk-increasing. Thus people are actively persuaded to NOT work on artificial intelligence, and instead to work on fields of basic mathematics which are at this time too basic or speculative to say for certain whether they would have a part in making a safe or controllable AGI.

So smart people who could be contributing to an AGI project are now off fiddling with basic mathematics research on chalkboards instead. That is, in the view of someone who believes safe / controllable UFAI is a non-trivial but possible mechanism to accelerate the arrival of life-saving anti-aging technologies, a humanitarian disaster.

Comment author: V_V 22 May 2015 12:13:46PM 4 points [-]

The problem here is that intelligence is not some linear scale, even general intelligence. We human beings are insanely optimized for social intelligence in a way that is not easy for a machine to learn to replicate, especially without detection. It is possible for a general AI to be powerful enough to provide meaningful acceleration of molecular nanotechnology and medical science research whilst being utterly befuddled by social conventions and generally how humans think, simply because it was not programmed for social intelligence.

Agree.

I think that since many AI risk advocates have little or no experience in computer science and specifically AI research, they tend to anthropomorphize AI to some extent. They get that an AI could have goals different from human goals, but they seem to think that its intelligence would be more or less like human intelligence, only faster and with more memory. In particular, they assume that an AI will easily develop a theory of mind and social intelligence from little human interaction.

But in my experience the AI-risk advocates claim that safe / controlled UFAI is an impossibility.

I think they used to claim that safe AGI was pretty much an impossibility unless they were the ones who built it, so gib monies plox!
Anyway, it seems that in recent times they have taken a somewhat less heavy-handed approach.

Comment author: V_V 21 May 2015 11:26:59PM *  9 points [-]

I generally agree with your position on the Sequences, but it seems to me that it is possible to hang around this website and have meaningful discussions without worshiping the Sequences or Eliezer Yudkowsky. At least it works for me.
As for being a highly involved/high status member of the community, especially the offline one, I don't know.

Anyway, regarding the point about super-intelligence that you raised, I charitably interpret the position of the AI-risk advocates not as the claim that super-intelligence would be in principle outside the scope of human scientific inquiry, but as the claim that a super-intelligent agent would be more efficient at understanding humans than humans would be at understanding it, giving the super-intelligent agent an edge over humans.

I think that the AI-risk advocates tend to exaggerate various elements of their analysis: they probably underestimate time to human-level AI and time to super-human AI, and they may overestimate the speed and upper bounds of recursive self-improvement (their core arguments based on exponential growth seem, at best, unsupported).

Moreover, it seems that they tend to conflate super-intelligence with a sort of near-omniscience:
They seem to assume that a super-intelligent agent will be a near-optimal Bayesian reasoner with an extremely strong prior that will allow it to gain a very accurate model of the world, including all the nuances of human psychology, from a very small amount of observational evidence and few or no interventional experiments. Recent discussion here.
Maybe this is the community bias that you were talking about, the over-reliance on abstract thought rather than evidence, projected onto a hypothetical future AI.
It seems dubious to me that this kind of extreme inference is even physically possible, and if it is, we are certainly not anywhere close to implementing it. All the recent advances in machine learning, for instance, rely on processing very large datasets.

Anyway, as much as they exaggerate the magnitude and urgency of the issue, I think that the AI-risk advocates have a point when they claim that keeping a system much more intelligent than ourselves under control would be a non-trivial problem.

Comment author: Mirzhan_Irkegulov 21 May 2015 10:47:22PM 0 points [-]

Is it correct to compare CFAR with religions and mumbo jumbo? Most self-help on the Web is crap precisely because it doesn't have even minimally good epistemology. I'd automatically trust CFAR more simply because they're standing on a rationalist basis, which is a massive step forward. It's just that this massive step forward is not enough to start having an impact. It's more correct, I think, to compare CFAR with current research in CBT.

Comment author: V_V 21 May 2015 11:13:20PM 4 points [-]

Is it correct to compare CFAR with religions and mumbo jumbo?

I think it is. CFAR could be just a more sophisticated type of mumbo jumbo tailored to appeal to materialists. Just because they are not talking about gods or universal quantum consciousness doesn't mean that their approach is any more grounded in evidence.
Maybe it is, but I would like to see some replicable study about it. I'm not going to give them a free pass because they display the correct tribal insignia.
