WrongBot comments on Existential Risk and Public Relations - Less Wrong
On Eileen Barker:
I believe that most LW posters are not signed up for cryonics (myself included), and there is substantial disagreement about whether it's a good idea. And that disagreement has been well received by the "cult", judging by the karma scores involved.
Theism has been discussed. It is wrong. But Robert Aumann's work is still considered very important; theists are hardly dismissed as "satanic," to use Barker's word.
Of Barker's criteria, 2-4 of 6 apply to the LessWrong community, and only one ("Leaders and movements who are unequivocally focused on achieving a certain goal") applies strongly.
On Shirley Harrison:
I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.
No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.
What you describe is a preposterous exaggeration, not "[t]otalitarianism and alienation of members from their families and/or friends."
Any person who promotes a charity at which they work is pushing a cult, by this interpretation. Eliezer isn't "lining his own pockets"; if someone digs up the numbers, I'll donate $50 to a charity of your choice if it turns out that SIAI pays him a salary disproportionately greater (2 sigmas?) than the average for researchers at comparable non-profits.
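For concreteness, here is one way to read the "2 sigmas" condition (my interpretation of the bet, not terms the commenter spelled out): if salaries for researchers at comparable non-profits have mean \(\mu\) and standard deviation \(\sigma\), the bet pays out only if Eliezer's salary \(s\) satisfies

\[
s > \mu + 2\sigma
\]

Under a roughly normal salary distribution, that corresponds to about the top 2.3% of comparable salaries, a deliberately high bar.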
So that's 2-6 of Harrison's checklist items for LessWrong, none of them particularly strong.
My filters would drop LessWrong into the "probably not a cult" category, based on those two standards.
Eliezer was compensated $88,610 in 2008, according to the Form 990 filed with the IRS, which I downloaded from GuideStar.
Wikipedia tells me that the median 2009 income in Redwood, where Eliezer lives, is $69,000.
(If you are curious, Tyler Emerson in Sunnyvale (median income 88.2k) makes 60k; Susan Fonseca-Klein, also in Redwood, was paid 37k. Total employee expenses are 200k, but the three salaries sum to roughly 185.6k; I don't know what accounts for the remaining ~14k. The form doesn't seem to say.)
In particular, there seems to be a lot of disagreement about the metaethics sequence, and to a lesser extent about timeless physics.
What exactly are Eliezer's qualifications supposed to be?
You mean, "What are Eliezer's qualifications?" Phrasing it that way makes it sound like a rhetorical attack rather than a question.
To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.
I'm definitely not trying to attack anyone (and you're right that my comment could be read that way). But I'm also not just curious; I figured this was the answer. Lots of time spent thinking, writing, and producing influential publications on FAI is about all the qualification one can reasonably expect: producing a provable mathematical formalization of friendliness is the kind of thing no one is qualified to do before they do it, and the AI field in general is relatively new and small. And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it.

But the effort to address the friendliness issue seems way too focused on him and the people around him. You shouldn't expect any one person to solve a Hard problem; insight isn't that predictable, especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory, but a) he never did, and b) he, at least, had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.
No one looks at open problems in other fields this way.
Yes, the situation isn't normal or good. But this isn't a balanced comparison: we don't currently have a field, and too few people understand the problem or have seriously thought about it. This is gradually changing, and I expect it will be visibly less of a problem in another 10 years.
I may have an incorrect impression, but SIAI or at least Eliezer's department seems to have a self-image comparable to the Manhattan project rather than early pioneers of a scientific field.
Eliezer's past remarks seem to have pointed to a self-image comparable to the Manhattan project. However, according to the new SIAI Overview:
Eliezer has said: "I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me." Your call as to whether you believe that. (The rest of that post, and some of his other posts in that discussion, address some points similar to those that you raised.)
That said, "self-image comparable to the Manhattan project" is an unusually generous ascription of humility to SIAI and Eliezer. :P
They want to become comparable to the Manhattan project, in part by recruiting additional FAI researchers. They do not claim to be at that stage now.
I haven't seen any proof of his math skills that would justify this statement. By what evidence have you arrived at the conclusion that he can do it at all, or even approach it? The sequences and the SIAI publications certainly show that he was able to compile a bunch of existing ideas into a coherent framework of rationality, yet there is not much novelty to be found anywhere.
Which statement are you talking about? Saying someone is the most likely person to do something is not the same as saying they are likely to do it. You haven't said anything in this comment that I disagree with, so I don't understand what we're disputing.
Great comment.
How influential are his publications if they could not convince Ben Goertzel (SIAI/AGI researcher), someone who has read Yudkowsky's publications and all of the LW sequences? You could argue that he and other people don't have the smarts to grasp Yudkowsky's arguments, but who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work, or there is another problem. How are we, who are far below his level, supposed to evaluate whether we should believe what Yudkowsky says, if we are neither smart enough to do so nor able to subject his work to empirical criticism?
The problem here is that telling someone that Yudkowsky spent a lot of time thinking and writing about something is not a qualification. Further, it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.
The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn't have to be free of people who disagree with it to be influential, and it doesn't even have to be correct.
Level up first. I can't evaluate physics research, so I just accept that I can't tell which of it is correct; I don't try to figure it out from the politics of physicists arguing with each other, because that doesn't work.
But what does this mean regarding my support of the SIAI? Imagine I were a politician who had no time to level up first but who had to decide whether some particle accelerator or AGI project should be financed at all, or should go ahead with full support and without further security measures.
Would you tell a politician to go and read the sequences, and if, after reading the publications, they don't see why AGI research is as dangerous as the SIAI portrays it, they should just forget about it and stop trying to figure out what to do? Or do you simply tell them to trust a fringe group which predicts that a given particle accelerator might destroy the world when all the experts claim there is no risk?
You talked about Yudkowsky's influential publications. I thought you meant some academic papers, not the LW sequences. They indeed influenced some people, yet I don't think they influenced the right people.
Downvoted for this:
Your interpretation seems uncharitable. I find it unlikely that you have enough information to make a confident judgment that XiXiDu's comment is born of motivated cognition to a greater extent than your own comments.
Moreover, I believe that even when such statements are true, one should avoid making them when possible, as they're easily construed as personal attacks. Such attacks tend to spawn an emotional reaction in one's conversation partners, pushing them into an "arguments as soldiers" mode that is detrimental to rational discourse.
Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you're going wrong, you won't improve.
On this blog, any conversational partners should definitely not be construing anything as personal attacks.
On this blog, any person should definitely be resisting this push.
I did not say that one should avoid telling people when and where they're going wrong. I was objecting to the practice of questioning people's motivations. For the most part I don't think that questioning somebody's motivations is helpful to him or her.
I disagree. Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise. Just because the blog is labeled as being devoted to the art of refining rationality doesn't mean that the commentators are always above this sort of thing.
I agree with you insofar as I think that one should work to interpret comments charitably.
I agree, but this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.
Not questioning their motivations; you objected to the practice of pointing out motivated cognition:
Pointing out that someone hasn't thought through the issue because they are motivated not to - this is not an attack on their motivations; it is an attack on their not having thought through the issue. Allowing people to keep their motivated cognitions out of respect for their motivations is wrong, because it doesn't let them know that they have something wrong, and they miss a chance to improve it.
To paraphrase steven, if you're interested in winning disputes you should dismiss personal attacks, but if you're interested in the truth you should dig through personal attacks for any possible actual arguments. Whether or not something is a personal attack, you ought to construe it as if it is not, in order to maximise your chances of finding truth.
Agreed. I think the first two parts of our comments address whether one should exert such a push. I think you're right, and this whole third part of our discussion is irrelevant.
It's quite possible to be inaccurate about other people's motivations, and if you are, then they will have another reason to dismiss your argument.
How do you identify motivated cognition in other people?
Not thinking something through could stem from habitual sloppiness, from repeating what one has heard many times, or from not considering a question worthy of much mental energy, rather than from a strong desire for a particular conclusion. (Not intended as a complete list.)
Making a highly specific deduction from an absence rather than a presence strikes me as especially likely to go wrong.
Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.
For what it's worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He's also responsible for coining "Seed AI".
Indeed. I was just trying to figure out what someone with money or power, who wants to do the right thing but does not have the smarts, should do. Someone like a politician or billionaire who would like to support either some AGI research or the SIAI. How are they going to decide what to do if all the AGI experts tell them that there is no risk from AGI research and that the SIAI is a cult, when at the same time the SIAI tells them the AGI experts are intellectually impotent and the SIAI is the only hope for humanity to survive the AI revolution? What should someone who does not have the expertise or smarts to evaluate those claims, but who nevertheless has to decide how to use their power, do? I believe this is not an unrealistic scenario, as many rich or powerful people want to do the right thing, yet do not have the smarts to see why they should trust Yudkowsky instead of hundreds of experts.
Interesting. When did he come up with the concept of "Seed AI"? It is mentioned in Karl Schroeder's Ventus (Tor Books, 2000), ISBN 978-0312871970.
I didn't find the phrase "Seed AI" there. One plot element is a "resurrection seed", which is created by an existing, mature evil AI to grow itself back together in case its main manifestation is destroyed. A Seed AI is a different concept: something the pre-AI engineers put together that grows into a superhuman AI by rewriting itself to be more and more powerful. A Seed AI is specifically a method to get to AGI from not having one, not just an AI that grows from a seed-like thing. I don't remember recursive self-improvement being mentioned in connection with the seed in Ventus.
A precursor concept, where the initial AI bootstraps itself by merely learning things, not necessarily by rewriting its own architecture, goes all the way back to Alan Turing's 1950 paper on machine intelligence.
Here is a quote from Ventus:
[...]
...and here's a quote from I.J. Good, from 1965:
He didn't coin the term "Seed AI" either.
Yes, but I believe it is a bit weird for a Wikipedia article to state that someone is the originator of the Seed AI theory when he merely coined the term. I wasn't disputing anything, just trying to figure out whether it is actually the case that Yudkowsky came up with the concept in the first place.
It is further explained that the Winds were designed to evolve on their own, so that they are not mere puppets of human intentions but possess their own intrinsic architecture.
In other places in the book it is explained how humans did not create their AI Gods directly; rather, the Gods evolved themselves from seeds designed by humans.
I don't think the failure of someone to be convinced of some position is ever strong evidence against that position. But this argument here is genuinely terrible. "I disagree with person x about y, therefore person x is wrong about z"? Do we even have to go into why this is fallacious?
Ever is a strong word. If a competent expert in a field, one with a known tendency to err slightly on the side of too much openness to the cutting edge, fails to be convinced by a new finding within his field, that says an awful lot.
That is simply not the form of the argument you quote. "Ben Goertzel believes in psychic phenomena" cannot be represented as "I disagree with person x".
I'm being generous and giving the original comment credit for an implicit premise. As stated, the argument is "Person x believes y, therefore person x is wrong about z." This is so obviously wrong it makes my head hurt. WrongBot's point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn't provide any evidence to that effect, it reduces to "I disagree with Goertzel about psi."
Fair point re: "ever".
I generally don't try to provide evidence for every single thing I say, and I am especially lax about things that I consider to be incredibly obvious.
But I'm annoyed enough to lay out a very brief summary of why belief in psi is ludicrous:
No one has to give evidence for everything they say, but when things that you thought were obviously wrong begin to get defended by physics-literate reductionist materialists, that seems like a good time to lower your confidence.
Well, to begin with, Goertzel's paper claims to provide such a mechanism. Have you read it? I don't know whether it works or not. It seems unwise to assume it doesn't, though.
Publication bias, motivated cognition, and effect size are all concerns, and they were my previous explanation. But I found that this meta-analysis upset that view for me.
It would be wrong if it were a logical deduction instead of an inference. That is, if WrongBot had actually written "therefore" or otherwise signaled absolute deductive certainty, then he would be mistaken. As it is, he presents it as evidence, which it in fact is.
There is a clear implied premise: "psychic phenomena are well known to be bullshit." Not all baseline premises must be supported in an argument. Instead, the argument should be considered stronger or weaker depending on how reliable the premises are. I don't think WrongBot loses much credibility in this case by dismissing psychic phenomena.
It isn't even evidence until you include a premise about the likelihood of y, which we agree is the implied premise.
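To make the implied premise explicit in Bayesian terms (a sketch; the notation is mine, not anything stated in the thread), write Bayes' theorem in ratio form:

\[
\frac{P(\text{wrong about } z \mid \text{believes } y)}{P(\text{wrong about } z)}
= \frac{P(\text{believes } y \mid \text{wrong about } z)}{P(\text{believes } y)}
\]

Believing y counts as evidence of unreliability about z only if the right-hand side exceeds 1, that is, only if believing y is more probable among unreliable reasoners than among reliable ones. That likelihood premise is exactly what "psychic phenomena are well known to be bullshit" is standing in for.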
I think I'm just restating the exchange I had with komponisto on this point. Goertzel's position isn't that of someone who doesn't know any physics or Enlightenment-style rationality. It is clearly a contrarian position, which should be treated rather differently, since we can assume he is familiar with the reasons why psychic phenomena are 'well known to be bullshit'. This is a fully generalizable tactic which can be used against any and all contrarian thinkers. Try: "Robin Hanson thinks we should cut health care spending 50%, therefore he is less likely to be right about fertility rates."
The extent to which it is fallacious depends rather strongly on what y and z (and even x) are, it seems to me.
Any argument of this nature needs to include some explanation of why someone's ability to think about y is linked to their ability to think about z. But even with that (which wasn't included in the comment) you can only conclude that y and z imply each other. You can't just conclude z.
In other words, you have to show Goertzel is wrong about psychic phenomena before you can show that his belief in them is indicative of reasoning flaws elsewhere.
I don't disagree in principle, but psychic phenomena are pretty much fundamentally ruled out by current physics. So a person's belief in them raises serious doubts about that person's understanding of science at the very least, if not their general rationality level.
I got the impression from Damien Broderick's book that a lot of psi researchers do understand physics and aren't postulating that psi phenomena use the sort of physical interactions that gravity or radio waves use. There's a story that Einstein was interested in psi research but declared it nonsense when the claimed results showed psi effects that weren't subject to the inverse-square law, so this isn't a new idea.
Damien Broderick's attitude in his book is basically that there's a bunch of anomalous observations and that neither a satisfactory explanation nor, in his opinion, a refutation for them exists. Goertzel's attitude is to come up with a highly speculative physical theory that could explain that kind of phenomenon, and which would take a bit more than "would need extra particles" to show as nonsense.
"Not understanding basic physics" doesn't really seem to cut it in either case. "It's been looked into by lots of people, a few of them very smart, for 80 years, and nothing conclusive has come out of it, so most likely there isn't anything in it, and if you still want to have a go, you better start with something the smart people in 1970s didn't have" is basically the one I've got.
I'm not holding my breath over the recent Bem results, since he seems to be doing pretty much the same stuff that was done in the 70s and always ended up failing one way or the other, but I'm still waiting for someone more physics-literate to have a go at Goertzel's pilot wave paper.
This isn't someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a position similar to one held by the uneducated is to rule out contrarian thought before any debate occurs. Humans are still confused enough about the world that there is room for change in our current understanding of physics. There are some pretty compelling results in parapsychology, much or all of which may be due to publication bias, methodological issues, or fraud. But that isn't obviously the case, and waving our hands and throwing out these words isn't an explanation of the results. I'm going to try to make a post on this subject a priority now.
If someone is unable to examine the available evidence and come to a sane conclusion on a particular topic, this makes it less likely that they are able to examine the available evidence and come to sane conclusions on other topics.
I don't take Goertzel seriously for the same reason I don't take young earth creationists seriously. It's not that I disagree with him, it's that his beliefs have almost no connection to reality.
(If it makes you feel better, I have read some of Goertzel's writing on AGI, and it's stuffed full of magical thinking.)
I'd be interested to hear more about that.
From Ten Years to a Positive Singularity:
and
From The Singularity Institute's Scary Idea (And Why I Don't Buy It):
From Chance and Consciousness:
And pretty much all of On the Algebraic Structure of Consciousness and Evolutionary Quantum Computation.
This is all just from fifteen minutes of looking around his website. I'm amazed anyone takes him seriously.
Oh...
wow.
I think that paper alone proves your point quite nicely.
"The Futility Of Emergence" really annoys me. It's a perfectly useful word. It's a statement about the map rather than about the territory, but it's a useful one. Whereas magic means "unknowable unknowns", emergent means "known unknowns" - the stuff that we know follows, we just don't know how.
E.g., chemistry is an emergent property of the Schrödinger equation, but calculating anything useful from that is barely within our grasp. So we just go with the abstractions we know, and they're separate sciences. But we do know we have that work to do.
Just linking to that essay every time someone you're disagreeing with says "emergent" is difficult to distinguish from applause lights.
From what I've seen, the people who comment here who have read Broderick's book have come away, if not convinced that psi describes some real physical phenomena, at least convinced that the case isn't at all open-and-shut the way young earth creationism is. When an issue is such that smart, sane people can disagree, you have to actually resolve the object-level disagreement before you can use someone's beliefs on the issue in a general argument about their rationality. You can't just assume it as you do here.
Yes, here WrongBot is safe to assume basic physics.
Edit for the sake of technical completeness: And biology.
Goertzel's paper on the subject is about extending the de Broglie-Bohm pilot-wave theory in a way that accounts for psi while being totally consistent with all known physics. Maybe it is nonsense; I haven't read it. But you can't assume it is.