I think most people on this site (including me and you, private messaging/Dmytry) don't have any particular insight that gives them more information than those who have seriously thought about this for a long time (like Eliezer, Ben Goertzel, Robin Hanson, Holden Karnofsky, Lukeprog, possibly Wei Dai, cousin_it, etc.), so our opinion on "who is right" is not worth much.
I'd much rather see an attempt to cleanly map out where knowledgeable people disagree, rather than polls of what ignorant people like me think.
Similarly, if two senior economists have a public disagreement about international trade and fiscal policy, a poll of a bunch of graduate students on those issues is not going to provide much new information to either economist.
(I don't really know how to phrase this argument cleanly, help and suggestions welcome, I'm just trying to retranscribe my general feeling of "I don't even know enough to answer, and I suspect neither to most people here")
I would phrase it as holding off judgement until we hear further information, i.e. SI's response to this. And in addition to the reasons you give, not deciding who's right ahead of time helps us avoid becoming attached to one side.
The primatologists' intuitions would probably stem from their direct observations of chimps. I would trust their intuitions much less if they were based on long, serious thinking about primates without any observation, which is probably the closer analogy to the positions held in the AI risk debate.
If you're actually interested in the answer to the question you describe yourself as wondering about, you might consider setting up a poll.
Conversely, if you're actually interested in expressing the belief that Holden is essentially correct while phrasing it as a rhetorical question for the usual reasons, then a poll isn't at all necessary.
I agree with HK that at this point SI should not be one of the priority charities supported by GiveWell, mainly due to the lack of demonstrated progress in the stated area of AI risk evaluation. If and when SI publishes peer-reviewed papers containing new insights into the subject matter, clearly demonstrating the dangers of AGI and providing a hard-to-dispute probability estimate of the UFAI takeover within a given time frame, as well as outlining constructive ways to mitigate this risk ("solve the friendliness problem" is too vague), GiveWell should reevaluate its stance.
On the other hand, the soon-to-be-spawned Applied Rationality org will have to be evaluated on its own merits, and is likely to have an easier time meeting GiveWell's requirements, mostly because the relevant metrics (of "raising the sanity waterline") can be made so much more concrete and near-term.
I found HK's analysis largely sound (based on what I could follow, anyway), but it didn't have much of an effect on my donation practices. The following outlines my reasoning for doing what I do.
I have no feasible way to evaluate SIAI's work firsthand. I couldn't do that even if their findings were publicly available, and it's my default policy to reject the idea of donating to anyone whose claims I can't understand. If donating were a purely technical question, and if it came down to nothing but my estimate of SIAI's chances of actually producing groundbreaking research, I wouldn't bet on them being the first to build an AGI, never mind an FAI. (Also, on a more cynical note, if SIAI were simply an elaborate con job instead of a genuine research effort, I honestly wouldn't expect to see much of a difference.)
However, I can accept the core arguments for fast AI and uFAI to such a degree that I think the issue needs addressing, whatever the answer turns out to be. I view the AI risk PR work SIAI does as their most important contribution to date. Even if they never publish anything again, starting today, and even if they never have a line of code to show for anything, I estimate their...
I believe that SI is a valuable organisation and would be pleased if they were to keep their current level of funding.
I believe that withholding funds won't work very well and that they are rational and intelligent enough to sooner or later become aware of their shortcomings and update accordingly.
Do you feel this conflicts with opinions expressed on your blog? If not, why not?
Your question deserves a thoughtful reply, and I don't have the time to give one right now.
Maybe the following snippet from a conversation with Holden can shed some light on what is really a very complicated subject:
I believe that SIAI, even given its shortcomings, is valuable. It makes people think, especially the AI/CS crowd, and sparks debate.
I certainly do not envy you for having to decide if it is a worthwhile charity.
What I am saying is that I wouldn't mind if it kept its current funding. Although if I believed there was even a small chance that they could be building the kind of AI they envision, I would probably actively try to make them lose funding.
My position is probably inconsistent and highly volatile.
Just think about it this way. If you asked me whether I desire a world state where people like Eliezer Yudkowsky are able to think about AI risks, I would say yes. If you asked me why I wouldn't instead allocate the money to protecting poor people against malaria, I can only admit that I don't have a good answer. That is an extremely difficult problem.
As I said, ...
Having only read the headline, I came to this thread with the intention of saying that I agree with much of what he said, up to and potentially including withholding further funds from SI.
But then I read the post and found it asks a different but related question, paraphrased as, "Why doesn't SI just lay down and die now that everyone knows none of their arguments have a basis in reality?" Which I'm inclined to disagree with.
I agree with Holden and additionally it looks like AGI discussions have most of the properties of mindkilling.
These discussions are about policy. They are about policy affecting the medium-to-far future. Such policies cannot be grounded in reliable scientific evidence. Bayesian inquiry depends heavily on priors, and there is nowhere near enough data to tip the scales.
As someone who practices programming and has studied CS, I find Hanson, the AI researchers, and Holden more convincing than Eliezer_Yudkowsky or lukeprog. But this is more prior-based than evidence-based. Nearly all the arguments on either side do is lend structure to the priors you already hold. I cannot judge which side offers more odds-changing data, because the arguments from one side make far more sense to me and I cannot factor out my original prior dissonance with the other side.
The arguments about "optimization done better" don't tell us anything about where the fundamental limits of each kind of optimization lie; with a fixed type of computronium it is not clear that any kind of head start would ensure that a single AI instance beats an instance, less than a week younger, running on 10x the computronium (and partitioning the wo...
relatively recent "So you want to be a Seed AI Programmer" by Eliezer_Yudkowsky [...] maybe it should be either declared obsolete in public
(I believe that document was originally written circa 2002 or 2003, the copy mirrored from the Transhumanist Wiki (which includes comments as recent as 2009) being itself a mirror. "Obsolete" seems accurate.)
Suppose that SI now activates its AGI, unleashing it to reshape the world as it sees fit. What will be the outcome? I believe that the probability of an unfavorable outcome - by which I mean an outcome essentially equivalent to what a UFAI would bring about - exceeds 90% in such a scenario. I believe the goal of designing a "Friendly" utility function is likely to be beyond the abilities even of the best team of humans willing to design such a function. I do not have a tight argument for why I believe this.
My immediate reaction to this was "as opposed to doing what?" This segment seems to argue that SI's work, raising awareness that not all paths to AI are safe and that we should strive to find safer paths towards AI, actually makes it more likely that an undesirable AI / Singularity will be spawned in the future. Can someone explain to me how not discussing such issues and not working on them would be safer?
Just having that bottom line unresolved in Holden's post makes me reluctant to accept the rest of the argument.
Artificial Intelligence dates back to 1960. Fifty years later it has failed so humiliatingly that merely moving the goal posts was not enough; the old, heavy wooden goal posts have been burned and replaced with lightweight, portable aluminium ones, suitable for celebrating such achievements as occur from time to time.
Mainstream researchers have taken that history on board and now sit at their keyboards typing in code to hand-craft individual, focused solutions to each sub-challenge. Driving a car uses drive-a-car vision. Picking a nut and bolt from a component bin has nut-and-bolt vision. There is no generic see-vision. This kind of work cannot go FOOM for deep structural reasons. All the scary AI knowledge, the kind of knowledge the pioneers of the 1960s dreamed of, stays in the brains of the human researchers. The humans write the code. Though they use meta-programming, it is always "well-founded" in the sense that level n writes level n-1, all the way down to level 0. There is no level-n code rewriting level n. That is why it cannot go FOOM.
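As a purely illustrative aside (my own sketch, not part of the original comment), here is a minimal Python toy of what "well-founded" meta-programming looks like, assuming a made-up generator function `make_adder`: level-1 code emits and loads level-0 code, but no code rewrites its own level.

```python
# Hypothetical toy example of "well-founded" meta-programming:
# level 1 writes level 0, and nothing rewrites its own level.

def make_adder(n):
    """Level 1: a hand-written generator that emits level-0 source and loads it."""
    source = f"def add_{n}(x):\n    return x + {n}\n"
    namespace = {}
    exec(source, namespace)       # level 1 writes and loads level 0
    return namespace[f"add_{n}"]

add_3 = make_adder(3)             # a level-0 artifact produced by level 1
print(add_3(10))                  # prints 13

# A FOOM-style loop would require something like make_adder rewriting
# make_adder itself; no such self-referential step exists in this structure.
```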
Importantly, this restraint is enforced by a different kind of self-interest than avoiding existential risk. The...
I agree with Holden about everything.
Edit: Not that I'm complaining, but why is this upvoted? It's rather low on content.
Please use more links. You should link to the post you're referring to, probably to who Holden is, and maybe even to what SI is.
I agree with HK that SIAI is not one of the best charities currently out there. I also agree with him that UFAI is a threat and that getting FAI is very difficult. I do not agree with HK's views on "tools" as opposed to "agents", primarily because I do not understand them fully. However, I am fairly confident that if I did understand them I would disagree. I currently send all my charitable donations to AMF, but am open to starting to support SIAI when I see them publish more (peer-reviewed) material.
I believe SIAI believes it needs to prese...
SI is a very narrowly focused institute. If you don't buy the whole argument, there's very little reason to donate. I'm not sure SI should dissolve; I think they can reform. It's pretty obvious from their output that SI is essentially a machine ethics think tank. The obvious path to reform is greater pluralism and greater relevance to current debate. SI could focus on being the premier machine ethics think tank, get involved in current ethical debates around the uses of AI, develop a more flexible ethical framework, and keep the Friendliness and Intellige...
I generally expect that broad-focus organizations with a lot of resources and multiple constituencies will end up spending a LOT of their resources on internal status struggles. Given what little I've seen about SI's skill and expertise at managing the internal politics of such an arrangement, I would expect the current staff to be promptly displaced by more skillful politicians if they went down this road, and the projects of interest to that staff to end up with even fewer resources than they have now.
I think this has already happened to some extent. Reflective people who have good epistemic habits but who don't get shit done have had their influence over SingInst policy taken away while lots of influence has been granted to people like Luke and Louie who get lots of shit done and who make the organization look a lot prettier but whose epistemic habits are, in my eyes, relatively suspect.
If you don't buy the whole argument, there's very little reason to donate
I disagree, and so apparently do some of SI's major donors.
I entered this post expecting to discuss Holden Caulfield and even pulled my copy off my bookshelf. Another time.
I expect the process of rigorously formalizing strong intuitions in a somewhat adversarial setting (that is, "improving the presentation of the argument") to provide strong evidence about the severity of the problems Holden pointed out.
The problem with that is that a basic rationality exercise is to ask yourself what would make you change your mind. And in fact that's a pretty useful technique. It is useful to check whether something is actually someone's true rejection, but that's distinct from a blanket assumption of disbelief. Frankly, this also worries me, because I try to be clear about what would actually convince me when I'm having a disagreement with someone, and your attitude, if it became widespread, would make that actively counterproductive. It might make more sense to instead look carefully at when people say that sort of thing and see whether they have any history of actually changing their positions when confronted with evidence.
I was wondering - what fraction of people here agree with Holden's advice regarding donations
Prior to reading Holden's article, my last charitable donation had been to a GiveWell-recommended organization working on fighting malaria, and I was tentatively planning on following GiveWell's recommendations for future charitable giving. In that sense, I already agreed with Holden, though I was semi-agnostic about what was actually the best use of my money.
It seemed to me that the payoff from donating to the Singularity Institute was highly uncertain, whe...
Broadly speaking, I agree with Holden, although possibly not with his specific arguments. I'm not convinced that AI will appear in the manner SI postulates, and I have no real reason to believe that they will have an impact on existential probabilities. Similarly, I don't believe that donating to CND helped avert nuclear war.
Given that there are effective charities available which can make an immediate difference to people's lives, I would argue that concerned individuals should donate to those.
It is not just how probable a really powerful AI is that matters; it is that probability TIMES its impact, of course. And this product is just HUGE in the absolute sense, which people tend to forget through the mistaken reasoning "0 times something = 0". The first zero is not zero, and not even very small, so the second zero isn't a zero either.
Therefore I am glad that there is SIAI, after all. At least I find it more important than most of the academia involved in AI. It was this academia that perhaps failed at AI research in the past decades. Not the SIAI, not the IBM an...
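A tiny numerical sketch of the expected-value point above (the numbers are entirely hypothetical, chosen only to illustrate the "probability times impact" argument):

```python
# Hypothetical figures, for illustration only: a modest probability multiplied
# by an enormous impact yields a product that is nowhere near zero.
p_powerful_ai = 0.05          # assumed probability; not a figure from the comment
impact = 10**9                # assumed stand-in for "huge" impact (arbitrary units)
expected_impact = p_powerful_ai * impact
print(expected_impact)        # 50000000.0: the "0 times something = 0" intuition fails
```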
I was wondering - what fraction of people here agree with Holden's advice regarding donations, and with his arguments? What fraction assumes there is a good chance he is essentially correct? What fraction finds it necessary to determine whether Holden is essentially correct in his assessment before working on counter-argumentation, acknowledging that such an investigation could result in the dissolution or suspension of SI?
It would seem to me, from the response, that the chosen course of action is to try to improve the presentation of the argument, rather than to try to verify the truth values of the assertions (with a non-negligible likelihood of the assertions being found false instead). This strikes me as a very odd stance.
Ultimately: why does SI seem certain that it has badly presented some valid reasoning, rather than having tried to present some invalid reasoning?
edit: I am interested in knowing why people agree or disagree with Holden, and what likelihood they give to his being essentially correct, rather than a number or a ratio (which would be subject to selection bias).