Roko comments on (One reason) why capitalism is much maligned - Less Wrong
This seems good to me from the little that I know.
See point 2 of http://blog.givewell.org/2010/05/26/thoughts-on-moonshine-or-the-kids/
In my opinion the overall giving record of the super-rich is appalling, and I strain to find a meaningful sense in which the above statement is true. It's not clear to me that the super-rich demonstrate more psychological capability to spend time and money on the greater good than fathers in Africa do.
According to http://features.blogs.fortune.cnn.com/2010/06/16/gates-buffett-600-billion-dollar-philanthropy-challenge/
"The IRS facts for 2007 show that the 400 biggest taxpayers had a total adjusted income of $138 billion, and just over $11 billion was taken as a charitable deduction, a proportion of about 8%...Is it possible that annual giving misses the bigger picture? One could imagine that the very rich build their net worth during their lifetimes and then put large charitable bequests into their wills. Estate tax data, unfortunately, make hash of that scenario, as 2008 statistics show."
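The "about 8%" figure in the quote follows directly from the two numbers cited. A minimal sketch, using only the figures quoted from the Fortune article, confirms the arithmetic:

```python
# Figures quoted from the Fortune article above (2007 IRS data,
# top 400 taxpayers), in billions of dollars.
adjusted_income = 138.0
charitable_deductions = 11.0

# Charitable deductions as a share of total adjusted income.
share = charitable_deductions / adjusted_income
print(f"charitable deductions as a share of income: {share:.1%}")
```

This prints a share of roughly 8%, matching the proportion stated in the article.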
It should be kept in mind that (a) there are a few very big donors who drag the mean up and (b) much of the money donated by the super-rich is donated for signaling reasons without a view toward maximizing positive impact.
It's not clear that funding SIAI and FHI has positive expected value.
At http://blog.givewell.org/2009/05/07/small-unproven-charities/ Holden Karnofsky points out that
"[Funding a small charity carries a risk that] it succeeds financially but not programmatically – that with your help, it builds a community of donors that connect with it emotionally but don’t hold it accountable for impact. It then goes on to exist for years, even decades, without either making a difference or truly investigating whether it’s making a difference. It eats up money and human capital that could have saved lives in another organization’s hands.
As a donor, you have to consider this a disaster that has no true analogue in the for-profit world. I believe that such a disaster is a very common outcome, judging simply by the large number of charities that go for years without ever even appearing to investigate their impact. I believe you should consider such a disaster to be the default outcome for a new, untested charity, unless you have very strong reasons to believe that this one will be exceptional."
The "saving lives" reference may not be relevant, but the fact remains that by funding SIAI and FHI when these organizations have not demonstrated high levels of accountability, donors to these organizations may systematically increase rather than decrease existential risk.
See Holden's remarks on SIAI at the comment linked under http://blog.givewell.org/2010/06/29/singularity-summit/
Agree with this.
At the same time, I would say that too much inequality may be bad for economic growth. In practice, high inequality seems to give rise to political instability and to interfere with the ability of very bright children born to poor parents to make the most of their talents.
Okay, fine: I currently believe that funding SIAI and FHI has expected value near zero but my belief on this matter is unstable and subject to rapid change with incoming evidence.
As I see it, most of current worth of SIAI is in focusing attention on the problem of FAI, and it doesn't need to produce any actual research on AI to make progress on that goal. The mere presence of this organization allows people like me to (1) recognize the problem of FAI, something you are unlikely to figure out or see as important on your own and (2) see the level of support for the cause, and as a result be more comfortable about seriously devoting time to studying the problem (in particular, extensive discussion by many smart people on Less Wrong and elsewhere gives more confidence that the idea is not a mirage).
Initially, most of the progress in this direction was produced personally by Eliezer, but now SIAI is strong enough to carry on. Publicity causes more people to seriously think about the problem, which will eventually lead to technical progress, if it's possible at all, regardless of whether current SIAI is capable of making that progress.
This makes current SIAI clearly valuable, because whatever is the truth about possible paths towards FAI, it takes a significant effort to explore them, and SIAI calls attention to that task. If SIAI can make progress on the technical problem as well, more power to them. If other people begin to make technical progress, they now have the option of affiliating with SIAI, which might be a significant improvement over personally trying to fight for funding on FAI research.
Not all publicity is good publicity. The majority of people I've met outside of Less Wrong who have heard of SIAI think that the organization is full of crazy people. A lot of these people are smart; some have Ph.D.s in the sciences from top-tier universities.
I think that SIAI should be putting much more emphasis on PR, networking within academia, etc. This is in consonance with a comment by Holden Karnofsky here:
To the extent that your activities will require “beating” other organizations (in advocacy, in speed of innovation, etc.), what are the skills and backgrounds of your staffers that are relevant to their ability to do this?
I'm worried that SIAI's poor ability to make a good public impression may poison the cause of existential risk in the mind of the public and dissuade good researchers from studying existential risk. There are some very smart people whom it would be good to have working on Friendly AI who, despite their capabilities, care a lot about their status in broader society. I think that it's very important that an organization that works toward Friendly AI at least be well regarded by a sizable minority of people in the scientific community.
In my experience, academics often cannot distinguish between SIAI and Kurzweil-related activities such as the Singularity University. With its $25,000 tuition for two months, SU is viewed as some sort of scam, and Kurzweilian ideas of exponential change are seen as naive. People hear about Kurzweil, SU, the Singularity Summit, and the Singularity Institute, and assume that the latter is behind all those crazy singularity things.
We need to make it easier to distinguish the preference and decision theory research program as an attempt to solve a hard problem from the larger cluster of singularity ideas, which, even in the intelligence explosion variety, are not essential.
Agreed. I'm often somewhat embarrassed to mention SIAI's full name, or the Singularity Summit, because of the term "singularity" which, in many people's minds -- to some extent including my own -- is a red flag for "crazy".
Honestly, even the "Artificial Intelligence" part of the name can misrepresent what SIAI is about. I would describe the organization as just "a philosophy institute researching hugely important fundamental questions."
Agreed; I've had similar thoughts. Given recent popular coverage of the various things called "the Singularity", I think we need to accept that it's pretty much going to become a connotational dumping ground for every cool-sounding futuristic prediction that anyone can think of, centered primarily around Kurzweil's predictions.
I disagree somewhat there. Its ultimate goal is still to create a Friendly AI, and all of its other activities (general existential risk reduction and forecasting, Less Wrong, the Singularity Summit, etc.) are, at least in principle, being carried out in service of that goal. Its day-to-day activities may not look like what people might imagine when they think of an AI research institute, but that's because FAI is a very difficult problem with many prerequisites that have to be solved first, and I think it's fair to describe SIAI as still being fundamentally about FAI (at least to anyone who's adequately prepared to think about FAI).
Describing it as "a philosophy institute researching hugely important fundamental questions" may give people the wrong impressions, if it's not quickly followed by more specific explanation. When people think of "philosophy" + "hugely important fundamental questions", their minds will probably leap to questions which are 1) easily solved by rationalists, and/or 2) actually fairly silly and not hugely important at all. ("Philosophy" is another term I'm inclined toward avoiding these days.) When I've had to describe SIAI in one phrase to people who have never heard of it, I've been calling it an "artificial intelligence think-tank". Meanwhile, Michael Vassar's Twitter describes SIAI as a "decision theory think-tank". That's probably a good description if you want to address the current focus of their research; it may be especially good in academic contexts, where "decision theory" already refers to an interesting established field that's relevant to AI but doesn't share with "artificial intelligence" the connotations of missed goals, science fiction geekery, anthropomorphism, etc.
I'm pretty sure usable suggestions for improvement are welcome. About ten years ago there was only the irrational version of Eliezer, who had just recently understood that the problem existed; right now we have some non-crazy introductory and scholarly papers, and a community that understands the problem. The progress seems to be in the right direction.
If you asked the same people about the idea of FAI fifteen years ago, say, they'd label it crazy just the same. SIAI gets labeled automatically, by association with the idea. Perceived craziness is the default we must push the public perception away from, not something initiated by actions of SIAI (you'd need to at least point out specific actions to attempt this argument).
Good point - I will write to SIAI about this matter.
I actually agree that up until this point progress has been in the right direction. I guess my thinking is that SIAI has attracted a community consisting of a very particular kind of person, may have achieved near-saturation within this population, and that consequently SIAI as presently constituted may have outlived the function that you mention. This is the question of room for more funding.
Agree with
There are things that I have in mind but I prefer to contact SIAI about them directly before discussing them in public.
I think there are many people who worry about AI in one form or another. They may not do very informed worrying, and they may be anthropomorphising, but they still worry, and that might be harnessable. See Stephen Hawking on AI.
SIAI's emphasis on the singularity aspect of the possible dangers of AI is unfortunate, as it requires people to get their heads around that concept first. It alienates the people who just worry about a robot uprising, their jobs being stolen, or being outcompeted evolutionarily.
So let's say that instead of SIAI you had IRDAI (Institute to Research the Dangers of AI). It could look at each potential AI and assess the various risks each architecture posed. It could practice on things like feed-forward neural networks and say what types of danger they might pose (job stealing, being rooted and used by a hacker, or going FOOM), based on their information-theoretic ability to learn from different information sources, their security model, and the care being taken to make sure human values are embedded in them. In the process of doing that it would have to develop theories of FAI in order to say whether a system was going to have human-like values stably.
The emphasis placed upon a very hard takeoff just makes it less approachable and look more wacky to the casual observer.
Safe robots have nothing whatsoever to do with FAI. Saying otherwise would be incompetent, or a lie. I believe that there need not be an emphasis of hard takeoff, but likely for reasons not related to yours.
Agreed. My dissertation is on moral robots, and one of the early tasks was examining SIAI and FAI and determining that the work was pretty much unrelated (I presented a pretty bad conference paper on the topic).
Apart from the fact that both need a fair amount of computer science to predict their capabilities and dangers?
Call your research institute something like the Institute for the Prevention of Advanced Computational Threats, and have separate divisions for robotics and FAI. Gain the trust of the average scientist or technology-aware person by doing a good job on robotics, and they are more likely to trust you when it comes to FAI.
I recently shifted to believing that pure mathematics is more relevant for FAI than computer science.
A truly devious plan.
I think that's a clever idea that deserves more eyeballs.
Nothing whatsoever is a bit strong. About as much as preventing tiger attacks and fighting malaria, perhaps?
Saving tigers from killer robots.
This video addresses this question: Anna Salamon's 2nd Talk at Singularity Summit 2009 -- How Much it Matters to Know What Matters: A Back of the Envelope Calculation
It is 15 minutes long, but you can skip to 11m37s.
Edit: added the name of the video; thanks for the remark, Vladimir.
The link above is Anna Salamon's 2nd Talk at Singularity Summit 2009 "How Much it Matters to Know What Matters: A Back of the Envelope Calculation."
(You should give some hint of the content of a link you give, at least the title of the talk.)
Okay, let's try again. My current belief is that at present, donations to SIAI are a less cost effective way of accomplishing good than donating to a charity like VillageReach or StopTB which improves health in the developing world.
My internal reasoning is as follows:
Roughly speaking, the potential upside of donating to SIAI (whatever research SIAI would get done) is outweighed by the potential downside (the fact that SIAI could divert funding away from future existential risk organizations). By way of contrast, I'm reasonably confident that there's some upside to improving health in the developing world (keep in mind that historically, development has been associated with political stability and with getting more smart people into the pool of people thinking about worthwhile things), and giving to accountable, effectiveness-oriented organizations will raise the standard for accountability across the philanthropic world (including existential risk charities).
I wish that there were better donation opportunities than VillageReach and StopTB and I'm moderately optimistic that some will emerge in the near future (e.g. over the next ten years) but I don't see any at the moment.
Good question. I haven't considered this point - thanks for bringing it to my consideration!
• I think that at the margin a highly accountable existential risk charity would definitely be better than a third world charity. I could imagine that if a huge amount of money were being flooded into the study of existential risk, it would be more cost effective to send money to the developing world.
• I'm very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude. I suspect that the current SIAI researchers are not at the high end of this range (by virtue of the fact that the most talented researchers are very rare, very few people are currently thinking about these things, and my belief that the correlation between currently thinking about these things and having talent is weak).
Moreover, I think that if a large community of people who value Friendly AI research emerges, there will be positive network effects that heighten the productivity of the researchers.
For these reasons, I think that the expected value of the research that SIAI is doing is negligible in comparison with the expected value of the publicity that SIAI generates. At the margin, I'm not convinced that SIAI is generating good publicity for the cause of existential risk. I think that SIAI may be generating bad publicity for the cause of existential risk. See my exchange with Vladimir Nesov. Aside from the general issue of it being good to encourage accountability, this is why I don't think that funding SIAI is a good idea right now. But as I said to Vladimir Nesov, I will write to SIAI about this and see what happens.
• I think that the reason that governments are not researching existential risk and artificial intelligence is because (a) the actors involved in governments are shortsighted and (b) the public doesn't demand that governments research these things. It seems quite possible to me that in the future governments will put large amounts of funding into these things.
• Thanks for mentioning the Lifeboat Foundation.
I suspect helping dead states efficiently and sustainably is very difficult, possibly more so than developing FAI as a shortcut. Of course, it's a completely different kind of challenge.
I read "not clear that X has positive expected value" as something like "I'm not sure an observer with perfect knowledge of all relevant information, but not of future outcomes would assign X a positive expected value."
To clarify: no knowledge of things like the state of individual electrons or photons, and therefore no knowledge of future "random" (chaos theory) outcomes. This was one of the possible objections I had considered but decided against addressing in advance; it turns out I should have.
Logical uncertainty is also something you must fight on your own. Just as you can't know what's actually in the world if you haven't seen it, you can't know what logically follows from what you know if you haven't performed the computation.
And that was the other possible objection I had thought of!
I had meant to include that sort of thing in "relevant knowledge", but couldn't think of any good way to phrase it in the 5 seconds I thought about it. I wasn't trying to make any important argument; it was just a throwaway comment.
I don't understand what this refers to. (Objection to what? What objection? In what context did you think of it?)
I commented on the objection that being unsure whether the expected value of something is positive conflicts with the definition of expected value with:
When writing this I thought of two possible objections/comments/requests for clarification/whatever:
That perfect knowledge implies knowledge of future outcomes.
Your logical uncertainty point (though I had no good way to phrase this).
I briefly considered addressing them in advance, but decided against it. Both whatevers were made in fairly rapid succession (though yours apparently not with that comment in mind?), so I definitely should have addressed them.
There is no way that short throwaway comment deserved a seven post comment thread.