Comment author: AlexMennen 14 May 2016 12:09:13AM 1 point [-]

But the most dangerous thing is the creation of many incomparable theories of friendliness, and even of AIs based on them, which would result in AI wars and extinction.

I strongly disagree.

First, there are multiple reasons the creation of many distinct theories of friendliness would not be dangerous. The first AI to reach superintelligence should be able to establish a monopoly on power, and then we wouldn't have to worry about the others. Even if that didn't happen, a reasonable decision theory should be able to cooperate with other agents with different reasonable decision theories when it is in both of their interests to do so. Even if we end up with multiple friendly AIs that are not great at cooperation, cooperating with agents that have similar goals (as is implied by all of them being friendly) is a particularly easy problem. And even if we end up with a "friendly AI" that is incapable of establishing a monopoly on power but would cause a great deal of destruction when another similarly capable but differently designed agent comes into existence, even when both agents have broadly similar goals (I would not call this a successful friendly AI), convincing people not to create such AIs does not get much easier if the people planning to create the AI have not been thinking about how to make it friendly. So preventing people from developing different theories of friendliness still doesn't help.

But beyond all that, I would also say that not creating many incomparable theories of friendliness is dangerous. If there is only one that anyone is working on, it will likely be misguided, and by the time anyone notices, enough time may have been wasted that friendliness will have lost too much ground in the race against general AI.

Comment author: Evan_Gaensbauer 14 May 2016 06:54:19AM 0 points [-]

Just pointing out that I upvoted turchin's comment above, but I agree with your response here to the last part of his comment. Nothing I've read thus far raises concern about warring superintelligences.

Comment author: turchin 13 May 2016 11:11:55AM 1 point [-]

I think that too much investment could result in more noise in the field. First of all, it will result in a large number of published materials, which could exceed the capacity of other researchers to read them. As a result, really interesting works will not be read. It will also attract more people into the field than the number of actually clever and dedicated people that exist. If we have 100 trained AI safety researchers, which is an overestimate, and we hire 1000 people, then the real researchers will be diluted. In some fields, like nanotech, overinvestment has even resulted in the expulsion of the original researchers, because they prevented less educated ones from spending money as they wanted. But the most dangerous thing is the creation of many incomparable theories of friendliness, and even of AIs based on them, which would result in AI wars and extinction.

Comment author: Evan_Gaensbauer 14 May 2016 06:52:06AM 2 points [-]

Yeah, I read Eliezer's chapter "Artificial Intelligence as a Positive and Negative Factor in Global Risk" in Global Catastrophic Risks, and I was impressed with how far in advance he anticipated reactions to the rising popularity of AI safety, and what it might look like when the public finally switched from skepticism to genuine concern. Eliezer also anticipated that even safety-conscious work on AI might increase AI risk.

The idea that some existing institutions in AI safety, perhaps MIRI, should expand much faster than others, so they can keep up with and evaluate all the published material coming out, is a neglected one.

Room For More Funding In AI Safety Is Highly Uncertain

12 Evan_Gaensbauer 12 May 2016 01:57PM

(Crossposted to the Effective Altruism Forum)


Introduction

In effective altruism, people talk about the room for more funding (RFMF) of various organizations. RFMF is simply the maximum amount of money which can be donated to an organization and put to good use right now. In most cases, “right now” refers to the next (fiscal) year. Most of the time when I see the phrase invoked, it’s to talk about individual charities, for example, one of GiveWell’s top-recommended charities. If a charity has run out of room for more funding, it may be typical for effective donors to seek the next best option to donate to.
Last year, the Future of Life Institute (FLI) made the first of its grants from the pool of money it’s received as donations from Elon Musk and the Open Philanthropy Project (Open Phil). Since then, I've heard a few people speculating about how much RFMF the whole AI safety community has in general. I don't think that's a sensible question to ask before we have a sense of what the 'AI safety' field is. Before, people were commenting on only the RFMF of individual charities, and now they’re commenting on entire fields as though they’re well-defined. AI safety hasn’t necessarily reached peak RFMF just because MIRI has a runway to operate for one more year at their current capacity, or because FLI made a limited number of grants this year.

Overview of Current Funding For Some Projects


The starting point I used to think about this issue came from Topher Hallquist, from his post explaining his 2015 donations:

I’m feeling pretty cautious right now about donating to organizations focused on existential risk, especially after Elon Musk’s $10 million donation to the Future of Life Institute. Musk’s donation doesn’t necessarily mean there’s no room for more funding, but it certainly does mean that room for more funding is harder to find than it used to be. Furthermore, it’s difficult to evaluate the effectiveness of efforts in this space, so I think there’s a strong case for waiting to see what comes of this infusion of cash before committing more money.


My friend Andrew and I were discussing this last week. In past years, the Machine Intelligence Research Institute (MIRI) has raised about $1 million (USD) in funds annually, and it received more than that for its annual operations last year. Going into 2016, Nate Soares, Executive Director of MIRI, wrote the following:

Our successful summer fundraiser has helped determine how ambitious we’re making our plans; although we may still slow down or accelerate our growth based on our fundraising performance, our current plans assume a budget of roughly $1,825,000 per year [emphasis not added].


This seems sensible to me, as it's not too much more than what they raised last year, and it seems more, not less, money will be flowing into AI safety in the near future. However, Nate also had plans for how MIRI could've productively spent up to $6 million last year to grow the organization. So, far from believing it had all the funding it could use, MIRI was seeking more. Of course, others might argue MIRI or other AI safety organizations already receive enough funding relative to other priorities, but that is an argument for a different time.

Andrew and I also talked about how, had FLI had enough funding to grant money to all the promising applicants for its 2015 grants in AI safety research, that would have meant millions more flowing into AI safety. What Topher wrote is true: being outside of FLI, and not otherwise being a major donor, it may be exceedingly difficult for individuals to evaluate funding gaps in AI safety. While FLI has only received $11 million to grant in 2015-16 ($6 million already granted in 2015, with $5 million more to be granted in the coming year), it could easily have granted more than twice that much, had it received the money.

To speak to other organizations: Niel Bowerman, Assistant Director at the Future of Humanity Institute (FHI), recently spoke about how FHI receives most of its funding exclusively for research, so bottlenecks like the operations he runs depend more on private donations, of which FHI could use more. Seán Ó hÉigeartaigh, Executive Director at the Centre for the Study of Existential Risk (CSER) at Cambridge University, recently stated in discussion that CSER and the Leverhulme Centre for the Future of Intelligence (CFI), which CSER is currently helping launch, face the same problem with their operations. Nick Bostrom, author of Superintelligence and Director of FHI, is in the course of launching the Strategic Artificial Intelligence Research Centre (SAIRC), which received $1.5 million (USD) in funding from FLI. SAIRC seems well funded for at least the rest of 2016.

 


The Big Picture
Above are the funding summaries for several organizations listed in Andrew Critch’s 2015 map of the existential risk reduction ecosystem. There are organizations working on existential risks other than those from AI, but they aren’t explicitly organized in a network the same way AI safety organizations are. So, in practice, the ‘x-risk ecosystem’ is mappable almost exclusively in terms of AI safety.

It seems to me the 'AI safety field', if defined just as the organizations and projects listed in Dr. Critch’s ecosystem map, and perhaps others closely related (e.g., AI Impacts), could have productively absorbed between $10 million and $25 million in 2016 alone. Of course, there are caveats rendering this a conservative estimate. First of all, the above is a contrived version of the AI safety "field", as there is plenty of research outside of this network popping up all the time. Second, I think the organizations and projects I listed above could've themselves thought of more uses for funding. Seeing as they're working on what is (presumably) the most important problem in the world, there is much that millions more could do for foundational research on the AGI containment/control problem, setting aside safety research into narrow systems.


Too Much Variance in Estimates for RFMF in AI Safety

I've also heard people setting the benchmark for truly appropriate funding for AI safety in the ballpark of a trillion dollars. While in theory that may be true, on its face it currently seems absurd. I'm not saying there won't be a time, even in the next several years, when $1 trillion/year could be used effectively. I'm saying that if there isn't a roadmap for how to increase the productive use of funding from ~$10 million/year to $100 million or $1 billion per year, then talking about $1 trillion/year isn't practical. I don't even think there will be more than $1 billion on the table per year in the near future.

This argument can be used to justify continued earning to give on the part of effective altruists: there is so much money that, e.g., MIRI could use, it makes sense for everyone who isn't an AI researcher to earn to give. This might make sense if governments and universities give major funding to what they think is AI safety, direct 99% of it to robotic unemployment alone or something, miss the boat on the control problem, and MIRI gets a pittance of the money that will flow into the field. Still, the idea that there is effectively something like a multi-trillion dollar ceiling on effective funding for AI safety is unsound.

When estimates of RFMF for AI safety range between $5-10 million (the amount of funding AI safety received in 2015) and $1 trillion, I feel like anyone not already well within the AI safety community cannot reasonably estimate how much money the field can productively use in one year.
On the other hand, there are also people who think that AI safety doesn’t need to be a big priority, or is currently as big a priority as it needs to be, so money spent funding AI safety research and strategy would be better spent elsewhere.

All this stated, I myself don’t have a precise estimate of how much capacity for funding the whole AI safety field will have in, say, 2017.

Reasonable Assumptions Going Forward

What I'm confident saying right now is:

  1. The amount of money AI safety could've productively used in 2016 alone is within an order of magnitude of $10 million, and probably less than $25 million, based on what I currently know.
  2. The amount of total funding available will likely increase year over year for the next several years, and there could be quite dramatic rises. The Open Philanthropy Project, worth $10+ billion (USD), recently announced AI safety will be their top priority next year, although this may not necessarily translate into more major grants in the next 12 months. The White House recently announced they’ll be hosting workshops on the Future of Artificial Intelligence, including concerns over risk. Also, to quote Stuart Russell (HT Luke Muehlhauser): "Industry [has probably invested] more in the last 5 years than governments have invested since the beginning of the field [in the 1950s]." This includes companies like Facebook, Baidu, and Google each investing tons of money into AI research, including Google’s purchase of DeepMind for $500 million in 2014. With an increasing number of universities and corporations investing money and talent into AI research, including AI safety, and now with major philanthropic foundations and governments paying attention to AI safety as well, it seems plausible the amount of funding for AI safety worldwide might balloon up to $100+ million in 2017 or 2018. However, this could just as easily not happen, and there's much uncertainty in projecting this.
  3. The field of AI safety will also grow year over year for the next several years. I doubt projects needing funding will grow as fast as the amount of funding available, because the rate at which institutions are willing to invest in growth depends not only on how much money they're receiving now, but on how much they can expect to receive in the future. Since those expectations can reasonably vary so much, organizations are smart to be conservative and hold their cards close to their chest. While OpenAI has pledged $1 billion for funding AI research in general, and not just safety, over the next couple decades, nobody knows if such funding will be available to organizations out of Oxford or Berkeley like AI Impacts, MIRI, FHI, or CFI. However:

 

  • i) Increased awareness and concern over AI safety will draw in more researchers.
  • ii) The promise or expectation of more money to come may draw in more researchers seeking funding.
  • iii) The expanding field and the increased funding available will create a feedback loop in which institutions in AI safety, such as MIRI, make contingency plans to expand faster if they are able to, or need to.

Why This Matters

I don't mean to use the amount of funding AI safety received in 2015 or 2016 as an anchor to bias how much RFMF I think the field has. However, it seems the more extreme lower and upper estimates I’ve encountered are baseless, and either vastly underestimate or vastly overestimate how much the field of AI safety can productively grow each year. This is actually important to figure out.

80,000 Hours rates AI safety as perhaps the most important and neglected cause currently prioritized by the effective altruism movement. Consequently, 80,000 Hours makes recommendations for how similarly concerned people can work on the issue. Some talented computer scientists who could do their best work in AI safety might opt to earn to give in software engineering or data science if they conclude the bottleneck on AI safety isn’t talent but funding. Alternatively, a small but critical organization which requires funding from value-aligned and consistent donors might fall through the cracks if too many people conclude all AI safety work in general is receiving sufficient funding, and choose to forgo donating to AI safety. Many of us could make individual decisions going either way, but it also seems many of us could end up making the wrong choice. Assessments of these issues will practically inform decisions many of us make over the next few years, determining how much of our time and potential we use fruitfully, or waste.

Everything above just lays out how estimating room for more funding in AI safety overall may be harder than anticipated, and shows how high the variance might be. I invite you to contribute to this discussion, as it is only just starting. Please use the above info as a starting point to look into this more, or ask questions that will usefully clarify what we’re thinking about. The best fora to start further discussion seem to be the Effective Altruism Forum, LessWrong, or the AI Safety Discussion group on Facebook, where I initiated the conversation leading to this post.

Comment author: [deleted] 04 April 2016 06:04:21PM 18 points [-]

Sorry to complain, but I opened the site to see what was going on, and Main has gone to utter crap.

"Is spirituality irrational?" and "3 reasons it's irrational to demand 'rationalism' in social-justice activism" are now heavily-commented recent posts in Main. Meanwhile, "Building Machines That Learn and Think Like People" was published a short while ago, and nothing about it appears on this site.

Looks like this site has slid into the River of Low Domain-Knowledge, Easy-to-Discuss General Stuff, rather than staying up in the nice Forest of Stuff LW Purports to be About.

In response to comment by [deleted] on Open Thread April 4 - April 10, 2016
Comment author: Evan_Gaensbauer 05 April 2016 09:03:34AM 15 points [-]

Context: Main is currently disabled; LessWrong 2.0

LessWrong is actively being redesigned. Until further notice, posts to Main have been disabled. Once the redesign is complete, LW may have multiple subs, none of which might be called 'Main', but one or more of which will be designated as the home for the nice Forest of Classic LW Stuff you're hoping to find here. The only recent posts in Main are meetup posts and the survey, which were promoted there for visibility. Apparently, usage statistics show that for the last several months Discussion has been getting much more attention than Main, so Discussion is where the non-crap is. Of course, there is no longer an explicit division between crap and non-crap of the sort you'd expect the 'Main'/'Discussion' divide to reflect. Try finding other ways to filter out crap, like reading the top posts from the previous week.

Comment author: Brillyant 29 March 2016 09:22:54PM *  4 points [-]

It seems to me, despite talk of change, LW is staying essentially the same... and thereby struggling at an accelerating rate to be a place for useful content.

My current modus operandi for LW is to use the LW favorite I have in place to (1) Check SSC and the other "Rationality Blogs" on the side bar, and then (2) peruse discussion (and sometimes comment) if there isn't a new post at SSC, et al that commands my attention. I wonder if other LWers do the same? I wonder what percentage of LW traffic is "secondary" in a way similar to what I've described?

I like your suggestion because it is a radical change that might work. And it's bad to do nothing if what you are doing seems to be on a trajectory of death.

At some point, during a "how can we make LW better" post on here, I mentioned making LW a de facto "hub" for the rationality blogosphere since it's increasingly not anything else. I'm now re-saying that and seconding your idea. There could still be original content... but there is nowhere close to enough original content coming in right now to justify LW as a standalone site.

Comment author: Evan_Gaensbauer 31 March 2016 10:54:00PM 2 points [-]

My current modus operandi for LW is to use the LW favorite I have in place to (1) Check SSC and the other "Rationality Blogs" on the side bar, and then (2) peruse discussion (and sometimes comment) if there isn't a new post at SSC, et al that commands my attention. I wonder if other LWers do the same? I wonder what percentage of LW traffic is "secondary" in a way similar to what I've described?

As a data point, this is exactly how I've been using LessWrong for at least the last year. One of the reasons I comment more frequently in open threads is that we can have idle conversations like this one as well :P

Comment author: moridinamael 29 March 2016 02:26:30PM 13 points [-]

Would you say there's an implicit norm in LW Discussion of not posting links to private LessWrong diaspora or rationalist-adjacent blogs?

I feel like if I started posting links to every new and/or relevant SSC or Ribbonfarm post as top-level Discussion topics, I would get downvoted pretty bad. But I think using LW Discussion as a sort of LW Diaspora Link Aggregator would be one of the best ways to "save" it.

One of the lessons of the diaspora is that lots of people want to say and discuss sort-of-rationalist-y things or at least discuss mundane or political topics in a sort-of-rationalist-y way. As far as I can tell, in order to actually find what all these rationalist-adjacent people are saying, you would have to read like twenty different blogs.

I personally wouldn't mind a more Hacker News style for LW Discussion, with a heavy focus on links to outside content. Because frankly, we're not generating enough content locally anymore.

I'm essentially just floating this idea for now. If it's positively received, I might take it upon myself to start posting links.

Comment author: Evan_Gaensbauer 31 March 2016 10:52:07PM 3 points [-]

Rob Bensinger published the Library of Scott Alexandria, his summary/"Sequences" of the historically best posts from Scott (according to Rob, that is). Scott seems to pursue or write on topics with a common thread between them in cycles of a few months. This can be observed in the "top posts" section of his blog. Sometimes I forget a blog exists for a few months, so I don't read it, but when I do read diaspora/rationality-adjacent blogs, I consider the reading personally valuable. I'd appreciate LessWrong users sharing pieces from their favourite blogs that they believe would also appeal to many users here. So making a top-level post once in a while that links to several articles from one author, sharing their best recent posts relevant to LessWrong's interests, seems reasonable. I agree that making a top-level post for every single link from a separate blog would be too much, and that this implicit norm should continue to exist.

Comment author: Evan_Gaensbauer 29 March 2016 01:03:21PM 6 points [-]

[LINK]

Slate Star Codex Open Thread

There seems to be some relevant stuff this week:

  • Katie Cohen, a member of the rationality community in the Bay Area, and her daughter are beneficiaries of a fundraiser anonymously hosted by one (or more) of their friends, as they've fallen on some hard times. I don't know them, but Rob Bensinger vouched on social media that he is friends with everyone involved, including the anonymous fundraiser.

  • Seems like there are lots of good links and corrections from the previous links post this week, so check it out if you found yourself reading lots of SSC links this week.

  • Scott is moving back to the Bay Area next year, and is looking for doctors from the area to talk to about setting himself up with a job as a psychiatrist.

Comment author: Huluk 26 March 2016 12:55:37AM *  26 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: Evan_Gaensbauer 26 March 2016 08:50:49AM 42 points [-]

I have taken the survey.

Comment author: Evan_Gaensbauer 16 March 2016 07:31:12AM 7 points [-]

Upvoted for sharing unique experiences for their learning potential. I recall Luke Muehlhauser attended a Toastmasters meetup run by Scientologists several years ago, when he first moved to California. This was unrelated to the article, but as an aside he discouraged other LessWrong users from attending any meeting run by Scientologists just because he did: they are friendly, and they will hack people's System 1s into making them want to come back. Even being enticed to join Scientology is not a worthwhile risk, and in the best case you might just waste your time with them anyway. IIRC, this was after Luke himself had left evangelical Christianity and read the LessWrong Sequences, so I guess he was very confident he wouldn't be pulled in.

It's interesting that you went, but if you were invited by a stranger on a plane to their home, I hardly think you "infiltrated", as opposed to being invited by a Raelian as the first step toward joining them. I'm not saying you'll be fooled into joining, but I caution against going back, as you could at least use the time to find other friendly communities to join, like any number of meetups which aren't cults. It's sad others are in this cult, but it's difficult enough to pull people out that I'm not confident it's worth sticking around to try, even if you think they're good people. When you get back Stateside, or wherever you're from, I figure there are skeptics' associations you can get involved with which do good work on helping people believe less crazy things.

Comment author: Evan_Gaensbauer 16 March 2016 07:22:33AM 2 points [-]

If you have ever wondered how it is possible that a flying saucer cult has more members than EA, now it's time to learn something.

One sentiment from a friend of mine, which I don't completely agree with but believe is worth keeping in mind, is that effective altruism (EA) is about helping others and isn't meant to become a "country club for saints". What does that have to do with Raelianism, or Scientology, or some other cult? Well, they tend to treat their members like saints, and their members aren't effective. I mean, these organizations may be effective by one metric, in that they're able to efficiently funnel capital/wealth (e.g., financial, social, material, sexual, etc.) to their leaders. I'm aware of Raelianism, but I don't know much about it. From what I've read about Scientology, it's able to get quite a lot done. However, it's able to get away with that because it doesn't follow rules, bullies everyone from its detractors to whole governments, and brainwashes people into becoming its menial slaves. The epistemic hygiene in these groups is abysmal.

I think there are many onlookers from LessWrong who are hoping much of effective altruism develops better epistemics than it has now, and who would be utterly aghast if it sold this out, using whatever tools from the dark arts to make gains in raw numbers of self-identified adherents who cannot think or act for themselves. Being someone quite involved in EA, I can tell you that the idea that EA should grow as fast as possible, or that the priority is to make anyone willing to become passionate about it feel as welcome as possible, isn't worth it if the expense is the quality culture of the movement, to the extent it has a quality culture of epistemic hygiene. So, sure, we could learn lessons from UFO cults, but they would be the wrong lessons. Having as many people in EA as possible isn't the most important thing for EA to do.
