

MIRI's 2016 Fundraiser

17 So8res 25 September 2016 04:55PM

Our 2016 fundraiser is underway! Unlike in past years, we'll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31. Our progress so far (updated live):

[Live fundraiser progress bar]
Donate Now

Employer matching and pledges to give later this year also count towards the total. Click here to learn more.


MIRI is a nonprofit research group based in Berkeley, California. We do foundational research in mathematics and computer science aimed at ensuring that smarter-than-human AI systems have a positive impact on the world. 2016 has been a big year for MIRI, and for the wider field of AI alignment research. Our 2016 strategic update in early August reviewed a number of recent developments.

We also published new results in decision theory and logical uncertainty, including “Parametric bounded Löb’s theorem and robust cooperation of bounded agents” and “A formal solution to the grain of truth problem.” For a survey of our research progress and other updates from last year, see our 2015 review. In the last three weeks, there have been three more major developments:

  • We released a new paper, “Logical induction,” describing a method for learning to assign reasonable probabilities to mathematical conjectures and computational facts in a way that outpaces deduction.
  • The Open Philanthropy Project awarded MIRI a one-year $500,000 grant to scale up our research program, with a strong chance of renewal next year.
  • The Open Philanthropy Project is supporting the launch of the new UC Berkeley Center for Human-Compatible AI, headed by Stuart Russell.

Things have been moving fast over the last nine months. If we can replicate last year’s fundraising successes, we’ll be in an excellent position to move forward on our plans to grow our team and scale our research activities.  

The strategic landscape

Humans are far better than other species at altering our environment to suit our preferences. This is primarily due not to our strength or speed, but to our intelligence, broadly construed: our ability to reason, plan, accumulate scientific knowledge, and invent new technologies. AI is a technology that appears likely to have a uniquely large impact on the world because it has the potential to automate these abilities, and to eventually decisively surpass humans on the relevant cognitive metrics.

Separate from the task of building intelligent computer systems is the task of ensuring that these systems are aligned with our values. Aligning an AI system requires surmounting a number of serious technical challenges, most of which have received relatively little scholarly attention to date. MIRI's role as a nonprofit in this space, from our perspective, is to help solve parts of the problem that are a poor fit for mainstream industry and academic groups.

Our long-term plans are contingent on future developments in the field of AI. Because these developments are highly uncertain, we currently focus mostly on work that we expect to be useful in a wide variety of possible scenarios. The more optimistic scenarios we consider often look something like this:

  • In the short term, a research community coalesces, develops a good in-principle understanding of what the relevant problems are, and produces formal tools for tackling these problems. AI researchers move toward a minimal consensus about best practices, normalizing discussions of AI’s long-term social impact, a risk-conscious security mindset, and work on error tolerance and value specification.
  • In the medium term, researchers build on these foundations and develop a more mature understanding. As we move toward a clearer sense of what smarter-than-human AI systems are likely to look like — something closer to a credible roadmap — we imagine the research community moving toward increased coordination and cooperation in order to discourage race dynamics.
  • In the long term, we would like to see AI-empowered projects (as described by Dewey [2015]) used to avert major AI mishaps. For this purpose, we’d want to solve a weak version of the alignment problem for limited AI systems — systems just capable enough to serve as useful levers for preventing AI accidents and misuse.
  • In the very long term, we can hope to solve the “full” alignment problem for highly capable, highly autonomous AI systems. Ideally, we want to reach a position where we can afford to wait until we reach scientific and institutional maturity: take our time to dot every i and cross every t before we risk "locking in" design choices.

The above is a vague sketch, and we prioritize research we think would be useful in less optimistic scenarios as well. Additionally, “short term” and “long term” here are relative, and different timeline forecasts can have very different policy implications. Still, the sketch may help clarify the directions we’d like to see the research community move in. For more on our research focus and methodology, see our research page and MIRI’s Approach.  

Our organizational plans

We currently employ seven technical research staff (six research fellows and one assistant research fellow), plus two researchers signed on to join in the coming months and an additional six research associates and research interns.[1] Our budget this year is about $1.75M, up from $1.65M in 2015 and $950k in 2014.[2] Our eventual goal (subject to revision) is to grow until we have between 13 and 17 technical research staff, at which point our budget would likely be in the $3–4M range. If we reach that point successfully while maintaining a two-year runway, we're likely to shift out of growth mode.

Our budget estimate for 2017 is roughly $2–2.2M, which means that we're entering this fundraiser with about 14 months' runway. We're uncertain about how many donations we'll receive between November and next September,[3] but projecting from current trends, we expect about 4/5ths of our total donations to come from the fundraiser and 1/5th to come in off-fundraiser.[4] Based on this, we have the following fundraiser goals (a sketch of the runway arithmetic follows the targets):


Basic target - $750,000. We feel good about our ability to execute our growth plans at this funding level. We’ll be able to move forward comfortably, albeit with somewhat more caution than at the higher targets.


Growth target - $1,000,000. This would amount to about half a year’s runway. At this level, we can afford to make more uncertain but high-expected-value bets in our growth plans. There’s a risk that we’ll dip below a year’s runway in 2017 if we make more hires than expected, but the growing support of our donor base would make us feel comfortable about taking such risks.


Stretch target - $1,250,000. At this level, even if we exceed my growth expectations, we'd be able to grow without real risk of dipping below a year's runway. Past $1.25M we would not expect additional donations to affect our 2017 plans much, assuming moderate off-fundraiser support.[5]
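To make the runway arithmetic behind these targets easy to check, here is a minimal back-of-the-envelope sketch in Python. The midpoint budget and the reserve figure are assumptions inferred from the numbers above, not figures MIRI has published directly:

```python
# Back-of-the-envelope sketch of the runway arithmetic above.
# All inputs are assumptions inferred from the post, not published figures.

budget_2017 = 2_100_000           # assumed midpoint of the "$2-2.2M" estimate
monthly_burn = budget_2017 / 12   # ~$175k/month

# "About 14 months' runway" then implies reserves of roughly:
implied_reserves = 14 * monthly_burn  # ~$2.45M

# Projected split: ~4/5 of donations during the fundraiser, ~1/5 off-fundraiser.
fundraiser_share = 4 / 5
basic_target = 750_000            # the basic fundraiser target
implied_annual_donations = basic_target / fundraiser_share  # ~$937k if exactly met

print(f"Monthly burn:             ${monthly_burn:,.0f}")
print(f"Implied reserves:         ${implied_reserves:,.0f}")
print(f"Implied annual donations: ${implied_annual_donations:,.0f}")
```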


If we hit our growth and stretch targets, we’ll be able to execute several additional programs we’re considering with more confidence. These include contracting a larger pool of researchers to do early work with us on logical induction and on our machine learning agenda, and generally spending more time on academic outreach, field-growing, and training or trialing potential collaborators and hires. As always, you're invited to get in touch if you have questions about our upcoming plans and recent activities. I’m very much looking forward to seeing what new milestones the growing alignment research community will hit in the coming year, and I’m very grateful for the thoughtful engagement and support that’s helped us get to this point.  

Donate Now

or

Pledge to Give

 

[1] This excludes Katja Grace, who heads the AI Impacts project using a separate pool of funds earmarked for strategy/forecasting research. It also excludes me: I contribute to our technical research, but my primary role is administrative.

[2] We expect to be slightly under the $1.825M budget we previously projected for 2016, due to taking on fewer new researchers than expected this year.

[3] We're imagining continuing to run one fundraiser per year in future years, possibly in September.

[4] Separately, the Open Philanthropy Project is likely to renew our $500,000 grant next year, and we expect to receive the final ($80,000) installment from the Future of Life Institute's three-year grants. For comparison, our revenue was about $1.6 million in 2015: $167k in grants, $960k in fundraiser contributions, and $467k in off-fundraiser (non-grant) contributions. Our situation in 2015 was somewhat different, however: we ran two 2015 fundraisers, whereas we're skipping our winter fundraiser this year and advising December donors to pledge early or give off-fundraiser.

[5] At significantly higher funding levels, we'd consider running other useful programs, such as a prize fund. Shoot me an e-mail if you'd like to talk about the details.

Meetup : Bay Area Winter Solstice 2016

2 ialdabaoth 22 September 2016 10:12PM


WHEN: 17 December 2016 07:00:00PM (-0700)

WHERE: Hillside Club of North Berkeley - 2286 Cedar St, Berkeley, CA 94709

It's time to gather together and remember the true Reasons for the Season: axial tilt, orbital mechanics and other vast-yet-comprehensible forces have converged together to bring another year to a close, and as the days grow shorter and colder we remember how profoundly lucky we are to have been forged by blind, impersonal forces into beings that can understand, and wonder, and appreciate ourselves and each other.

This year's East Bay Rationalist Winter Solstice will be held in the center of Berkeley, bringing 300 rationalists together in a theatre hall for food, songs, speeches, and conversations.

We encourage other Bay denizens who can't make our solstice to put on their own show; even if you do come, we encourage you to try out your own ideas. The East Bay Solstice celebration will be on Saturday, December 17th, in the Anna Head Alumnae Hall in Berkeley.

Acquire tickets here: https://www.eventbrite.com/e/2016-bay-area-winter-solstice-tickets-27853776395

We are coordinating with the Bayesian Choir and with various speakers, as in previous years. An MC and schedule will be posted as details solidify.

Kids are welcome. Vegetarian food will be available. Let us know if you have specific accommodation requests or any questions.


Meetup : Boise, ID Meetup

2 helldalgo 05 July 2016 08:41PM


WHEN: 24 July 2016 02:30:00PM (-0600)

WHERE: Blue Cow Frozen Yogurt 2333 S Apple St, Boise, ID 83706

Idaho exists! This is the first Boise meetup I can find evidence of.

Topic: Introductions, getting to know each other. What brought you here?


Meetup : Welcome Scott Aaronson to Texas

1 Vaniver 25 July 2016 01:27AM


WHEN: 13 August 2016 06:00:00PM (-0500)

WHERE: 4212 Hookbilled Kite Dr Austin, TX 78738

We're having another all-Texas party in Austin. We'll be welcoming Scott Aaronson, who's moved here to teach at UT Austin.

(Previously, we were worried that the time might not work, but now it's been confirmed.)


Meetup : Stockholm: When to stop making a decision

0 pepe_prime 26 September 2016 05:14PM


WHEN: 07 October 2016 03:00:00PM (+0200)

WHERE: Lindstedtsvägen 3, Room 1537

We'll run monthly meetups starting with this one. This talk is the start of a series on decision analysis for personal life. If you want to influence or organize future Stockholm meetups, let me know.

The talk will introduce a bit of notation, but will mostly be an informal presentation. After 30 minutes, the event will open to audience discussion.


Meetup : LW Melbourne: A Bayesian Guide on How to Read a Scientific Paper

0 Chriswaterguy 24 September 2016 08:57AM


WHEN: 08 October 2016 02:30:00PM (+1000)

WHERE: 247 Flinders Lane, Melbourne

PLEASE NOTE: START TIME IS 3:30 PM. For some reason it's displaying incorrectly.

The next meetup for Less Wrong Melbourne will be on Saturday 8th October 2016. Turn up at 3:30pm to socialise, and we start at 4pm sharp.

THIS WEEK: A Bayesian Guide on how to read a Scientific Paper - with Mark Harrigan.

This talk will provide a guide on how to successfully read and make sense of a published scientific paper in the peer-reviewed literature, regardless of your expertise in the discipline. It includes a Bayesian guide to assessing the probability that the paper's conclusions are credible, and a step-by-step guide to reading, understanding, and assessing those conclusions.

It won't make you an instant expert in a field where you don't have much experience, nor will it guarantee that you will spot faulty science - but it will improve your skills in understanding how published science is presented and improve your ability to weigh the credibility of its conclusions.
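The announcement doesn't spell out the talk's method, but as a generic illustration of the kind of Bayesian credence update it refers to, here is a minimal sketch; every number in it is invented for the example:

```python
# Generic Bayes' rule illustration; not the speaker's actual method.
# All numbers below are invented for the example.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Assumed base rate that a surprising published claim is true:
prior = 0.2
# Assumed chance of the reported evidence if the claim is true vs. false:
credence = posterior(prior, p_evidence_if_true=0.8, p_evidence_if_false=0.05)
print(f"Posterior credence in the paper's conclusion: {credence:.2f}")  # ~0.80
```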

DINNER AFTERWARDS: At about 6:30pm some of us generally head for dinner in the CBD, and you are welcome to join us.

WHEN: Saturday 8th October, 3:30pm to 6:30pm

WHERE: Ross House, 247 Flinders Lane. Level 1, Room 1. (The room will be noted in a sign on the front door.) If you have any trouble finding the venue or getting in, text or call Chris on 0439 471 632. For the greatest convenience, prompt service and general awesomeness, please arrive around 3:30pm.

WHAT'S THIS ABOUT?: The Less Wrong Melbourne Rationality Dojos are self-improvement sessions for those committed to the art of rationality and personal growth. We welcome new members who are interested in exploring rationality. We're "aspiring rationalists" and always open to learning.


Meetup : Washington, D.C.: Outdoor Fun & Games

0 RobinZ 23 September 2016 03:11PM


WHEN: 25 September 2016 03:30:00PM (-0400)

WHERE: Meridian Hill Park

We will be going to the park to hang out, play games, and have fun conversation.

Location: Meridian Hill Park is around 16th & W Streets NW; it's a few blocks away from the Columbia Heights or Woodley Park-Zoo Metro stations. We will be meeting near the entrance toward the southwest corner.

Upcoming meetups:

  • Oct. 2: Math Discussion
  • Oct. 9: Games Discussion
  • Oct. 16: Fun & Games


Meetup : Baltimore Area / UMBC Weekly Meetup

0 iarwain1 23 September 2016 01:55PM


WHEN: 25 September 2016 08:00:00PM (-0400)

WHERE: Performing Arts and Humanities Bldg Room 456, 1000 Hilltop Cir, Baltimore, MD 21250

Meeting is on 4th floor of the Performing Arts and Humanities Building. Permit parking designations do not apply on weekends, so park pretty much wherever you want.


Meetup : SF Meetup: Mini Talks

0 maia 22 September 2016 03:23PM


WHEN: 26 September 2016 06:15:00PM (-0700)

WHERE: 1597 Howard St., SF

We’ll be meeting to give and listen to very short talks!

We’ll do 7-minute lightning talks with 3 additional minutes allowed for questions. We’ll also limit the number of programming-related talks to no more than half of all talks, in order to promote variety.

A talk doesn’t have to be formal, planned, or even something that you’d expect someone to Give A Talk About; it can be as simple as telling the group about something you find interesting or cool. In the past, we’ve had people talk about topics like: how complicated the process of organizing fresh food for airplane flights is, their experience volunteering for a local political campaign, a video game they were designing and writing, and many others.

We don't expect any sort of preparation or practice for these kinds of talks. They're very casual and the expectations are low. If your talk isn't great, it's okay because we'll just move on to another one in a few minutes. If it helps, think of it this way: you're just being given the conversational floor for a few minutes, in a slightly more organized way than usual.

For help getting into the building, please call (or text, with a likely-somewhat-slower response rate): three zero one, three five six, five four two four.

Format:

We meet and start hanging out at 6:15, but don’t officially start doing the meetup topic until 6:45-7 to accommodate stragglers. Usually there is a food order that goes out before we start the meetup topic.

About these meetups:

The mission of the SF LessWrong meetup is to provide a fun, low-key social space with some structured interaction, where new and non-new community members can mingle and have interesting conversations. Everyone is welcome.

We explicitly encourage people to split off from the main conversation or diverge from the topic if that would be more fun for them (moving side conversations into a separate part of the space if appropriate). Meetup topics are here as a tool to facilitate fun interaction, and we certainly don’t want them to inhibit it.


Meetup : Games in Kocherga club: FallacyMania, Zendo, Tower of Chaos

0 Alexander230 20 September 2016 07:33PM


WHEN: 28 September 2016 07:40:00PM (+0300)

WHERE: Moscow, B.Dorogomilovskaya, 5-2

Welcome to the Moscow LW community's homemade games night! These games involve some rationality skills, so you can practise while you play!

  • FallacyMania: a game in which you identify logical fallacies in arguments, or practise using fallacies yourself (depending on which team you're on).

Details about the game: http://goo.gl/BtRVhB

  • Zendo: a tabletop game of guessing the rules governing the placement of items on the table.

Game rules: https://goo.gl/RW2Fx7

  • Tower of Chaos: a fun game of guessing the rules governing the placement of people on a Twister mat.

Game rules: https://goo.gl/u9qgc3

Come to the anticafe "Kocherga", ul. B.Dorogomilovskaya, 5-2. The map is here: http://kocherga-club.ru/#contacts . The nearest metro station is Kievskaya. If you get lost, call Sasha at +7-905-527-30-82.

Games begin at 19:40 and last about 3.5 hours.

