Meetup : San Francisco Meetup: Rationality Diary

0 rocurley 13 October 2016 05:18AM

Discussion article for the meetup : San Francisco Meetup: Rationality Diary

WHEN: 17 October 2016 06:15:00PM (-0700)

WHERE: 1769 15th St, San Francisco, CA

We'll be meeting to tell stories about when we tried to solve a problem in our lives, and how it went.

For help getting into the building, please call (or text, with a likely-somewhat-slower response rate): three zero one, three five six, five four two four.

Format:

We meet and start hanging out at 6:15, but don’t officially start doing the meetup topic until 6:45-7 to accommodate stragglers. Usually there is a food order that goes out before we start the meetup topic.

About these meetups:

The mission of the SF LessWrong meetup is to provide a fun, low-key social space with some structured interaction, where new and non-new community members can mingle and have interesting conversations. Everyone is welcome.

We explicitly encourage people to split off from the main conversation or diverge from the topic if that would be more fun for them (moving side conversations into a separate part of the space if appropriate). Meetup topics are here as a tool to facilitate fun interaction, and we certainly don’t want them to inhibit it.


Meetup : Washington, D.C.: Fun & Games

0 RobinZ 12 October 2016 12:02AM

Discussion article for the meetup : Washington, D.C.: Fun & Games

WHEN: 16 October 2016 03:30:00PM (-0400)

WHERE: Donald W. Reynolds Center for American Art and Portraiture

We will be meeting in the courtyard to hang out, play games, and engage in fun conversation.

Upcoming meetups:

  • Oct. 23: Communication
  • Oct. 30: Halloween Party


Meetup : St. Petersburg Weekly / Biweekly Meetup

0 AvImd 10 October 2016 05:53PM

Discussion article for the meetup : St. Petersburg Weekly / Biweekly Meetup

WHEN: 09 October 2021 06:00:00PM (+0300)

WHERE: St. Petersburg, Russia, ITMO University, Kronverksky Prospekt 49

We meet every week or every other week. Actual date, time and location are announced on VK and Meetup.com. Applied rationality workshop: vk event. Regular meetings: vk community, vk event. Meetup.com page.


Meetup : San Francisco Meetup: Projects

0 rocurley 07 October 2016 05:27AM

Discussion article for the meetup : San Francisco Meetup: Projects

WHEN: 10 October 2016 06:15:00PM (-0700)

WHERE: 1769 15th St, San Francisco, CA

We’ll be meeting to work on projects!

Near the beginning, we’ll go around and talk about what we’ll be working on, then do a couple of pomodoros quietly. At some point we’ll break into general conversations and socializing.

For help getting into the building, please call (or text, with a likely-somewhat-slower response rate): 301-458-0764

Format:

We meet and start hanging out at 6:15, but don’t officially start doing the meetup topic until 6:45-7 to accommodate stragglers. Usually there is a food order that goes out before we start the meetup topic.

About these meetups:

The mission of the SF LessWrong meetup is to provide a fun, low-key social space with some structured interaction, where new and non-new community members can mingle and have interesting conversations. Everyone is welcome.

We explicitly encourage people to split off from the main conversation or diverge from the topic if that would be more fun for them (moving side conversations into a separate part of the space if appropriate). Meetup topics are here as a tool to facilitate fun interaction, and we certainly don’t want them to inhibit it.


Meetup : Washington, D.C.: Games Discussion

0 RobinZ 07 October 2016 01:08AM

Discussion article for the meetup : Washington, D.C.: Games Discussion

WHEN: 09 October 2016 03:30:00PM (-0400)

WHERE: Donald W. Reynolds Center for American Art and Portraiture

We will be meeting in the courtyard to talk about games and game-related topics.

Upcoming meetups:

  • Oct. 16: Fun & Games
  • Oct. 23: Communication


Meetup : Baltimore Area / UMBC Weekly Meetup

0 iarwain1 07 October 2016 12:30AM

Discussion article for the meetup : Baltimore Area / UMBC Weekly Meetup

WHEN: 09 October 2016 08:00:00PM (-0400)

WHERE: Performing Arts and Humanities Bldg Room 456, 1000 Hilltop Cir, Baltimore, MD 21250

Meeting is on 4th floor of the Performing Arts and Humanities Building. Permit parking designations do not apply on weekends, so park pretty much wherever you want.


Meetup : Moscow: rational review, bias busters, Kolmogorov and Jaynes probability

0 berekuk 05 October 2016 07:32PM

Discussion article for the meetup : Moscow: rational review, bias busters, Kolmogorov and Jaynes probability

WHEN: 09 October 2016 02:00:00PM (+0300)

WHERE: Moscow, Bolshaya Dorogomilovskaya St. 5, bldg. 2

Note: most of our members find out about meetups via other channels. Still, the correlation between "found out about Moscow meetups via lesswrong.com" and "is a great fit for our community" is very high, so we're posting a short link to the hackpad document with the schedule here rather than a full English translation of the announcement.

Pad with the details about 09.10.2016 meetup.

We're meeting at the "Kocherga" anticafe, as usual.


Meetup : Stockholm: When to stop making a decision

0 pepe_prime 26 September 2016 05:14PM

Discussion article for the meetup : Stockholm: When to stop making a decision

WHEN: 07 October 2016 03:00:00PM (+0200)

WHERE: Lindstedtsvägen 3, Room 1537

We'll run monthly meetups starting with this one. This talk is the start of a series on decision analysis for personal life. If you want to influence or organize future Stockholm meetups, let me know.

The talk will introduce a bit of notation, but mostly be an informal presentation. After 30 minutes the event will open to audience discussion.


MIRI's 2016 Fundraiser

18 So8res 25 September 2016 04:55PM

Our 2016 fundraiser is underway! Unlike in past years, we'll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31.

Donate Now

Employer matching and pledges to give later this year also count towards the total. Click here to learn more.


 

MIRI is a nonprofit research group based in Berkeley, California. We do foundational research in mathematics and computer science that’s aimed at ensuring that smarter-than-human AI systems have a positive impact on the world. 2016 has been a big year for MIRI, and for the wider field of AI alignment research; our 2016 strategic update in early August reviewed a number of recent developments.

We also published new results in decision theory and logical uncertainty, including “Parametric bounded Löb’s theorem and robust cooperation of bounded agents” and “A formal solution to the grain of truth problem.” For a survey of our research progress and other updates from last year, see our 2015 review. In the last three weeks, there have been three more major developments:

  • We released a new paper, “Logical induction,” describing a method for learning to assign reasonable probabilities to mathematical conjectures and computational facts in a way that outpaces deduction.
  • The Open Philanthropy Project awarded MIRI a one-year $500,000 grant to scale up our research program, with a strong chance of renewal next year.
  • The Open Philanthropy Project is supporting the launch of the new UC Berkeley Center for Human-Compatible AI, headed by Stuart Russell.

Things have been moving fast over the last nine months. If we can replicate last year’s fundraising successes, we’ll be in an excellent position to move forward on our plans to grow our team and scale our research activities.  

The strategic landscape

Humans are far better than other species at altering our environment to suit our preferences. This is primarily due not to our strength or speed, but to our intelligence, broadly construed -- our ability to reason, plan, accumulate scientific knowledge, and invent new technologies. AI is a technology that appears likely to have a uniquely large impact on the world because it has the potential to automate these abilities, and to eventually decisively surpass humans on the relevant cognitive metrics.

Separate from the task of building intelligent computer systems is the task of ensuring that these systems are aligned with our values. Aligning an AI system requires surmounting a number of serious technical challenges, most of which have received relatively little scholarly attention to date. MIRI's role as a nonprofit in this space, from our perspective, is to help solve parts of the problem that are a poor fit for mainstream industry and academic groups.

Our long-term plans are contingent on future developments in the field of AI. Because these developments are highly uncertain, we currently focus mostly on work that we expect to be useful in a wide variety of possible scenarios. The more optimistic scenarios we consider often look something like this:

  • In the short term, a research community coalesces, develops a good in-principle understanding of what the relevant problems are, and produces formal tools for tackling these problems. AI researchers move toward a minimal consensus about best practices, normalizing discussions of AI’s long-term social impact, a risk-conscious security mindset, and work on error tolerance and value specification.
  • In the medium term, researchers build on these foundations and develop a more mature understanding. As we move toward a clearer sense of what smarter-than-human AI systems are likely to look like — something closer to a credible roadmap — we imagine the research community moving toward increased coordination and cooperation in order to discourage race dynamics.
  • In the long term, we would like to see AI-empowered projects (as described by Dewey [2015]) used to avert major AI mishaps. For this purpose, we’d want to solve a weak version of the alignment problem for limited AI systems — systems just capable enough to serve as useful levers for preventing AI accidents and misuse.
  • In the very long term, we can hope to solve the “full” alignment problem for highly capable, highly autonomous AI systems. Ideally, we want to reach a position where we can afford to wait until we reach scientific and institutional maturity -- take our time to dot every i and cross every t before we risk "locking in" design choices.

The above is a vague sketch, and we prioritize research we think would be useful in less optimistic scenarios as well. Additionally, “short term” and “long term” here are relative, and different timeline forecasts can have very different policy implications. Still, the sketch may help clarify the directions we’d like to see the research community move in. For more on our research focus and methodology, see our research page and MIRI’s Approach.  

Our organizational plans

We currently employ seven technical research staff (six research fellows and one assistant research fellow), plus two researchers signed on to join in the coming months and an additional six research associates and research interns.1

Our budget this year is about $1.75M, up from $1.65M in 2015 and $950k in 2014.2 Our eventual goal (subject to revision) is to grow until we have between 13 and 17 technical research staff, at which point our budget would likely be in the $3–4M range. If we reach that point successfully while maintaining a two-year runway, we’re likely to shift out of growth mode.

Our budget estimate for 2017 is roughly $2–2.2M, which means that we’re entering this fundraiser with about 14 months’ runway. We’re uncertain about how many donations we'll receive between November and next September,3 but projecting from current trends, we expect about 4/5ths of our total donations to come from the fundraiser and 1/5th to come in off-fundraiser.4 Based on this, we have the following fundraiser goals:


Basic target - $750,000. We feel good about our ability to execute our growth plans at this funding level. We’ll be able to move forward comfortably, albeit with somewhat more caution than at the higher targets.


Growth target - $1,000,000. This would amount to about half a year’s runway. At this level, we can afford to make more uncertain but high-expected-value bets in our growth plans. There’s a risk that we’ll dip below a year’s runway in 2017 if we make more hires than expected, but the growing support of our donor base would make us feel comfortable about taking such risks.


Stretch target - $1,250,000. At this level, even if we exceed my growth expectations, we’d be able to grow without real risk of dipping below a year’s runway. Past $1.25M we would not expect additional donations to affect our 2017 plans much, assuming moderate off-fundraiser support.5
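As a quick sanity check on the figures above (an illustrative sketch, not anything from the post itself), the $1,000,000 growth target can be converted into months of runway at the stated $2–2.2M 2017 budget:

```python
# Sketch: how many months of runway the $1M growth target buys,
# at each end of the post's $2-2.2M 2017 budget estimate.
growth_target = 1_000_000

for annual_budget in (2.0e6, 2.2e6):
    monthly_burn = annual_budget / 12
    months = growth_target / monthly_burn
    print(f"budget ${annual_budget / 1e6:.1f}M -> "
          f"target covers {months:.1f} months")
```

At either end of the budget range the target buys roughly 5.5–6 months, consistent with the post's description of the growth target as "about half a year's runway."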


If we hit our growth and stretch targets, we’ll be able to execute several additional programs we’re considering with more confidence. These include contracting a larger pool of researchers to do early work with us on logical induction and on our machine learning agenda, and generally spending more time on academic outreach, field-growing, and training or trialing potential collaborators and hires. As always, you're invited to get in touch if you have questions about our upcoming plans and recent activities. I’m very much looking forward to seeing what new milestones the growing alignment research community will hit in the coming year, and I’m very grateful for the thoughtful engagement and support that’s helped us get to this point.  

Donate Now

or

Pledge to Give

 

1 This excludes Katja Grace, who heads the AI Impacts project using a separate pool of funds earmarked for strategy/forecasting research. It also excludes me: I contribute to our technical research, but my primary role is administrative. (back)

2 We expect to be slightly under the $1.825M budget we previously projected for 2016, due to taking on fewer new researchers than expected this year. (back)

3 We're imagining continuing to run one fundraiser per year in future years, possibly in September. (back)

4 Separately, the Open Philanthropy Project is likely to renew our $500,000 grant next year, and we expect to receive the final ($80,000) installment from the Future of Life Institute's three-year grants. For comparison, our revenue was about $1.6 million in 2015: $167k in grants, $960k in fundraiser contributions, and $467k in off-fundraiser (non-grant) contributions. Our situation in 2015 was somewhat different, however: we ran two 2015 fundraisers, whereas we’re skipping our winter fundraiser this year and advising December donors to pledge early or give off-fundraiser. (back)

5 At significantly higher funding levels, we’d consider running other useful programs, such as a prize fund. Shoot me an e-mail if you’d like to talk about the details. (back)

Meetup : LW Melbourne: A Bayesian Guide on How to Read a Scientific Paper

0 Chriswaterguy 24 September 2016 08:57AM

Discussion article for the meetup : LW Melbourne: A Bayesian Guide on How to Read a Scientific Paper

WHEN: 08 October 2016 02:30:00PM (+1000)

WHERE: 247 Flinders Lane, Melbourne

PLEASE NOTE: START TIME IS 3:30 PM. For some reason it's displaying incorrectly.

The next meetup for Less Wrong Melbourne will be on Saturday 8th October 2016. Turn up at 3:30pm to socialise, and we start at 4pm sharp.

THIS WEEK: A Bayesian Guide on how to read a Scientific Paper - with Mark Harrigan.

This talk will provide a guide on how to successfully read and make sense of a published scientific paper in the peer-reviewed literature, regardless of your disciplinary expertise. It includes a Bayesian guide to assessing the probability that the paper's conclusions are credible, and a step-by-step guide to reading, understanding and weighing those conclusions.

It won't make you an instant expert in a field where you don't have much experience, nor will it guarantee that you will spot faulty science - but it will improve your skills in understanding how published science is presented and improve your ability to weigh the credibility of its conclusions.

DINNER AFTERWARDS: At about 6:30pm some of us generally head for dinner in the CBD, and you are welcome to join us.

WHEN: Saturday 8th October, 3:30pm to 6:30pm

WHERE: Ross House, 247 Flinders Lane. Level 1, Room 1. (The room will be noted in a sign on the front door.) If you have any trouble finding the venue or getting in, text or call Chris on 0439 471 632. For the greatest convenience, prompt service and general awesomeness, please arrive around 3:30pm.

WHAT'S THIS ABOUT?: The Less Wrong Melbourne Rationality Dojos are self-improvement sessions for those committed to the art of rationality and personal growth. We welcome new members who are interested in exploring rationality. We're "aspiring rationalists" and always open to learn.

