
Recent updates to gwern.net (2015-2016)

28 gwern 26 August 2016 07:22PM

Previously: 2011; 2012-2013; 2013-2014; 2014-2015

"When I was one-and-twenty / I heard a wise man say, / 'Give crowns and pounds and guineas / But not your heart away; / Give pearls away and rubies / But keep your fancy free.' / But I was one-and-twenty, / No use to talk to me."

My past year of completed writings, sorted by topic:

Genetics:

  • Embryo selection for intelligence cost-benefit analysis
    • meta-analysis of intelligence GCTAs, limits set by measurement error, current polygenic scores, possible gains with current IVF procedures, the benefits of selection on multiple complex traits, the possible annual value in the USA of selection & value of larger GWASes, societal consequences of various embryo selection scenarios, embryo count versus polygenic scores as limiting factors, comparison with iterated embryo selection, limits to total gains from iterated embryo selection etc.
  • Wikipedia article on Genome-wide complex trait analysis (GCTA)

AI:

Biology:

Statistics:

Cryptography:

Misc:

gwern.net itself has remained largely stable (some CSS fixes and image size changes); I continue to use Patreon and send out my newsletters.

Linkposts now live!

26 Vaniver 28 September 2016 03:13PM

 

You can now submit links to LW! As the rationality community has grown up, more and more content has moved off LW to other places, and so rather than trying to generate more content here we'll instead try to collect more content here. My hope is that Less Wrong becomes something like "the Rationalist RSS," where people can discover what's new and interesting without necessarily being plugged in to the various diaspora communities.

Some general norms, subject to change:

 

  1. It's okay to link someone else's work, unless they specifically ask you not to. It's also okay to link your own work; if you want to get LW karma for things you make off-site, drop a link here as soon as you publish it.
  2. It's okay to link old stuff, but let's try to keep it to less than 5 old posts a day. The first link that I made is to Yudkowsky's Guide to Writing Intelligent Characters.
  3. It's okay to link to something that you think rationalists will be interested in, even if it's not directly related to rationality. If it's political, think long and hard before deciding to submit that link.
  4. It's not okay to post duplicates.

As before, everything will go into discussion. Tag your links, please. As we see what sort of things people are linking, we'll figure out how we need to divide things up, be it separate subreddits or using tags to promote or demote the attention level of links and posts.

(Thanks to James Lamine for doing the coding, and to Trike (and myself) for supporting the work.)

Now is the time to eliminate mosquitoes

21 James_Miller 06 August 2016 07:10PM

“In 2015, there were roughly 214 million malaria cases and an estimated 438 000 malaria deaths.”  While we don’t know how many humans malaria has killed, an estimate of half of everyone who has ever died isn’t absurd.  Because few people in rich countries get malaria, pharmaceutical companies put relatively few resources into combating it.   

 

The best way to eliminate malaria is probably to use gene drives to completely eradicate the species of mosquitoes that bite humans, but until recently rich countries haven’t been motivated to commit such xenocide.  The Zika virus, which is in mosquitoes in the United States, provides effective altruists with an opportunity to advocate for exterminating all species of mosquitoes that spread disease to humans, because the horrifying and disgusting pictures of babies with Zika might make the American public receptive to our arguments.  A leading short-term goal of effective altruists, I propose, should be advocating for mosquito eradication in the short window before rich people get acclimated to pictures of Zika babies.

 

Personally, I have (unsuccessfully) pitched articles on mosquito eradication to two magazines and (with a bit more success) emailed someone who knows someone who knows someone in the Trump campaign to attempt to get the candidate to come out in favor of mosquito eradication.  What have you done?   Given the enormous harm mosquitoes inflict on mankind, doing just a little (such as writing a blog post) could have a high expected payoff.

 

Deepmind Plans for Rat-Level AI

20 moridinamael 18 August 2016 04:26PM

Demis Hassabis gives a great presentation on the state of Deepmind's work as of April 20, 2016. Skip to 23:12 for the statement of the goal of creating a rat-level AI -- "An AI that can do everything a rat can do," in his words. From his tone, it sounds like this is a short-term rather than a long-term goal.

I don't think Hassabis is prone to making unrealistic plans or stating overly bold predictions. I strongly encourage you to scan through Deepmind's publication list to get a sense of how quickly they're making progress. (In fact, I encourage you to bookmark that page, because it seems like they add a new paper about twice a month.) The outfit seems to be systematically knocking down all the "Holy Grail" milestones on the way to GAI, and this is just Deepmind. The papers they've put out in just the last year or so concern successful one-shot learning, continuous control, actor-critic architectures, novel memory architectures, policy learning, and bootstrapped gradient learning, and these are just the most stand-out achievements. There's even a paper co-authored by Stuart Armstrong concerning Friendliness concepts on that list.

If we really do have a genuinely rat-level AI within the next couple of years, I think that would justify radically moving forward expectations of AI development timetables. Speaking very naively, if we can go from "sub-nematode" to "mammal that can solve puzzles" in that timeframe, I would view it as a form of proof that "general" intelligence does not require some mysterious ingredient that we haven't discovered yet.

MIRI's 2016 Fundraiser

18 So8res 25 September 2016 04:55PM

Our 2016 fundraiser is underway! Unlike in past years, we'll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31. Our progress so far (updated live):  

 


Donate Now

Employer matching and pledges to give later this year also count towards the total. Click here to learn more.


 

MIRI is a nonprofit research group based in Berkeley, California. We do foundational research in mathematics and computer science that’s aimed at ensuring that smarter-than-human AI systems have a positive impact on the world. 2016 has been a big year for MIRI, and for the wider field of AI alignment research. Our 2016 strategic update in early August reviewed a number of recent developments:

We also published new results in decision theory and logical uncertainty, including “Parametric bounded Löb’s theorem and robust cooperation of bounded agents” and “A formal solution to the grain of truth problem.” For a survey of our research progress and other updates from last year, see our 2015 review. In the last three weeks, there have been three more major developments:

  • We released a new paper, “Logical induction,” describing a method for learning to assign reasonable probabilities to mathematical conjectures and computational facts in a way that outpaces deduction.
  • The Open Philanthropy Project awarded MIRI a one-year $500,000 grant to scale up our research program, with a strong chance of renewal next year.
  • The Open Philanthropy Project is supporting the launch of the new UC Berkeley Center for Human-Compatible AI, headed by Stuart Russell.

Things have been moving fast over the last nine months. If we can replicate last year’s fundraising successes, we’ll be in an excellent position to move forward on our plans to grow our team and scale our research activities.  

The strategic landscape

Humans are far better than other species at altering our environment to suit our preferences. This is primarily due not to our strength or speed, but to our intelligence, broadly construed -- our ability to reason, plan, accumulate scientific knowledge, and invent new technologies. AI is a technology that appears likely to have a uniquely large impact on the world because it has the potential to automate these abilities, and to eventually decisively surpass humans on the relevant cognitive metrics.

Separate from the task of building intelligent computer systems is the task of ensuring that these systems are aligned with our values. Aligning an AI system requires surmounting a number of serious technical challenges, most of which have received relatively little scholarly attention to date. MIRI's role as a nonprofit in this space, from our perspective, is to help solve parts of the problem that are a poor fit for mainstream industry and academic groups.

Our long-term plans are contingent on future developments in the field of AI. Because these developments are highly uncertain, we currently focus mostly on work that we expect to be useful in a wide variety of possible scenarios. The more optimistic scenarios we consider often look something like this:

  • In the short term, a research community coalesces, develops a good in-principle understanding of what the relevant problems are, and produces formal tools for tackling these problems. AI researchers move toward a minimal consensus about best practices, normalizing discussions of AI’s long-term social impact, a risk-conscious security mindset, and work on error tolerance and value specification.
  • In the medium term, researchers build on these foundations and develop a more mature understanding. As we move toward a clearer sense of what smarter-than-human AI systems are likely to look like — something closer to a credible roadmap — we imagine the research community moving toward increased coordination and cooperation in order to discourage race dynamics.
  • In the long term, we would like to see AI-empowered projects (as described by Dewey [2015]) used to avert major AI mishaps. For this purpose, we’d want to solve a weak version of the alignment problem for limited AI systems — systems just capable enough to serve as useful levers for preventing AI accidents and misuse.
  • In the very long term, we can hope to solve the “full” alignment problem for highly capable, highly autonomous AI systems. Ideally, we want to reach a position where we can afford to wait until we reach scientific and institutional maturity -- take our time to dot every i and cross every t before we risk "locking in" design choices.

The above is a vague sketch, and we prioritize research we think would be useful in less optimistic scenarios as well. Additionally, “short term” and “long term” here are relative, and different timeline forecasts can have very different policy implications. Still, the sketch may help clarify the directions we’d like to see the research community move in. For more on our research focus and methodology, see our research page and MIRI’s Approach.  

Our organizational plans

We currently employ seven technical research staff (six research fellows and one assistant research fellow), plus two researchers signed on to join in the coming months and an additional six research associates and research interns.1 Our budget this year is about $1.75M, up from $1.65M in 2015 and $950k in 2014.2 Our eventual goal (subject to revision) is to grow until we have between 13 and 17 technical research staff, at which point our budget would likely be in the $3–4M range. If we reach that point successfully while maintaining a two-year runway, we’re likely to shift out of growth mode.

Our budget estimate for 2017 is roughly $2–2.2M, which means that we’re entering this fundraiser with about 14 months’ runway. We’re uncertain about how many donations we'll receive between November and next September,3 but projecting from current trends, we expect about 4/5ths of our total donations to come from the fundraiser and 1/5th to come in off-fundraiser.4 Based on this, we have the following fundraiser goals:


Basic target - $750,000. We feel good about our ability to execute our growth plans at this funding level. We’ll be able to move forward comfortably, albeit with somewhat more caution than at the higher targets.


Growth target - $1,000,000. This would amount to about half a year’s runway. At this level, we can afford to make more uncertain but high-expected-value bets in our growth plans. There’s a risk that we’ll dip below a year’s runway in 2017 if we make more hires than expected, but the growing support of our donor base would make us feel comfortable about taking such risks.


Stretch target - $1,250,000. At this level, even if we exceed my growth expectations, we’d be able to grow without real risk of dipping below a year’s runway. Past $1.25M we would not expect additional donations to affect our 2017 plans much, assuming moderate off-fundraiser support.5
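The runway arithmetic above can be sanity-checked with a back-of-the-envelope calculation. This is a sketch, not MIRI's accounting: the reserve figure is inferred from the stated 14-month runway and the midpoint of the $2–2.2M budget estimate, not a number given in the post.

```python
# Back-of-the-envelope runway arithmetic for the figures quoted above.
# Assumption: "runway" = cash on hand divided by monthly spending.
budget_2017 = 2.1e6                    # midpoint of the $2-2.2M estimate
monthly_burn = budget_2017 / 12        # = $175k/month
implied_reserves = 14 * monthly_burn   # ~14 months' runway -> ~$2.45M on hand

# Months of runway each fundraiser target would add at that burn rate:
for target in (750_000, 1_000_000, 1_250_000):
    print(f"${target:,} adds {target / monthly_burn:.1f} months of runway")
```

At this burn rate the $1M growth target adds roughly 5.7 months, which matches the post's description of it as "about half a year's runway."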


If we hit our growth and stretch targets, we’ll be able to execute several additional programs we’re considering with more confidence. These include contracting a larger pool of researchers to do early work with us on logical induction and on our machine learning agenda, and generally spending more time on academic outreach, field-growing, and training or trialing potential collaborators and hires. As always, you're invited to get in touch if you have questions about our upcoming plans and recent activities. I’m very much looking forward to seeing what new milestones the growing alignment research community will hit in the coming year, and I’m very grateful for the thoughtful engagement and support that’s helped us get to this point.  

Donate Now

or

Pledge to Give

 

1 This excludes Katja Grace, who heads the AI Impacts project using a separate pool of funds earmarked for strategy/forecasting research. It also excludes me: I contribute to our technical research, but my primary role is administrative. (back)

2 We expect to be slightly under the $1.825M budget we previously projected for 2016, due to taking on fewer new researchers than expected this year. (back)

3 We're imagining continuing to run one fundraiser per year in future years, possibly in September. (back)

4 Separately, the Open Philanthropy Project is likely to renew our $500,000 grant next year, and we expect to receive the final ($80,000) installment from the Future of Life Institute's three-year grants. For comparison, our revenue was about $1.6 million in 2015: $167k in grants, $960k in fundraiser contributions, and $467k in off-fundraiser (non-grant) contributions. Our situation in 2015 was somewhat different, however: we ran two 2015 fundraisers, whereas we’re skipping our winter fundraiser this year and advising December donors to pledge early or give off-fundraiser. (back)

5 At significantly higher funding levels, we’d consider running other useful programs, such as a prize fund. Shoot me an e-mail if you’d like to talk about the details. (back)

Astrobiology III: Why Earth?

17 CellBioGuy 04 October 2016 09:59PM

After many tribulations, my astrobiology bloggery is back up and running using Wordpress rather than Blogger because Blogger is completely unusable these days.  I've taken the opportunity of the move to make better graphs for my old posts. 

"The Solar System: Why Earth?"

https://thegreatatuin.wordpress.com/2016/10/03/the-solar-system-why-earth/

Here, I try to look at our own solar system and what the presence of only ONE known biosphere, here on Earth, tells us about life and perhaps more importantly what it does not.  In particular, I explore what aspects of Earth make it special and I make the distinction between a big biosphere here on Earth that has utterly rebuilt the geochemistry and a smaller biosphere living off smaller amounts of energy that we probably would never notice elsewhere in our own solar system given the evidence at hand. 

Commentary appreciated.

 

 

Previous works:

Space and Time, Part I

https://thegreatatuin.wordpress.com/2016/09/25/space-and-time-part-i

Space and Time, Part II

https://thegreatatuin.wordpress.com/2016/09/25/space-and-time-part-ii

The 12 Second Rule (i.e. think before answering) and other Epistemic Norms

17 Raemon 05 September 2016 11:08PM

Epistemic Status/Effort: I'm 85% confident this is a good idea, and that the broader idea is at least a good direction. I've gotten feedback from a few people and spent some time actively thinking through the ramifications. Interested in more feedback.

TLDR:

1) When asking a group a question, e.g. "what do you think about X?", ask people to wait 12 seconds to give each other time to think. If you notice someone else ask a question and people immediately answering, suggest pausing the conversation until people have had some time to think. (Probably specifically mention the "12-second rule" to give people a handy tag to remember.)

2) In general, look for opportunities to improve or share social norms that'll help your community think more clearly, and show appreciation when others do so (i.e. "Epistemic Norms")

(this was originally conceived for the self-described "rationality" community, but I think it's a good idea for any group that'd like to improve its critical thinking as well as creativity.)

There are three reasons the 12-second rule seems important to me:

  • On an individual level, it makes it easier to think of the best answer, rather than going with your cached thought.
  • On the group level, it makes it easier to prevent anchoring/conformity/priming effects.
  • Also on the group level, it means that people who take longer to think of answers get to practice actually thinking for themselves.

If you're using it with people who aren't familiar with it, make sure to briefly summarize what you're doing and why.

Elaboration:

While visiting rationalist friends in SF, I was participating in a small conversation (about six participants) in which someone asked a question. Immediately, one person said "I think Y. Or maybe Z." A couple other people said "Yeah. Y or Z, or... maybe W or V?" But the conversation was already anchored around the initial answers.

I said "hey, shouldn't we stop to each think first?" (this happens to be a thing my friends in NYC do). And I was somewhat surprised that the response was more like "oh, I guess that's a good idea" than "oh yeah whoops I forgot."

It seemed like a fairly obvious social norm for a community that prides itself on rationality, and while the question wasn't *super* important, I think it's helpful to practice this sort of social norm on a day-to-day basis.

This prompted some broader questions - it occurred to me there were likely norms and ideas other people had developed in their local networks that I probably wasn't aware of. Given that there's no central authority on "good epistemic norms", how do we develop them and get them to spread? There's a couple people with popular blogs who sometimes propose new norms which maybe catch on, and some people still sharing good ideas on Less Wrong, effective-altruism.com, or facebook. But it doesn't seem like those ideas necessarily reach saturation.

Atrophied Skills

The first three years I spent in the rationality community, my perception is that my strategic thinking and ability to think through complex problems actually *deteriorated*. It's possible that I was just surrounded by smarter people than me for the first time, but I'm fairly confident that I specifically acquired the habit of "when I need help thinking through a problem, the first step is not to think about it myself, but to ask smart people around me for help."

Eventually I was hired by a startup, and I found myself in a position where the default course for the company was to leave some important value on the table. (I was working at an EA-adjacent company, and wanted to push it in a more Effective Altruism-y direction with higher rigor.) There was nobody else I could turn to for help. I had to think through what "better epistemic rigor" actually meant and how to apply it in this situation.

Whether or not my rationality had atrophied in the past 3 years, I'm certain that for the first time in long while, certain mental muscles *flexed* that I hadn't been using. Ultimately I don't know whether my ideas had a noteworthy effect on the company, but I do know that I felt more empowered and excited to improve my own rationality. 

I realized that, in the NYC meetups, quicker-thinking people tended to say what they thought immediately when a question was asked, and this meant that most of the people in the meetup didn't get to practice thinking through complex questions. So I started asking people to wait for a while before answering - sometimes 5 minutes, sometimes just a few seconds.

"12 seconds" seems like a nice rule-of-thumb to avoid completely interrupting the flow of conversation, while still having some time to reflect, and make sure you're not just shouting out a cached thought. It's a non-standard number which is hopefully easier to remember.

(That said, a more nuanced alternative is "everyone takes a moment to think until they feel like they're hitting diminishing returns on thinking or it's not worth further halting the conversation, and then raising a finger to indicate that they're done")

Meta Point: Observation, Improvement and Sharing

The 12-second rule isn't the main point though - just one of many ways this community could do a better job of helping both newcomers and old-timers hone their thinking skills. "Rationality" is supposed to be our thing. I think we should all be on the lookout for opportunities to improve our collective ability to think clearly. 

I think specific conversational habits are helpful both for their concrete, immediate benefits, as well as an opportunity to remind everyone (newcomers and old-timers alike) that we're trying to actively improve in this area.

I have more thoughts on how to go about improving the meta-issues here, which I'm less confident about and will flesh out in future posts.

A Child's Petrov Day Speech

15 James_Miller 28 September 2016 02:27AM

30 years ago, the Cold War was raging on. If you don’t know what that is, it was the period from 1947 to 1991 where both the U.S and Russia had large stockpiles of nuclear weapons and were threatening to use them on each other. The only thing that stopped them from doing so was the knowledge that the other side would have time to react. The U.S and Russia both had surveillance systems to know if the other country had a nuke in the air headed for them.

On this day, September 26, in 1983, a man named Stanislav Petrov was on duty in the Russian surveillance room when the computer notified him that satellites had detected five nuclear missile launches from the U.S. He was told to pass this information on to his superiors, who would then launch a counter-strike.


He refused to notify anyone of the incident, suspecting it was just an error in the computer system.


No nukes ever hit Russian soil. Later, it was found that the ‘nukes’ were just light bouncing off of clouds which confused the satellite. Petrov was right, and likely saved all of humanity by stopping the outbreak of nuclear war. However, almost no one has heard of him.

We celebrate men like George Washington and Abraham Lincoln who win wars. These were great men, but the greater men, the men like Petrov who stopped these wars from ever happening - no one has heard of these men.


Let it be known, that September 26 is Petrov Day, in honor of the acts of a great man who saved the world, and whose name almost no one has heard.

My 11-year-old son wrote and then read this speech to his sixth-grade class.

Inefficient Games

14 capybaralet 23 August 2016 05:47PM

There are several well-known games in which the Pareto optima and Nash equilibria are disjoint sets. The most famous is probably the prisoner's dilemma. Races to the bottom or tragedies of the commons typically have this feature as well.

I proposed calling these inefficient games. More generally, games where the sets of Pareto optima and Nash equilibria are distinct (but not disjoint), such as a stag hunt, could be called potentially inefficient games.

It seems worthwhile to study (potentially) inefficient games as a class and see what can be discovered about them, but I don't know of any such work (pointers welcome!)
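For small games the disjointness is easy to check mechanically. Below is a minimal sketch (the payoff numbers are the standard textbook prisoner's dilemma values, chosen for illustration) that enumerates the pure-strategy profiles and confirms that the Nash equilibria and Pareto optima don't overlap:

```python
from itertools import product

# Pure-strategy analysis of the prisoner's dilemma (0 = cooperate, 1 = defect).
# payoffs[(row, col)] = (row player's payoff, column player's payoff).
payoffs = {
    (0, 0): (3, 3),  # mutual cooperation
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),  # row defects, column cooperates
    (1, 1): (1, 1),  # mutual defection
}
profiles = list(product([0, 1], repeat=2))

def is_nash(profile):
    # Nash equilibrium: no player can gain by deviating unilaterally.
    r, c = profile
    best_row = max(payoffs[(r2, c)][0] for r2 in (0, 1))
    best_col = max(payoffs[(r, c2)][1] for c2 in (0, 1))
    return payoffs[profile] == (best_row, best_col)

def is_pareto_optimal(profile):
    # Pareto optimal: no other profile makes one player better off
    # without making the other worse off.
    p = payoffs[profile]
    for other in profiles:
        q = payoffs[other]
        if q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1]):
            return False
    return True

nash = {p for p in profiles if is_nash(p)}
pareto = {p for p in profiles if is_pareto_optimal(p)}
print("Nash:", nash)        # only mutual defection
print("Pareto:", pareto)    # everything except mutual defection
print("Disjoint:", nash.isdisjoint(pareto))
```

In a "potentially inefficient" game like the stag hunt, the same check would show the two sets intersecting (stag/stag is both a Nash equilibrium and Pareto optimal) without coinciding.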

The Future of Humanity Institute is hiring!

13 crmflynn 18 August 2016 01:09PM

FHI is accepting applications for a two-year position as a full-time Research Project Manager. Responsibilities will include coordinating, monitoring, and developing FHI’s activities, seeking funding, organizing workshops and conferences, and effectively communicating FHI’s research. The Research Project Manager will also be expected to work in collaboration with Professor Nick Bostrom, and other researchers, to advance their research agendas, and will additionally be expected to produce reports for government, industry, and other relevant organizations.

Applicants will be familiar with existing research and literature in the field and have excellent communication skills, including the ability to write for publication. He or she will have experience of independently managing a research project and of contributing to large policy-relevant reports. Previous professional experience working for non-profit organisations, experience with effective altruism, and a network in the relevant fields associated with existential risk may be an advantage, but are not essential.

To apply please go to https://www.recruit.ox.ac.uk and enter vacancy #124775 (it is also possible to find the job by choosing “Philosophy Faculty” from the department options). The deadline is noon UK time on 29 August. To stay up to date on job opportunities at the Future of Humanity Institute, please sign up for updates on our vacancies newsletter at https://www.fhi.ox.ac.uk/vacancies/.
