
Linkposts now live!

26 Vaniver 28 September 2016 03:13PM

 

You can now submit links to LW! As the rationality community has grown up, more and more content has moved off LW to other places, and so rather than trying to generate more content here we'll instead try to collect more content here. My hope is that Less Wrong becomes something like "the Rationalist RSS," where people can discover what's new and interesting without necessarily being plugged in to the various diaspora communities.

Some general norms, subject to change:

 

  1. It's okay to link someone else's work, unless they specifically ask you not to. It's also okay to link your own work; if you want to get LW karma for things you make off-site, drop a link here as soon as you publish it.
  2. It's okay to link old stuff, but let's try to keep it to less than 5 old posts a day. The first link that I made is to Yudkowsky's Guide to Writing Intelligent Characters.
  3. It's okay to link to something that you think rationalists will be interested in, even if it's not directly related to rationality. If it's political, think long and hard before deciding to submit that link.
  4. It's not okay to post duplicates.

As before, everything will go into Discussion. Tag your links, please. As we see what sort of things people are linking, we'll figure out how to divide things up, whether that's separate subreddits or using tags to promote or demote the attention level of links and posts.

(Thanks to James Lamine for doing the coding, and to Trike (and myself) for supporting the work.)

MIRI's 2016 Fundraiser

18 So8res 25 September 2016 04:55PM

Our 2016 fundraiser is underway! Unlike in past years, we'll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31. Our progress so far (updated live):  

 


Donate Now

Employer matching and pledges to give later this year also count towards the total. Click here to learn more.


 

MIRI is a nonprofit research group based in Berkeley, California. We do foundational research in mathematics and computer science that’s aimed at ensuring that smarter-than-human AI systems have a positive impact on the world. 2016 has been a big year for MIRI, and for the wider field of AI alignment research. Our 2016 strategic update in early August reviewed a number of recent developments.

We also published new results in decision theory and logical uncertainty, including “Parametric bounded Löb’s theorem and robust cooperation of bounded agents” and “A formal solution to the grain of truth problem.” For a survey of our research progress and other updates from last year, see our 2015 review. In the last three weeks, there have been three more major developments:

  • We released a new paper, “Logical induction,” describing a method for learning to assign reasonable probabilities to mathematical conjectures and computational facts in a way that outpaces deduction. (A rough statement of the paper’s core criterion follows this list.)
  • The Open Philanthropy Project awarded MIRI a one-year $500,000 grant to scale up our research program, with a strong chance of renewal next year.
  • The Open Philanthropy Project is supporting the launch of the new UC Berkeley Center for Human-Compatible AI, headed by Stuart Russell.
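
As promised above, here is a rough statement of the logical induction criterion. This is a paraphrase rather than the paper’s exact wording; see the paper for the precise definitions of “trader” and “exploits”:

    % Rough paraphrase of the logical induction criterion. A "trader" buys and
    % sells shares in logical sentences at the market's current prices, and it
    % "exploits" the market if the value of its holdings over time is unbounded
    % above but bounded below (unbounded upside at bounded risk).
    \[
      \overline{\mathbb{P}} = (\mathbb{P}_1, \mathbb{P}_2, \ldots)
      \ \text{is a logical inductor} \iff
      \text{no efficiently computable trader exploits}\ \overline{\mathbb{P}}.
    \]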

Things have been moving fast over the last nine months. If we can replicate last year’s fundraising successes, we’ll be in an excellent position to move forward on our plans to grow our team and scale our research activities.  

The strategic landscape

Humans are far better than other species at altering our environment to suit our preferences. This is primarily due not to our strength or speed, but to our intelligence, broadly construed -- our ability to reason, plan, accumulate scientific knowledge, and invent new technologies. AI is a technology that appears likely to have a uniquely large impact on the world because it has the potential to automate these abilities, and to eventually decisively surpass humans on the relevant cognitive metrics.

Separate from the task of building intelligent computer systems is the task of ensuring that these systems are aligned with our values. Aligning an AI system requires surmounting a number of serious technical challenges, most of which have received relatively little scholarly attention to date. MIRI's role as a nonprofit in this space, from our perspective, is to help solve parts of the problem that are a poor fit for mainstream industry and academic groups.

Our long-term plans are contingent on future developments in the field of AI. Because these developments are highly uncertain, we currently focus mostly on work that we expect to be useful in a wide variety of possible scenarios. The more optimistic scenarios we consider often look something like this:

  • In the short term, a research community coalesces, develops a good in-principle understanding of what the relevant problems are, and produces formal tools for tackling these problems. AI researchers move toward a minimal consensus about best practices, normalizing discussions of AI’s long-term social impact, a risk-conscious security mindset, and work on error tolerance and value specification.
  • In the medium term, researchers build on these foundations and develop a more mature understanding. As we move toward a clearer sense of what smarter-than-human AI systems are likely to look like — something closer to a credible roadmap — we imagine the research community moving toward increased coordination and cooperation in order to discourage race dynamics.
  • In the long term, we would like to see AI-empowered projects (as described by Dewey [2015]) used to avert major AI mishaps. For this purpose, we’d want to solve a weak version of the alignment problem for limited AI systems — systems just capable enough to serve as useful levers for preventing AI accidents and misuse.
  • In the very long term, we can hope to solve the “full” alignment problem for highly capable, highly autonomous AI systems. Ideally, we want to reach a position where we can afford to wait until we reach scientific and institutional maturity -- take our time to dot every i and cross every t before we risk "locking in" design choices.

The above is a vague sketch, and we prioritize research we think would be useful in less optimistic scenarios as well. Additionally, “short term” and “long term” here are relative, and different timeline forecasts can have very different policy implications. Still, the sketch may help clarify the directions we’d like to see the research community move in. For more on our research focus and methodology, see our research page and MIRI’s Approach.  

Our organizational plans

We currently employ seven technical research staff (six research fellows and one assistant research fellow), plus two researchers signed on to join in the coming months and an additional six research associates and research interns.[1] Our budget this year is about $1.75M, up from $1.65M in 2015 and $950k in 2014.[2] Our eventual goal (subject to revision) is to grow until we have between 13 and 17 technical research staff, at which point our budget would likely be in the $3–4M range. If we reach that point successfully while maintaining a two-year runway, we’re likely to shift out of growth mode.

Our budget estimate for 2017 is roughly $2–2.2M, which means that we’re entering this fundraiser with about 14 months’ runway. We’re uncertain about how many donations we'll receive between November and next September,[3] but projecting from current trends, we expect about 4/5ths of our total donations to come from the fundraiser and 1/5th to come in off-fundraiser.[4] Based on this, we have the following fundraiser goals:


Basic target - $750,000. We feel good about our ability to execute our growth plans at this funding level. We’ll be able to move forward comfortably, albeit with somewhat more caution than at the higher targets.


Growth target - $1,000,000. This would amount to about half a year’s runway. At this level, we can afford to make more uncertain but high-expected-value bets in our growth plans. There’s a risk that we’ll dip below a year’s runway in 2017 if we make more hires than expected, but the growing support of our donor base would make us feel comfortable about taking such risks.


Stretch target - $1,250,000. At this level, even if we exceed my growth expectations, we’d be able to grow without real risk of dipping below a year’s runway. Past $1.25M we would not expect additional donations to affect our 2017 plans much, assuming moderate off-fundraiser support.[5]


If we hit our growth and stretch targets, we’ll be able to execute several additional programs we’re considering with more confidence. These include contracting a larger pool of researchers to do early work with us on logical induction and on our machine learning agenda, and generally spending more time on academic outreach, field-growing, and training or trialing potential collaborators and hires. As always, you're invited to get in touch if you have questions about our upcoming plans and recent activities. I’m very much looking forward to seeing what new milestones the growing alignment research community will hit in the coming year, and I’m very grateful for the thoughtful engagement and support that’s helped us get to this point.  

Donate Now

or

Pledge to Give

 

[1] This excludes Katja Grace, who heads the AI Impacts project using a separate pool of funds earmarked for strategy/forecasting research. It also excludes me: I contribute to our technical research, but my primary role is administrative.

[2] We expect to be slightly under the $1.825M budget we previously projected for 2016, due to taking on fewer new researchers than expected this year.

[3] We're imagining continuing to run one fundraiser per year in future years, possibly in September.

[4] Separately, the Open Philanthropy Project is likely to renew our $500,000 grant next year, and we expect to receive the final ($80,000) installment from the Future of Life Institute's three-year grants. For comparison, our revenue was about $1.6 million in 2015: $167k in grants, $960k in fundraiser contributions, and $467k in off-fundraiser (non-grant) contributions. Our situation in 2015 was somewhat different, however: we ran two 2015 fundraisers, whereas we’re skipping our winter fundraiser this year and advising December donors to pledge early or give off-fundraiser.

[5] At significantly higher funding levels, we’d consider running other useful programs, such as a prize fund. Shoot me an e-mail if you’d like to talk about the details.

Astrobiology III: Why Earth?

17 CellBioGuy 04 October 2016 09:59PM

After many tribulations, my astrobiology bloggery is back up and running using Wordpress rather than Blogger because Blogger is completely unusable these days.  I've taken the opportunity of the move to make better graphs for my old posts. 

"The Solar System: Why Earth?"

https://thegreatatuin.wordpress.com/2016/10/03/the-solar-system-why-earth/

Here, I look at our own solar system and what the presence of only ONE known biosphere, here on Earth, tells us about life, and perhaps more importantly what it does not.  In particular, I explore what aspects of Earth make it special, and I draw a distinction between a big biosphere like Earth's, which has utterly rebuilt the planet's geochemistry, and a smaller biosphere living off smaller amounts of energy, which we would probably never notice elsewhere in our own solar system given the evidence at hand.

Commentary appreciated.

 

 

Previous works:

Space and Time, Part I

https://thegreatatuin.wordpress.com/2016/09/25/space-and-time-part-i

Space and Time, Part II

https://thegreatatuin.wordpress.com/2016/09/25/space-and-time-part-ii

A Child's Petrov Day Speech

15 James_Miller 28 September 2016 02:27AM

30 years ago, the Cold War was raging on. If you don’t know what that is, it was the period from 1947 to 1991 when both the U.S. and Russia had large stockpiles of nuclear weapons and were threatening to use them on each other. The only thing that stopped them from doing so was the knowledge that the other side would have time to react. The U.S. and Russia both had surveillance systems to know if the other country had a nuke in the air headed for them.

On this day, September 26, in 1983, a man named Stanislav Petrov was on duty in the Russian surveillance room when the computer notified him that satellites had detected five nuclear missile launches from the U.S. He was told to pass this information on to his superiors, who would then launch a counter-strike.


He refused to notify anyone of the incident, suspecting it was just an error in the computer system.


No nukes ever hit Russian soil. Later, it was found that the ‘nukes’ were just light bouncing off of clouds, which confused the satellite. Petrov was right, and likely saved all of humanity by stopping the outbreak of nuclear war. However, almost no one has heard of him.

We celebrate men like George Washington and Abraham Lincoln who win wars. These were great men, but the greater men, the men like Petrov who stopped these wars from ever happening - no one has heard of these men.


Let it be known that September 26 is Petrov Day, in honor of the acts of a great man who saved the world, and whose name almost no one has heard.

 

 

 

My 11-year-old son wrote and then read this speech to his sixth-grade class.

MIRI AMA plus updates

10 RobbBB 11 October 2016 11:52PM

MIRI is running an AMA on the Effective Altruism Forum tomorrow (Wednesday, Oct. 12): Ask MIRI Anything. Questions are welcome in the interim!

Nate also recently posted a more detailed version of our 2016 fundraising pitch to the EA Forum. One of the additions is about our first funding target:

We feel reasonably good about our chance of hitting target 1, but it isn't a sure thing; we'll probably need to see support from new donors in order to hit our target, to offset the fact that a few of our regular donors are giving less than usual this year.

The Why MIRI's Approach? section also touches on new topics that we haven't talked about in much detail in the past, but that we plan to write up in blog posts in the future. In particular:

Loosely speaking, we can imagine the space of all smarter-than-human AI systems as an extremely wide and heterogeneous space, in which "alignable AI designs" is a small and narrow target (and "aligned AI designs" smaller and narrower still). I think that the most important thing a marginal alignment researcher can do today is help ensure that the first generally intelligent systems humans design are in the “alignable” region. I think that this is unlikely to happen unless researchers have a fairly principled understanding of how the systems they're developing reason, and how that reasoning connects to the intended objectives.

Most of our work is therefore aimed at seeding the field with ideas that may inspire more AI research in the vicinity of (what we expect to be) alignable AI designs. When the first general reasoning machines are developed, we want the developers to be sampling from a space of designs and techniques that are more understandable and reliable than what’s possible in AI today.

In other news, we've uploaded a new intro talk on our most recent result, "Logical Induction," that goes into more of the technical details than our previous talk.

See also Shtetl-Optimized and n-Category Café for recent discussions of the paper.

[Link] Putanumonit - Convincing people to read the Sequences and wondering about "postrationalists"

10 Jacobian 28 September 2016 04:43PM

[Recommendation] Steven Universe & cryonics

8 tadrinth 11 October 2016 04:21PM

I've been watching Steven Universe (a children's cartoon on Cartoon Network by Rebecca Sugar) with my fiancee, and it wasn't until I got to Season 3 that I realized there's been a cryonics metaphor running in the background since the very first episode. If you want to introduce your kids to the idea of cryonics, this series seems like a spectacularly good way to do it.

If you don't want any spoilers, just go watch it, then come back.

Otherwise, here's the metaphor I'm seeing, and why it's great:

  • In the very first episode, we find out that the main characters are a group called the Crystal Gems, who fight 'gem monsters'. When they defeat a monster, a gem is left behind, which they lock in a bubble-forcefield and store in their headquarters.

  • One of the Crystal Gems is injured in a training accident, and we find out that their bodies are just projections; each Crystal Gem has a gem located somewhere on their body, which contains their mind. So long as their gem isn't damaged, they can project a new body after some time to recover. So we already have the insight that minds and bodies are separate.

  • This is driven home by a second episode where one of the Crystal Gems has their gem cracked; this is actually dangerous to their mind, not just their body, and is treated as a dire emergency instead of merely an inconvenience.

  • Then we eventually find out that the gem monsters are actually corrupted members of the same species as the Crystal Gems. They are 'bubbled' and stored in the temple in hopes of eventually restoring them to sanity and their previous forms.

  • An attempt is made to cure one of the monsters, which doesn't fully succeed, but at least restores them to sanity. This allows them to remain unbubbled and to be reunited with their old comrades (who are also corrupted). This was the episode where I finally made the connection to cryonics.

  • The Crystal Gems are also revealed to be over 5000 years old, and effectively immortal. They don't make a big deal out of this; for them, this is totally normal.

  • This also implies that they've made no progress in curing the gem monsters in 5000 years, but that doesn't stop them from preserving them anyway.

  • Finally, a secret weapon is revealed which is capable of directly shattering gems (thus killing the target permanently), but the use of it is rejected as unethical.

So, all in all, you have a series where, when someone is hurt or sick in a way that you can't help, you preserve their mind in a safe way until you can figure out a way to help them. Even your worst enemy deserves no less.

 

Also, Steven Universe has an entire episode devoted to mindfulness meditation.  

[Link] Putanumonit - Discarding empathy to save the world

7 Jacobian 06 October 2016 07:03AM

CrowdAnki: comprehensive JSON representation of Anki Decks to facilitate collaboration

7 harcisis 18 September 2016 10:59AM

Hi everyone :). I like Anki, find it quite useful, and use it daily. There is one thing that has constantly annoyed me about it, though: the state of shared decks and of the infrastructure around them.

There are a lot of topics that are of common interest to a large number of people, and there are usually some shared decks available for these topics. The problem is that they are usually decks created by individuals for their own purposes and uploaded to AnkiWeb, so they are often incomplete, of mediocre quality, etc., and they are rarely supported or updated.

And there is no way to collaborate on the creation or improvement of such decks: there is no infrastructure for it, and the format of the decks won't let you use common collaboration infrastructure (e.g. Github). So I've recently been working on a plugin for Anki that allows a full-feature import/export to/from JSON. What I mean by full-feature is that it exports not just cards converted to JSON, but notes, decks, models, media, etc. So you can export, modify the result or merge changes from someone else, and on import those changes will be reflected in your existing cards/decks, with no information/metadata lost.

The point is to provide a format that enables collaboration using the common collaboration infrastructure mentioned above. Using it, you can easily work with multiple people to create a deck, collaborating, for example, via Github, and the deck can then be updated and improved by contributions from other people.
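
To make that concrete, here is a minimal sketch of the kind of merge a JSON representation enables. The structure and field names below ("notes", "guid", "fields") are invented for illustration, not taken from CrowdAnki's actual schema; the point is just that once each note is a plain JSON object with a stable identifier, two people's versions of a deck can be combined like any other text.

    # Illustrative sketch only: "guid"/"fields" are invented names,
    # not CrowdAnki's actual schema.
    import json

    # Two hypothetical exports of the same deck, edited by different people.
    mine = {"name": "Regular Expressions", "notes": [
        {"guid": "n1", "fields": ["what does . match?", "any character except a newline"]},
        {"guid": "n2", "fields": ["what does * mean?", "zero or more of the preceding token"]},
    ]}
    theirs = {"name": "Regular Expressions", "notes": [
        {"guid": "n2", "fields": ["what does * mean?", "zero or more repetitions of the preceding token"]},
        {"guid": "n3", "fields": ["what does + mean?", "one or more of the preceding token"]},
    ]}

    def merge_decks(a, b):
        """Naive union of notes keyed by a stable per-note id, preferring
        b's version on conflict (in practice git would do the merging)."""
        merged = {n["guid"]: n for n in a["notes"]}
        merged.update({n["guid"]: n for n in b["notes"]})
        return {"name": a["name"],
                "notes": sorted(merged.values(), key=lambda n: n["guid"])}

    print(json.dumps(merge_decks(mine, theirs), indent=2))

In practice you'd let Git do the actual three-way merge on the exported files and then import the result; the function above just shows why a stable per-note id makes that merge well-defined.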

I'm looking for early adopters and for feedback :).

The AnkiWeb page for the plugin (that's where you can get it): https://ankiweb.net/shared/info/1788670778

Github: https://github.com/Stvad/CrowdAnki

Some of my decks on Github (by the way, using the plugin, you can get decks directly from Github):

Git deck: https://github.com/Stvad/Software_Engineering__git

Regular expressions deck: https://github.com/Stvad/Software_Engineering__Regular_Expressions

Deck based on article Twenty rules of formulating knowledge by Piotr Wozniak:

https://github.com/Stvad/Learning__How-to-Formulate-Knowledge

You're welcome to use these decks and to contribute improvements back.

[Link] 80% of data in Chinese clinical trials have been fabricated

6 DanArmak 02 October 2016 07:38AM
