
Why CFAR's Mission?

35 AnnaSalamon 02 January 2016 11:23PM

Briefly put, CFAR's mission is to improve the sanity/thinking skill of those who are most likely to actually usefully impact the world.

I'd like to explain what this mission means to me, and why I think a high-quality effort of this sort is essential, possible, and urgent.

I used a Q&A format (with imaginary Q's) to keep things readable; I would also be very glad to Skype 1-on-1 if you'd like something about CFAR to make sense, as would Pete Michaud.  You can schedule a conversation automatically with me or Pete.

---

Q:  Why not focus exclusively on spreading altruism?  Or else on "raising awareness" for some particular known cause?

Briefly put: because historical roads to hell have been powered in part by good intentions; because the contemporary world seems bottlenecked by its ability to figure out what to do and how to do it (i.e. by ideas/creativity/capacity) more than by folks' willingness to sacrifice; and because rationality skill and epistemic hygiene seem like skills that may distinguish actually useful ideas from ineffective or harmful ones in a way that "good intentions" cannot.

Q:  Even given the above -- why focus extra on sanity, or true beliefs?  Why not focus instead on, say, competence/usefulness as the key determinant of how much do-gooding impact a motivated person can have?  (Also, have you ever met a Less Wronger?  I hear they are annoying and have lots of problems with “akrasia”, even while priding themselves on their high “epistemic” skills; and I know lots of people who seem “less rational” than Less Wrongers on some axes who would nevertheless be more useful in many jobs; is this “epistemic rationality” thingy actually the thing we need for this world-impact thingy?...)

This is an interesting one, IMO.

Basically, it seems to me that epistemic rationality, and skills for forming accurate explicit world-models, become more useful the more ambitious and confusing a problem one is tackling.

For example:

continue reading »

Safety engineering, target selection, and alignment theory

17 So8res 31 December 2015 03:43PM

This post is the latest in a series introducing the basic ideas behind MIRI's research program. To contribute, or learn more about what we've been up to recently, see the MIRI fundraiser page. Our 2015 winter funding drive concludes tonight (31 Dec 15) at midnight.


 

Artificial intelligence capabilities research is aimed at making computer systems more intelligent — able to solve a wider range of problems more effectively and efficiently. We can distinguish this from research specifically aimed at making AI systems at various capability levels safer, or more "robust and beneficial." In this post, I distinguish three kinds of direct research that might be thought of as "AI safety" work: safety engineering, target selection, and alignment theory.

Imagine a world where humans somehow developed heavier-than-air flight before developing a firm understanding of calculus or celestial mechanics. In a world like that, what work would be needed in order to safely transport humans to the Moon?

In this case, we can say that the main task at hand is one of engineering a rocket and refining fuel such that the rocket, when launched, accelerates upwards and does not explode. The boundary of space can be compared to the boundary between narrowly intelligent and generally intelligent AI. Both boundaries are fuzzy, but have engineering importance: spacecraft and aircraft have different uses and face different constraints.

Paired with this task of developing rocket capabilities is a safety engineering task. Safety engineering is the art of ensuring that an engineered system provides acceptable levels of safety. When it comes to achieving a soft landing on the Moon, there are many different roles for safety engineering to play. One team of engineers might ensure that the materials used in constructing the rocket are capable of withstanding the stress of a rocket launch with significant margin for error. Another might design escape systems that ensure the humans in the rocket can survive even in the event of failure. Another might design life support systems capable of supporting the crew in dangerous environments.

A separate important task is target selection, i.e., picking where on the Moon to land. In the case of a Moon mission, targeting research might entail things like designing and constructing telescopes (if they didn't exist already) and identifying a landing zone on the Moon. Of course, only so much targeting can be done in advance, and the lunar landing vehicle may need to be designed so that it can alter the landing target at the last minute as new data comes in; this again would require feats of engineering.

Beyond the task of (safely) reaching escape velocity and figuring out where you want to go, there is one more crucial prerequisite for landing on the Moon. This is rocket alignment research, the technical work required to reach the correct final destination. We'll use this as an analogy to illustrate MIRI's research focus, the problem of artificial intelligence alignment.

continue reading »

Why CFAR? The view from 2015

44 PeteMichaud 23 December 2015 10:46PM

Follow-up to: 2013 and 2014.

We are in the middle of our matching fundraiser, so if you’ve been considering donating to CFAR this year, now is an unusually good time.

continue reading »

FHI is hiring researchers!

13 Stuart_Armstrong 23 December 2015 10:46PM

The Future of Humanity Institute at the University of Oxford invites applications for four research positions. We seek outstanding applicants with backgrounds that could include computer science, mathematics, economics, technology policy, and/or philosophy.

continue reading »

MIRI's 2015 Winter Fundraiser!

28 So8res 09 December 2015 07:00PM

MIRI's Winter Fundraising Drive has begun! Our current progress, updated live:

[Fundraiser progress bar]

Donate Now

Like our last fundraiser, this will be a non-matching fundraiser with multiple funding targets our donors can choose between to help shape MIRI’s trajectory. The drive will run until December 31st, and will help support MIRI's research efforts aimed at ensuring that smarter-than-human AI systems have a positive impact.

continue reading »

Why startup founders have mood swings (and why they may have uses)

46 AnnaSalamon 09 December 2015 06:59PM

(This post was written collaboratively with Duncan Sabien.)

 

Startup founders stereotypically experience some pretty serious mood swings.  One day, their product seems destined to be bigger than Google, and the next, it’s a mess of incoherent, unrealistic nonsense that no one in their right mind would ever pay a dime for.  Many of them spend half of their time full of drive and enthusiasm, and the other half crippled by self-doubt, despair, and guilt.  Often this rollercoaster ride goes on for years before the company either finds its feet or goes under.

Well, sure, you might say.  Running a startup is stressful.  Stress comes with mood swings.

But that’s not really an explanation—it’s like saying stuff falls when you let it go.  There’s something about the “launching a startup” situation that induces these kinds of mood swings in many people, including plenty who would otherwise be entirely stable.

 

continue reading »

LessWrong 2.0

89 Vaniver 09 December 2015 06:59PM

Alternate titles: What Comes Next?, LessWrong is Dead, Long Live LessWrong!

You've seen the articles and comments about the decline of LessWrong. Why pay attention to this one? Because this time, I've talked to Nate at MIRI and Matt at Trike Apps about development for LW, and they're willing to make changes and fund them. (I've even found a developer willing to work on the LW codebase.) I've also talked to many of the prominent posters who've left about the decline of LW, and pointed out that the coordination problem could be deliberately solved if everyone decided to come back at once. Everyone that responded expressed displeasure that LW had faded and interest in a coordinated return, and often had some material that they thought they could prepare and have ready.

But before we leap into action, let's review the problem.

continue reading »

Take the EA survey, help the EA movement grow and potentially win $250 to your favorite charity

18 peter_hurford 01 December 2015 01:56AM

This year's EA Survey is now ready to be shared! This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.

If you are an EA or otherwise familiar with the community, we hope you will take it using this link. All results will be anonymised and made publicly available to members of the EA community. As an added bonus, one random survey taker will be selected to win a $250 donation to their favorite charity.

Take the EA Survey

Please share the survey with others who might be interested using this link rather than the one above: http://bit.ly/1OqsVWo

Bay Area Solstice 2015

19 MarieLa 17 November 2015 12:34AM

The winter solstice marks the darkest day of the year, a time to reflect on the past, present, and future. For several years and in many cities, Rationalists, Humanists, and Transhumanists have celebrated the solstice as a community, forming bonds to aid our work in the world.

Last year, more than one hundred people in the Bay Area came together to celebrate the Solstice.  This year, we will carry on the tradition. Join us for an evening of song and story in the candlelight as we follow the triumphs and hardships of humanity. 

The event itself is a community performance. There will be approximately two hours of songs and speeches, and a chance to eat and talk before and after. Death will be discussed. The themes are typically Humanist and Transhumanist, and the audience tends to be people who have found this site interesting or who care a lot about making our future better. There will be mild social pressure to sing along to songs.

 

When: December 12 at 7:00 PM - 9:00 PM

Where: Humanist Hall, 390 27th St, Oakland, CA 94612

Get tickets here. Bitcoin donation address: 1ARz9HYD45Midz9uRCA99YxDVnsuYAVPDk  

Sign up to bring food here

 

Feel free to message me if you'd like to talk about the direction the Solstice is taking, things you like, or things you didn't like. Also, please let me know if you'd like to volunteer.  

Future of Life Institute is hiring

16 Vika 17 November 2015 12:34AM

I am a co-founder of the Future of Life Institute based in Boston, and we are looking to fill two job openings that some LessWrongers might be interested in. We are a mostly volunteer-run organization working to reduce catastrophic and existential risks, and increase the chances of a positive future for humanity. Please consider applying and pass this posting along to anyone you think would be a good fit!

PROJECT COORDINATOR

Technology has given life the opportunity to flourish like never before - or to self-destruct. The Future of Life Institute is a rapidly growing non-profit organization striving for the former outcome. We are fortunate to be supported by an inspiring group of people, including Elon Musk, Jaan Tallinn and Stephen Hawking, and you may have heard of our recent efforts to keep artificial intelligence beneficial.

You are idealistic, hard-working and well-organized, and want to help our core team carry out a broad range of projects, from organizing events to coordinating media outreach. Living in the greater Boston area is a major advantage, but not an absolute requirement.

If you are excited about this opportunity, then please send an email to jobs@futureoflife.org with your CV and a brief statement of why you want to work with us. The title of your email must be 'Project coordinator'.

NEWS WEBSITE EDITOR

There is currently huge public interest in the question of how upcoming technology (especially artificial intelligence) may transform our world, and what should be done to seize opportunities and reduce risks.

You are idealistic and ambitious, and want to lead our effort to transform our fledgling news site into the number one destination for anyone seeking up-to-date and in-depth information on this topic, and anybody eager to join what is emerging as one of the most important conversations of our time.

You love writing and have the know-how and drive needed to grow and promote a website. You are self-motivated and enjoy working independently rather than being closely mentored. You are passionate about this topic, and look forward to the opportunity to engage with our second-to-none global network of experts and use it to generate ideas and add value to the site. You look forward to developing and executing your vision for the website using the resources at your disposal, which include both access to experts and funds for commissioning articles, improving the website user interface, etc. You look forward to making use of these resources and making things happen rather than waiting for others to take the initiative.

If you are excited about this opportunity, then please send an email to jobs@futureoflife.org with your CV and answers to these questions:

  • Briefly, what is your vision for our site? How would you improve it?
  • What other site(s) (please provide URLs) have attributes that you'd like to emulate?
  • How would you generate the required content?
  • How would you increase traffic to the site, and what do you view as realistic traffic goals for January 2016 and January 2017?
  • What budget do you need to succeed, not including your own salary?
  • What past experience do you have with writing and/or website management? Please include a selection of URLs that showcase your work.

The title of your application email must be 'Editor'. You can live anywhere in the world. A science background is a major advantage, but not a strict requirement.
