Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Why startup founders have mood swings (and why they may have uses)

49 AnnaSalamon 09 December 2015 06:59PM

(This post was collaboratively written together with Duncan Sabien.)

 

Startup founders stereotypically experience some pretty serious mood swings.  One day, their product seems destined to be bigger than Google, and the next, it’s a mess of incoherent, unrealistic nonsense that no one in their right mind would ever pay a dime for.  Many of them spend half of their time full of drive and enthusiasm, and the other half crippled by self-doubt, despair, and guilt.  Often this rollercoaster ride goes on for years before the company either finds its feet or goes under.

Well, sure, you might say.  Running a startup is stressful.  Stress comes with mood swings.  

 

But that’s not really an explanation—it’s like saying stuff falls when you let it go.  There’s something about the “launching a startup” situation that induces these kinds of mood swings in many people, including plenty who would otherwise be entirely stable.

 

continue reading »

LessWrong 2.0

90 Vaniver 09 December 2015 06:59PM

Alternate titles: What Comes Next?, LessWrong is Dead, Long Live LessWrong!

You've seen the articles and comments about the decline of LessWrong. Why pay attention to this one? Because this time, I've talked to Nate at MIRI and Matt at Trike Apps about development for LW, and they're willing to make changes and fund them. (I've even found a developer willing to work on the LW codebase.) I've also talked with many of the prominent posters who've left about the decline of LW, and pointed out that the coordination problem could be deliberately solved if everyone decided to come back at once. Everyone who responded expressed displeasure that LW had faded and interest in a coordinated return, and often had some material that they thought they could prepare and have ready.

But before we leap into action, let's review the problem.

continue reading »

Take the EA survey, help the EA movement grow and potentially win $250 for your favorite charity

18 peter_hurford 01 December 2015 01:56AM

This year's EA Survey is now ready to be shared! This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.

If you are an EA or otherwise familiar with the community, we hope you will take it using this link. All results will be anonymised and made publicly available to members of the EA community. As an added bonus, one random survey taker will be selected to win a $250 donation to their favorite charity.

Take the EA Survey

Please share the survey with others who might be interested using this link rather than the one above: http://bit.ly/1OqsVWo

Bay Area Solstice 2015

19 MarieLa 17 November 2015 12:34AM

The winter solstice marks the darkest day of the year, a time to reflect on the past, present, and future. For several years and in many cities, Rationalists, Humanists, and Transhumanists have celebrated the solstice as a community, forming bonds to aid our work in the world.

Last year, more than one hundred people in the Bay Area came together to celebrate the Solstice.  This year, we will carry on the tradition. Join us for an evening of song and story in the candlelight as we follow the triumphs and hardships of humanity. 

The event itself is a community performance. There will be approximately two hours of songs and speeches, and a chance to eat and talk before and after. Death will be discussed. The themes are typically Humanist and Transhumanist, and the audience tends to be people who have found this site interesting or who care a lot about making our future better. There will be mild social pressure to sing along to songs.

 

When: December 12 at 7:00 PM - 9:00 PM

Where: Humanist Hall, 390 27th St, Oakland, CA 94612

Get tickets here. Bitcoin donation address: 1ARz9HYD45Midz9uRCA99YxDVnsuYAVPDk  

Sign up to bring food here

 

Feel free to message me if you'd like to talk about the direction the Solstice is taking, things you like, or things you didn't like. Also, please let me know if you'd like to volunteer.  

Future of Life Institute is hiring

16 Vika 17 November 2015 12:34AM

I am a co-founder of the Future of Life Institute based in Boston, and we are looking to fill two job openings that some LessWrongers might be interested in. We are a mostly volunteer-run organization working to reduce catastrophic and existential risks, and increase the chances of a positive future for humanity. Please consider applying and pass this posting along to anyone you think would be a good fit!

PROJECT COORDINATOR

Technology has given life the opportunity to flourish like never before - or to self-destruct. The Future of Life Institute is a rapidly growing non-profit organization striving for the former outcome. We are fortunate to be supported by an inspiring group of people, including Elon Musk, Jaan Tallinn and Stephen Hawking, and you may have heard of our recent efforts to keep artificial intelligence beneficial.

You are idealistic, hard-working and well-organized, and want to help our core team carry out a broad range of projects, from organizing events to coordinating media outreach. Living in the greater Boston area is a major advantage, but not an absolute requirement.

If you are excited about this opportunity, then please send an email to jobs@futureoflife.org with your CV and a brief statement of why you want to work with us. The title of your email must be 'Project coordinator'.

NEWS WEBSITE EDITOR

There is currently huge public interest in the question of how upcoming technology (especially artificial intelligence) may transform our world, and what should be done to seize opportunities and reduce risks.

You are idealistic and ambitious, and want to lead our effort to transform our fledgling news site into the number one destination for anyone seeking up-to-date and in-depth information on this topic, and anybody eager to join what is emerging as one of the most important conversations of our time.

You love writing and have the know-how and drive needed to grow and promote a website. You are self-motivated and enjoy working independently rather than being closely mentored. You are passionate about this topic, and look forward to engaging with our second-to-none global network of experts to generate ideas and add value to the site. You look forward to developing and executing your vision for the website using the resources at your disposal, which include both access to experts and funds for commissioning articles, improving the user interface, and more; you would rather make things happen than wait for others to take the initiative.

If you are excited about this opportunity, then please send an email to jobs@futureoflife.org with your CV and answers to these questions:

  • Briefly, what is your vision for our site? How would you improve it?
  • What other site(s) (please provide URLs) have attributes that you'd like to emulate?
  • How would you generate the required content?
  • How would you increase traffic to the site, and what do you view as realistic traffic goals for January 2016 and January 2017?
  • What budget do you need to succeed, not including your own salary?
  • What past experience do you have with writing and/or website management? Please include a selection of URLs that showcase your work.

The title of your application email must be 'Editor'. You can live anywhere in the world. A science background is a major advantage, but not a strict requirement.

MIRI's 2015 Summer Fundraiser!

42 So8res 19 August 2015 12:27AM

Our summer fundraising drive is now finished. We raised a grand total of $631,957 from 263 donors. This is an incredible sum, making this the biggest fundraiser we’ve ever run.

We've already been hard at work growing our research team and spinning up new projects, and I’m excited to see what our research team can do this year. Thank you to all our supporters for making our summer fundraising drive so successful!


It's safe to say that this past year exceeded a lot of people's expectations.

Twelve months ago, Nick Bostrom's Superintelligence had just come out. Questions about the long-term risks and benefits of smarter-than-human AI systems were nearly invisible in mainstream discussions of AI's social impact.

Twelve months later, we live in a world where Bill Gates is confused about why so many researchers aren't using Superintelligence as a guide to the questions we should be asking about AI's future as a field.

Following a conference in Puerto Rico that brought together the leading organizations studying long-term AI risk (MIRI, FHI, CSER) and top AI researchers in academia (including Stuart Russell, Tom Mitchell, Bart Selman, and the Presidents of AAAI and IJCAI) and industry (including representatives from Google DeepMind and Vicarious), we've seen Elon Musk donate $10M to a grants program aimed at jump-starting the field of long-term AI safety research; we've seen the top AI and machine learning conferences (AAAI, IJCAI, and NIPS) announce their first-ever workshops or discussions on AI safety and ethics; and we've seen a panel discussion on superintelligence at ITIF, the leading U.S. science and technology think tank. (I presented a paper at the AAAI workshop, I spoke on the ITIF panel, and I'll be at NIPS.)

As researchers begin investigating this area in earnest, MIRI is in an excellent position, with a developed research agenda already in hand. If we can scale up as an organization then we have a unique chance to shape the research priorities and methods of this new paradigm in AI, and direct this momentum in useful directions.

This is a big opportunity. MIRI is already growing and scaling its research activities, but the speed at which we scale in the coming months and years depends heavily on our available funds.

For that reason, MIRI is starting a six-week fundraiser aimed at increasing our rate of growth.

 


Donate Now

 

This time around, rather than running a matching fundraiser with a single fixed donation target, we'll be letting you help choose MIRI's course based on the details of our funding situation and how we would make use of marginal dollars.

In particular, our plans can scale up in very different ways depending on which of these funding targets we are able to hit:

continue reading »

Less Wrong EBook Creator

45 ScottL 13 August 2015 09:17PM

I read a lot on my Kindle, and I noticed that some of the sequences aren't available in book form. Also, the ones that are available mostly contain only the posts; I personally want them to also include some of the high-ranking comments and summaries. That is why I wrote this tool to automatically create books from a set of posts. It creates the book based on the information you give it in an Excel file. The Excel file contains:

Post information

  • Book name
  • Sequence name
  • Title
  • Link
  • Summary description

Sequence information

  • Name
  • Summary

Book information

  • Name
  • Summary

The only compulsory component is the link to the post.
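
For readers curious about the mechanics, below is a minimal sketch of the same Excel-to-EPUB idea in Python, using the openpyxl and ebooklib libraries. This is an illustration rather than the actual tool's code: the column order, file names, and hardcoded book title are assumptions, and the real tool also downloads each post and folds in comments and summaries rather than just linking to them.

```python
# Minimal sketch, not the actual tool: turn a spreadsheet of posts into an
# EPUB. Assumes `pip install openpyxl ebooklib`, a header row, and columns
# ordered as (book name, sequence name, title, link, summary).
from openpyxl import load_workbook
from ebooklib import epub

def build_book(xlsx_path, epub_path):
    sheet = load_workbook(xlsx_path).active
    book = epub.EpubBook()
    book.set_title("LessWrong Collection")  # illustrative; the tool reads this from the file
    chapters = []
    for i, (_book, _seq, title, link, summary) in enumerate(
            sheet.iter_rows(min_row=2, values_only=True), start=1):
        # The real tool fetches and cleans the post HTML; here each chapter
        # just records the title, link, and summary from the spreadsheet.
        body = (f"<h1>{title or link}</h1>"
                f"<p><a href='{link}'>{link}</a></p>"
                f"<p>{summary or ''}</p>")
        chapter = epub.EpubHtml(title=title or link,
                                file_name=f"chapter_{i}.xhtml",
                                content=body)
        book.add_item(chapter)
        chapters.append(chapter)
    book.toc = chapters              # table-of-contents entries
    book.add_item(epub.EpubNcx())    # EPUB2 navigation file
    book.add_item(epub.EpubNav())    # EPUB3 navigation page
    book.spine = ["nav"] + chapters  # reading order
    epub.write_epub(epub_path, book)

build_book("posts.xlsx", "collection.epub")
```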

I have used the tool to create books for Living Luminously, No-Nonsense Metaethics, Rationality: From AI to Zombies, Benito's Guide and more. You can see them in the examples folder in this GitHub link. The tool just creates EPUB books; you can use Calibre or a similar tool to convert them to another format.

continue reading »

MIRI's Approach

34 So8res 30 July 2015 08:03PM

MIRI's summer fundraiser is ongoing. In the meantime, we're writing a number of blog posts to explain what we're doing and why, and to answer a number of common questions. This post is one I've been wanting to write for a long time; I hope you all enjoy it. For earlier posts in the series, see the bottom of the above link.


MIRI’s mission is “to ensure that the creation of smarter-than-human artificial intelligence has a positive impact.” How can we ensure any such thing? It’s a daunting task, especially given that we don’t have any smarter-than-human machines to work with at the moment. In a previous post to the MIRI Blog I discussed four background claims that motivate our mission; in this post I will describe our approach to addressing the challenge.

This challenge is sizeable, and we can only tackle a portion of the problem. For this reason, we specialize. Our two biggest specializing assumptions are as follows:

1. We focus on scenarios where smarter-than-human machine intelligence is first created in de novo software systems (as opposed to, say, brain emulations). This is in part because it seems difficult to get all the way to brain emulation before someone reverse-engineers the algorithms used by the brain and uses them in a software system, and in part because we expect that any highly reliable AI system will need to have at least some components built from the ground up for safety and transparency. Nevertheless, it is quite plausible that early superintelligent systems will not be human-designed software, and I strongly endorse research programs that focus on reducing risks along the other pathways.

2. We specialize almost entirely in technical research. We select our researchers for their proficiency in mathematics and computer science, rather than forecasting expertise or political acumen. I stress that this is only one part of the puzzle: figuring out how to build the right system is useless if the right system does not in fact get built, and ensuring AI has a positive impact is not simply a technical problem. It is also a global coordination problem, in the face of short-term incentives to cut corners. Addressing these non-technical challenges is an important task that we do not focus on.

In short, MIRI does technical research to ensure that de novo AI software systems will have a positive impact. We do not further discriminate between different types of AI software systems, nor do we make strong claims about exactly how quickly we expect AI systems to attain superintelligence. Rather, our current approach is to select open problems using the following question:

What would we still be unable to solve, even if the challenge were far simpler?

For example, we might study AI alignment problems that we could not solve even if we had lots of computing power and very simple goals.

We then filter on problems that are (1) tractable, in the sense that we can do productive mathematical research on them today; (2) uncrowded, in the sense that the problems are not likely to be addressed during normal capabilities research; and (3) critical, in the sense that they could not be safely delegated to a machine unless we had first solved them ourselves.

These three filters are usually uncontroversial. The controversial claim here is that the above question — “what would we be unable to solve, even if the challenge were simpler?” — is a generator of open technical problems for which solutions will help us design safer and more reliable AI software in the future, regardless of their architecture. The rest of this post is dedicated to justifying this claim, and describing the reasoning behind it.

continue reading »

Why you should attend EA Global and (some) other conferences

19 Habryka 16 July 2015 04:50AM

Many of you know about Effective Altruism and the associated community. It has a very significant overlap with LessWrong, and has been significantly influenced by the culture and ambitions of the community here.

One of the most important things happening in EA over the next few months is going to be EA Global, the biggest EA and Rationality community event to date, happening throughout the month of August in three different locations: Oxford, Melbourne, and San Francisco (which is unfortunately already filled, despite us choosing the largest venue that Google had to offer).

The purpose of this post is to make a case for why it is a good idea to attend the event, and to serve as a hub for information that might be especially relevant to the LessWrong community (as well as an additional place to ask questions). I am one of the main organizers and very happy to answer any questions that you have.

Is it a good idea to attend EA Global?

This is a difficult question that obviously will not have a unique answer, but from the best of what I can tell, and for the majority of people reading this post, the answer seems to be "yes". The EA community has been quite successful at shaping the world for the better, and at building an epistemic community that seems to be effective at changing its mind and updating on evidence.

But there have been other people arguing in favor of supporting the EA movement, and I don't want to repeat everything that they said. Instead I want to focus on a more specific argument: "Given that I believe that EA is overall a promising movement, should I attend EA Global if I want to improve the world (according to my preferences)?"

The key question here is: Does attending the conference help the EA Movement succeed?

How attending EA Global helps the EA Movement succeed

It seems that the success of organizations is highly dependent on the interconnectedness of their members. In general a rule seems to hold: the better connected the social graph of your organization is, the more effectively it works.

In particular, any significant divide in an organization, any clustering into groups that do not communicate much with each other, seems to significantly reduce the output the organization produces. I wish we had better studies on this, and more sources to link to, but everything I've found so far points in this direction. The fact that HR departments are willing to spend extremely large sums of money to encourage the employees of organizations to interact socially with each other is definitely evidence for this being a good rule to follow (though far from conclusive).

What holds for most organizations should also hold for EA. If this is true, then the success of the EA Movement is significantly dependent on the interconnectedness of its members, in both the volume and the quality of its output.

But EA is not a corporation, and EA does not share a large office. If you graphed out the social graph of EA, it would look very clustered: the Bay Area cluster, the Oxford cluster, the Rationality cluster, the East Coast and West Coast clusters, and many small clusters all over Europe, with meetups and small social groups in different countries that have never talked to each other. EA is splintered into many groups, and if EA were a company, its HR department would be justified in spending a very significant chunk of resources on connecting those clusters as much as possible.

There are not many opportunities for us to increase the density of the EA social graph. There are other, minor conferences, and online interactions do part of the job, but the past EA summits were the main events at which people from different clusters of EA met each other for the first time. There they built lasting social connections, and actually caused these separate clusters in EA to become connected. This had a massive positive effect on the output of EA.

Examples: 

 

  • Ben Kuhn put me into contact with Ajeya Cotra, resulting in the two of us running a whole undergraduate class on Effective Altruism, which included Giving Games to various EA charities and was funded with over $10,000. (You can find documentation of the class here.)
  • The last EA summit resulted in both Tyler Alterman and Kerry Vaughan being hired by CEA; they are now full-time employees who are significantly involved in helping CEA set up a branch in the US.
  • The summit and retreat last year caused significant collaboration between CFAR, Leverage, CEA and FHI, resulting in multiple instances of these organizations helping each other coordinate fundraising, hiring, and logistics.

 

This is going to be even more true this year. If we want EA to succeed and continue shaping the world for the better, we want as many people as possible to come to the EA Global events, ideally from as many separate groups as possible. This means that you, especially if you feel somewhat disconnected from EA, should seriously consider coming. I estimate the benefit to be much bigger than the cost of a plane ticket and the entrance ticket (~$500). If you do find yourself significantly constrained by financial resources, consider applying for financial aid, and we will very likely be able to arrange something for you. By coming, you provide a service to the EA community at large.

How do I attend EA Global? 

As I said above, we are organizing three different events in three different locations: Oxford, Melbourne and San Francisco. We are particularly lacking representation from many different groups in mainland Europe, and it would be great if they could make it to Oxford. Oxford also has the most open spots and is going to be much bigger than the Melbourne event (300 vs. 100).  

If you want to apply for Oxford go to: eaglobal.org/oxford

If you want to apply for Melbourne go to: eaglobal.org/melbourne

If you require financial aid, you will be able to put in a request after we've sent you an invitation. 

Taking the reins at MIRI

62 So8res 03 June 2015 11:52PM

Hi all. In a few hours I'll be taking over as executive director at MIRI. The LessWrong community has played a key role in MIRI's history, and I hope to retain and build your support as (with more and more people joining the global conversation about long-term AI risks & benefits) MIRI moves towards the mainstream.

Below I've cross-posted my introductory post on the MIRI blog, which went live a few hours ago. The short version is: there are very exciting times ahead, and I'm honored to be here. Many of you already know me in person or through my blog posts, but for those of you who want to get to know me better, I'll be running an AMA on the effective altruism forum at 3PM Pacific on Thursday June 11th.

I extend to all of you my thanks and appreciation for the support that so many members of this community have given to MIRI throughout the years.

continue reading »
