
X-risk update: Gliese 710 will pass through the Oort cloud in 1.35 Myr

4 morganism 08 January 2017 07:58PM

I was tracking these runaway stars for an SF story I had in mind, but this is the closest one I have heard of yet, and the arXiv paper describes one that also passed through 2.5 million years ago.

 

Gliese 710 will pass the Sun even closer

Close approach parameters recalculated based on the first Gaia data release

http://www.aanda.org/articles/aa/abs/2016/11/aa29835-16/aa29835-16.html

 

Close encounters of the stellar kind

https://arxiv.org/abs/1412.3648

 

tl;dr article:

http://www.businessinsider.com/star-hurting-towards-solar-system-2016-12

"Gliese 710 is about half the size of our sun, and it is set to reach Earth in 1.35 million years, according to a paper published in the journal Astronomy & Astrophysics in November.

And when it arrives, the star could end up a mere 77 light-days away from Earth — one light-day being the equivalent of how far light travels in one day, which is about 26 billion kilometers, the researchers worked out.

As far as we know, Gliese 710 isn't set to collide directly with Earth, but it will be passing through the Oort Cloud, a shell of trillions of icy objects at the furthest reaches of our solar system. "
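A quick back-of-the-envelope check of the article's numbers (my own approximations for the speed of light, AU, and Oort cloud extent, not figures from the paper):

```python
# Sanity check of "77 light-days" against the Oort cloud's rough extent.
C_KM_PER_S = 299_792.458        # speed of light in km/s
SECONDS_PER_DAY = 86_400
KM_PER_AU = 149_597_870.7       # one astronomical unit in km

# One light-day: distance light covers in a day
light_day_km = C_KM_PER_S * SECONDS_PER_DAY   # ~2.6e10 km, matching "about 26 billion km"

closest_approach_km = 77 * light_day_km
closest_approach_au = closest_approach_km / KM_PER_AU  # roughly 13,000 AU

# The Oort cloud is commonly taken to span ~2,000 to ~100,000 AU,
# so a pass at ~13,000 AU is well inside it.
print(f"1 light-day ≈ {light_day_km:.2e} km")
print(f"77 light-days ≈ {closest_approach_au:,.0f} AU")
```

So the article's "26 billion kilometers" per light-day checks out, and the closest approach lands comfortably inside the usually quoted Oort cloud range.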

 

Seems like a great opportunity to send out some interstellar probes. The star will be trailing lots of interstellar medium (ISM), free gas that would help bring a ramjet up to speed, and you could track it until you could curve toward another destination. Likewise, a solar-sail probe launched out in front of it by laser could "hitchhike" and take some deep-space ISM and EM measurements.

Can we think of some other opportunities this might present? If we are past the filter by then, we will probably already have samples of Oort objects, but it looks like they will be delivering some to us anyway...

The Charity Impact Calculator

6 Gleb_Tsipursky 26 January 2016 05:01AM

This will be of interest mainly to EA-friendly LWers, and is cross-posted on the EA Forum, The Life You Can Save, and Intentional Insights.

 

The Life You Can Save has an excellent tool to help people easily visualize and quantify the impact of their giving: the Impact Calculator. It lets people enter any amount of money, click on a charity, and see how much of an impact that money can have. It's a really easy way to promote effective giving to non-EAs, but even EAs who haven't seen it before can benefit. I certainly did when I first played around with it. So I wrote a blog post, copy-pasted below, for The Life You Can Save and for Intentional Insights, to help people learn about the Impact Calculator. If you like the blog post, please share the link to The Life You Can Save blog rather than to this post. Any feedback on the blog post itself is welcome!

 

__________________________________________________________________________

 

How a Calculator Helped Me Multiply My Giving

It feels great to see hope light up in the eyes of a beggar on the street as you stop to look at them while others pass by without a glance. Their face widens into a smile as you reach into your pocket and take out your wallet. "Thank you so much" is such a heartwarming phrase to hear as you pull out five bucks and put the money in the hat in front of them. You walk away with your heart beaming as you imagine them getting a nice warm meal at McDonald's thanks to your generosity.

Yet with the help of a calculator, I learned how to multiply that positive experience manifold! Imagine that when you give five dollars, you don’t give just to one person, but to seven people. When you reach into your pocket, you see seven smiles. When you put the money in the hat, you hear seven people say “Thank you so much.”

The Life You Can Save has an Impact Calculator that helps you calculate the impact of your giving. You can put in any amount of money you want, then click on a charity of your choice, and see how much of an impact your money can have.

When I learned about this calculator, I decided to check out how far $5 can take me. I went through various charities listed there and saw the positive difference that my money can make.

I was especially struck by one charity: GiveDirectly, a nonprofit that enables you to give directly to people in East Africa. When I put in $5, I saw that GiveDirectly transfers that money directly to poor people who live on an average of $0.65 per day. You certainly can't buy a McDonald's meal for that, but $0.65 goes far in East Africa.
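The "seven people" framing above comes out of simple arithmetic on the figures in the post (these are the post's illustrative numbers, not GiveDirectly's actual accounting):

```python
# Rough arithmetic behind the "seven smiles" claim.
donation = 5.00            # dollars given, per the post's example
income_per_day = 0.65      # average daily income of recipients, per the post

# How many person-days of income the donation covers
person_days = donation / income_per_day   # about 7.7

print(f"${donation:.2f} covers about {person_days:.1f} person-days of income")
```

That is, the same $5 that buys one meal here covers roughly a week's income for one person (or a day's income for seven or so people) at the stated rate.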

That really struck me. I realized I can get a much higher benefit from giving directly to people in the developing world than from giving to one person on the street here in the US. I don't see those seven people in front of me, and thus don't pay attention to the impact I can have on them, a thinking error called attentional bias. Yet if I keep this thinking error in mind, I can solve what is known as the "drowning child problem" in charitable giving: not intuitively valuing the children who are drowning out of my sight. If I remember that there are poor people in the developing world, just like the poor person I see on the street in front of me, I can remember that my generosity makes a much higher impact per dollar there than in the US.

GiveDirectly bridges that gap between me and the poor people across the globe. This organization locates poor people who can benefit most from cash transfers, enrolls them in its program, and then provides each household with about a thousand dollars to spend as it wishes. The large size of this cash transfer results in a much bigger impact than a small donation. Moreover, since the cash transfer is unconditional, the poor person can have true dignity and spend it on whatever most benefits them.

Helida, for example, used the cash transfer she got to build a new house. You wouldn't intuitively think that was the most useful thing for her to do, would you? But this is what she needed most. She was happy that as a result of the cash transfer, "I have a metal roof over my head and I can safely store my farm produce without worries." She is now much more empowered to take care of herself and her large family.

What a wonderful outcome of GiveDirectly’s work! Can you imagine building a new house in the United States on a thousand dollars? Well, this is why your direct donations go a lot further in East Africa.

With GiveDirectly, you can be much more confident about the outcome of your generosity. I know that when I give to a homeless person, a part of me always wonders whether he will spend the money on a bottle of cheap vodka. This is why I really appreciate that GiveDirectly keeps in touch and follows up with the people enrolled in its programs. They are scrupulous about sharing the consequences of the transfers, so you know what you are accomplishing with your generous gifts.

GiveDirectly is backed by rigorous evidence. They conduct multiple randomized controlled trials of their impact, a gold standard of evidence. The research shows that cash transfer recipients have much better health and lives as a result of the transfer, much more so than with most types of anti-poverty interventions. This evidence-based approach is why GiveDirectly is highly endorsed by well-respected charity evaluators such as GiveWell and The Life You Can Save, which are part of the Effective Altruism movement that strives to figure out the best research-informed means of doing the most good per dollar.

So next time you pass someone begging on the street, think about GiveDirectly, since you can get seven times as much impact, both for your emotional self and for the world as a whole. What I do myself is this: each time I choose to give to a homeless person, I set aside the same amount of money to donate through GiveDirectly. That way, I get to see the smile and hear the "thank you" in person, and I also know that I am making a much more impactful gift as well.

Check out the Impact Calculator for yourself to see the kind of charities available there and learn about the impact you can make. Perhaps direct giving is not to your taste, but there are over a dozen other options for you to choose from. Whatever you choose, aim to multiply your generosity to achieve your giving goals!

Compilation of currently existing project ideas to significantly impact the world

15 diegocaleiro 08 March 2015 04:59AM

One of the problems the LW/EA/CFAR/X-risk community has been faced with, recently discussed on Slate Star Codex, is the absorption of people interested in researching, volunteering, helping, and participating in the community. The problem is worth subdividing into how to get new people into the social community, which is addressed at the link above, and the separate problem of absorbing their skills, abilities, and willingness to volunteer, to which this post is dedicated:

What should a specific person, Smith, do to help in the project of preventing X-risk, improving the world, and saving lives? We assume here that Smith will not be a donor (in which case the response would be "donate"), joined the community not long ago, and has skill set X.

Soon this problem will become worse due to the influx of people brought in by the soon-to-be-published books by MacAskill, Yudkowsky, and Singer. There will be more people wanting to do something, and able to do some sorts of projects, who are not being allocated any specific project that matches their skill set and values.

Now is a good time to solve it. I was talking about this problem today with Stephen Frey, and we agreed it would be a good idea to have a list of specific technical or research projects that can be broken down into smaller chunks for people to pick up and do: a Getting Things Done list for researchers and technology designers. Preferably these would be tractable projects that can be done in fewer than three months. There are some lists of open problems in AI and Superintelligence control, but not for many X-risks or other problems that parts of the community frequently consider important.

So I decided to make a compilation of the questions and problems we already have listed here, and then ask someone (Oliver Habryka or a volunteer in the comment section here) to transform the compiled set into a standardized format.

A tentative category list

Area: X-risk, AI, Anti-aging, Cryonics, Rationality, IA, Self-Improvement, Differential Technological Development, Strategy, Civilizational Inadequacy, etc... describes what you have to value/disvalue in order for this project to match your values.

Project: description of which actions need to be taken in 3 month period for this project to be considered complete.

Context: if part of a larger project, which is it, and how will it connect to other parts. Also justification for that project.

Notes: any relevant constraints that may play a role, time-sensitivity, costs, number of people, location, etc...

For example, take the first question on Luke's list of Superintelligence research questions:

  1. How strongly does IQ predict rationality, metacognition, and philosophical sophistication, especially in the far right tail of the IQ distribution? Relevant to the interaction of intelligence amplification and FAI chances. See the project guide here.

Would be rendered as

Area: FAI ; Project: Read Rationality and the Reflective Mind, by Keith Stanovich, to become familiar with the model of algorithmic and reflective minds. For this project, investigating metacognition means investigating the reflective mind. Find ways to test Stanovich's predictions and answer the questions in the previous section. Design the study to give participants tests which a high IQ should help with and tests which a high IQ should not help with. This step will involve searching through Rationality and the Reflective Mind, and then directly contacting Stanovich to ask which tests he has not yet conducted. Context: this is the first part of a two-part sub-study investigating IQ and metacognition, and needs to be followed by a new study investigating the correlation. These parts are complementary with the study of IQ and philosophical success, and are relevant for assessing the impact that intelligence augmentation will have on our likelihood of generating Friendly Artificial Intelligence. Notes: needs to be conducted by someone with a researcher affiliation and the capacity to conduct a study on human subjects later; six-month commitment; some science writing experience.

Edit: Here is a file in which to start compiling projects - thanks Stephen!

This is the idea: to gather a comprehensive list of research or technical questions for the areas above, transform them into projects that can be more easily parsed and assigned than their currently scattered counterparts, and make the list available to those who want to work on them. This post is the first step in the collection, so if there are lists anywhere of projects or research questions relevant to any of the areas cited above, please post a link to them in the comments (special kudos if you post it in the format above). Also let me know if you would like to volunteer for this. If you remember a question or specific project but don't see it in any list or in the comments, post it. When we create a standardized list for people to look through, it will be separated by area, so people can view only projects related to what they value.

Compilation:

Lists of ideas and projects:

Superintelligence Strategic List - Muelhauser

Mechanisms of Aging - Ben Best

Cryonics Strategy Space - Froolow

Ideas and projects:

Go to Mars - Musk

Make it easy for people within the community to move to US, UK.

Preserve Brains

Find moral enhancers that improve global cooperation as well as intra-group cooperation

Open Borders

...

...

 

How to measure optimisation power

8 Stuart_Armstrong 25 May 2012 11:07AM

As every school child knows, an advanced AI can be seen as an optimisation process - something that hits a very narrow target in the space of possibilities. The Less Wrong wiki entry proposes some measure of optimisation power:

One way to think mathematically about optimization, like evidence, is in information-theoretic bits. We take the base-two logarithm of the reciprocal of the probability of the result. A one-in-a-million solution (a solution so good relative to your preference ordering that it would take a million random tries to find something that good or better) can be said to have log2(1,000,000) = 19.9 bits of optimization.
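The wiki's definition can be sketched in a few lines (my own minimal rendering of the quoted definition, not an established implementation):

```python
import math

def optimization_power_bits(p: float) -> float:
    """Bits of optimization for an outcome whose rank-probability is p,
    i.e. the chance that a random try does at least this well,
    per the wiki definition quoted above: log2(1/p)."""
    return math.log2(1.0 / p)

# A one-in-a-million solution, as in the quote:
bits = optimization_power_bits(1e-6)   # ~19.93, the "19.9 bits" cited
print(f"{bits:.1f} bits")
```

Note that everything rests on defining the probability p, which is exactly where the rigor problem discussed below comes in.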

This doesn't seem like a fully rigorous definition - what exactly is meant by "a million random tries"? Also, it measures how hard it would be to come up with that solution, but not how good that solution is. An AI that comes up with a solution that is ten thousand bits harder to find, but that is only a tiny bit better than the human solution, is not one to fear.

Other potential measurements could be taking any of the metrics I suggested in the reduced impact post, but used in reverse: to measure large deviations from the status quo, not small ones.

Anyway, before I reinvent the coloured wheel, I just wanted to check whether there is a fully defined, agreed-upon measure of optimisation power.