The aim of the game is simple: try to guess how correlated the two variables in a scatter plot are. The closer your guess is to the true correlation, the better.
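Presumably the "true correlation" being guessed is the Pearson coefficient of the plotted points. A minimal sketch of how a game like this might compute it and score a guess — note the scoring rule here is my own invention, not necessarily the game's:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def score(guess, true_r):
    # Closer guesses score higher; 1.0 is a perfect guess.
    return max(0.0, 1.0 - abs(guess - true_r))
```

A perfectly linear scatter gives `pearson(...) == 1.0`, so guessing 1.0 for it would score a perfect 1.0.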
I was in the programming channel of the LessWrong Slack this morning (it's a group-chat web app; all are welcome to ask for an invite if you'd like to chat with rationalists in a place that is not the archaic, transient mess that is IRC, though irc.freenode.net/#lesswrong is not so terrible a place to hang out either, if you're into that), and a member expressed difficulty maintaining their interest in programming as a means to the end of earning to give. I've heard it said more than once that you can't teach passion, but I'd always taken that as the empty sputtering of those who simply do not know what passion is or what inspires it. So, since we two have overlapping aesthetics and aspirations, I decided to try to articulate my own passion for programming. Maybe it would transfer.
Here's what I wrote, more or less:
...So, the problem that most philosophers in academia trip over, get impaled on, and worship for the rest of their careers, is that they're using great lumbering conceptual frameworks that they do not and cannot ever understand — that is, natural language and common-sense reasoning, as evolved by a blind, flawed process that never set out to write...
Why haven't the good people at GiveWell written more about anti-aging research?
According to GiveWell, the AMF can save a life for $3.4e3. Let's say it's a young life with 5e1 years to live. A year is 3.1e7 seconds, so saving a life gives humanity 1.5e9 seconds, or about 5e5 sec/$.
Suppose you could invest $1e6 in medical research to buy a 50-second increase in global life expectancy. Approximating global population as 1e10, this buys humanity 5e11 seconds, or about the same value of 5e5 sec/$.
Buying a 50-second increase in life expectancy for a megabuck seems very doable. In practice, any particular medical innovation wouldn't give 50 seconds to everyone, but instead would give a larger chunk of time (say, a week) to a smaller number of people suffering from a specific condition. But the math could work out the same.
Of course, it could turn out that the cost of extending humanity's aggregate lifespan with medical research is much more than $5e5/sec. But it could also turn out to be much cheaper than that.
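The back-of-the-envelope numbers above can be checked in a few lines (all figures are the rough estimates from this comment — only the $3.4e3 cost-per-life is GiveWell's):

```python
SECONDS_PER_YEAR = 3.1e7

# AMF: $3.4e3 per life saved; assume a young life with 5e1 years remaining.
amf_seconds = 5e1 * SECONDS_PER_YEAR           # ~1.55e9 seconds per life
amf_sec_per_dollar = amf_seconds / 3.4e3       # ~4.6e5 sec/$

# Hypothetical research: $1e6 buys a 50-second rise in life expectancy
# across an approximated global population of 1e10.
research_seconds = 50 * 1e10                   # 5e11 seconds
research_sec_per_dollar = research_seconds / 1e6  # exactly 5e5 sec/$
```

Both routes come out around 5e5 seconds of life per dollar, which is the point of the comparison.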
ETA: GiveWell has in fact done a lot of research on this theme, thanks to ChristianKl for pointing this out below.
For AMF it's a lot easier to estimate the effect than it is for anti-aging research. GiveWell deliberately started with a focus on interventions for which they can study the effect.
GiveWell writes:
Medical research: As of November 2011, we are just beginning to consider the cause of medical research. Conceptually, we find this cause promising because it is possible that a relatively small amount spent on research and development could result in new disease-fighting technology that could be used to save and improve many lives throughout the world. However, we do not yet have a good sense of whether this cause has a strong track record of turning charitable dollars into lives saved and improved.
You can find a bit of their data gathering at http://www.givewell.org/node/1339
More recently, GiveWell Labs (since renamed the Open Philanthropy Project) has put more emphasis in that direction.
Articles they have written include:
http://blog.givewell.org/2013/12/26/scientific-research-funding/
Why explore scientific research? We expect it to be a difficult and long-term project to gain competence in scientific research funding.
http://blog.givewell.org/2014/01/07/exploring-life-sciences...
Here they found dopamine to encode some superposed error signals about actual and counterfactual reward:
http://www.pnas.org/content/early/2015/11/18/1513619112.abstract
Could that be related to priors and likelihoods?
Significance
...There is an abundance of circumstantial evidence (primarily work in nonhuman animal models) suggesting that dopamine transients serve as experience-dependent learning signals. This report establishes, to our knowledge, the first direct demonstration that subsecond fluctuations in dopamine concentration in the human striatum combin...
I wonder if starting a GiveWell-like organization focused on evaluating the cost-effectiveness of anti-aging research would be a more effective way to fund the best anti-aging research than earning to give. Attracting a Moskovitz-level funder would let us more than fully fund SENS (conditional on SENS still seeming like the best use of funds after more research was done).
Thoughts this week:
Effective Altruism
(1)
All I want for Christmas...is for someone from the effective altruism movement to take seriously the prospect of using sterile-insect techniques and more advanced gene drives against the tsetse fly. This might control African sleeping sickness, a neglected disease, and more importantly, unlock what is largely suspected — according to GiveWell — to be THE keystone cause of malnutrition in Africa, through an extensive causal pathway. I feel EAs are getting too stuck on causes that were identified early in the movement a...
As a pampered modern person, the worst part of my life is washing dishes. (Or, rinsing dishes and loading the dishwasher.) How long before I can buy a robot to automate this for me?
Short papers get cited more often. Should we believe that the correlation is due to causal factors? Should aspiring researchers keep their titles as short as possible?
The science myths that will not die
False beliefs and wishful thinking about the human experience are common. They are hurting people — and holding back science.
The Strangest, Most Spectacular Bridge Collapse (And How We Got It Wrong)
...Bridge building has been bedeviling humans for a long time, probably since the 1st century. That may explain why, even when they can't carry lots of people or things, bridges are particularly good at carrying lots of meaning: breaking, burning, going too far, going nowhere; the bridges between cultures, across generations, the ones we’ll cross when we come to them. To this day, however, the meanings of Gertie's collapse and that unforgettable footage—"among the most dramatic an
Notes on the Oxford IUT workshop by Brian Conrad
...Since he was asked by a variety of people for his thoughts about the workshop, Brian wrote the following summary. He hopes that non-specialists may also learn something from these notes concerning the present situation. Forthcoming articles in Nature and Quanta on the workshop are aimed at the general public. This writeup has the following structure:
Background
What has delayed wider understanding of the ideas?
What is Inter-universal Teichmuller Theory (IUTT = IUT)?
What happened at the confe
This is a kind of repost of something I shared on the LW Slack.
Someone mentioned that "the ability to be accurately arrogant is good". This was my reply:
...One aspect of arrogance is that it is how some competent people with high self-esteem are perceived. I certainly was often perceived as arrogant; at least I was called that quite often when I was younger, and judging from some recent discussions that reflected heavily on it, I probably made that impression for most of my life. I didn't and couldn't understand why. I certainly didn...
Here's a letter to an editor.
"The Dec. 6 Wonkblog excerpt “Millions and millions of guns” [Outlook] included a graph that showed that U.S. residents own 357 million firearms, up from about 240 million (estimated from the graph) in 1995, for an increase of about 48 percent. The article categorically stated that “[m]ore guns means more gun deaths.” How many more gun deaths were there because of this drastic increase in guns? Using data from the FBI Uniform Crime Reports, total gun murders went from 13,673 in 1995 to 8,454 in 2013 — a decrease in gun dea...
How much should you use LW, and how? Should you consistently read the articles on Main? What about discussion? What about the comments? Or should a more case-by-case system be used?
Does anyone know of a good program for eye training? I would like to try to become a little less near-sighted by straining to make out things at the edge of my range of good vision. I know near-sightedness means my eyeball is misshapen, but I am hoping my brain can fix a bit of the distortion in software. Currently I am using random printed-out eye charts, and I have gotten a bit better over time, but printing out the charts is tedious.
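On the tedium of printing charts: a fresh random chart can be generated on screen each session. A minimal sketch using the ten Sloan optotype letters (the letter set real charts use); the row count and one-letter-per-row growth are arbitrary choices of mine, not any clinical standard:

```python
import random

SLOAN = "CDHKNORSVZ"  # the ten Sloan optotype letters used on standard eye charts

def random_chart(rows=8, seed=None):
    # Each row gets one more letter than the last, Snellen-chart style.
    rng = random.Random(seed)
    return [" ".join(rng.choice(SLOAN) for _ in range(r + 1))
            for r in range(rows)]

for line in random_chart(seed=42):
    print(line)
```

Viewing it from across the room (or shrinking the font per row) would substitute for the size gradient of a printed chart.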
Well, it depends on what you mean by "rationality". Here's something I posted in 2014, slightly revised:
If not rationality, then what?
LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims: update your beliefs on the evidence so that they predict well, and act on your beliefs' predictions to accomplish your goals.
Or, alternately: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, and correct predictions enable goal-accomplishing actions. And the way to have correct beliefs is to update your beliefs when their predictions fail.
We can call these the rules of Bayes' world, the world in which updating and prediction are effective at accomplishing human goals. But Bayes' world is not the only imaginable world. What if we deny each of these premises and see what we get? Other than Bayes' world, which other worlds might we be living in?
To be clear, I'm not talking about alternatives to Bayesian probability as a mathematical or engineering tool. I'm talking about imaginable worlds in which Bayesian probability is not a good model for human knowledge and action.
Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra's world, the world of tragedy — in which those people who know best what the future will bring, are most incapable of doing anything about it.
In the world of heroic myth, it is not oracles (good predictors) but rather heroes and villains (strong-willed people) who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Oracles possess the truth to arbitrary precision, but they accomplish nothing by it. Heroes and villains come to their predicted triumphs or fates not by believing and making use of prediction, but by ignoring or defying it.
Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals. The facts of the external world are relatively close to our priors; not much updating is needed there — but our goals are not known to us initially. In fact, we may be thoroughly deceived about what our goals are, or what satisfying them would look like.
We might consider this to be Buddha's world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. In this world, when we choose actions that are unsatisfactory, it isn't so much because we are acting on faulty beliefs about the external world, but because we are pursuing goals that are illusory or empty of satisfaction.
There are other models as well, that could be extrapolated from denying other premises (explicit or implicit) of Bayes' world. Each of these models should relate prediction, action, and goals in different ways: We might imagine Lovecraft's world (knowledge causes suffering), Qoheleth's world (maybe similar to Buddha's), Job's world, or Nietzsche's world.
Each of these models of the world — Bayes' world, Cassandra's world, Buddha's world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes' world, what evidence might suggest that we are actually in one of the others?
This is a perspective I hadn't seen mentioned before, and it helps me understand why a friend of mine gives low value to the goal-oriented rationality material I've mentioned to him.
Thank you very much for this post!
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.