"Announcing" the "Longevity for All" Short Movie Prize
The local Belgian/European life-extension non-profit Heales is giving away prizes for whoever can make an interesting short movie about life extension. The first prize is €3000 (around $3386 as of today), other prizes being various gifts. You more or less just need to send a link pointing to the uploaded media along with your contact info to info@heales.org once you're done.
While we're at it, you don't need to be European, let alone Belgian, to participate, and it doesn't even need to be a short movie. For instance, a comic strip would fall within the scope of the rules as specified here: (link to a pdf file) (or see this page on fightaging.org). Also, yes, the deadline is currently a fairly short-term September 21st, 2015, but it is extremely likely this will be extended (this might be a pun).
I'll conclude by suggesting you read the official pdf with rules and explanations if you care about money or life extension (who doesn't?), and remind everyone of what happened last time almost everyone thought they shouldn't grab free contest money announced on LessWrong (hint: few enough people participated that everyone who entered earned something). The very reason this one's due date will likely be extended is that (very, very) few people have participated so far.
(Ah yes, the only caveat I can think of: if the product of quality and quantity of submissions is definitely too low (i.e. it's just you, plus that one guy who spent 3 minutes drawing stick figures, and your submission is coming a close second), then the contest may be called off after one or two deadline extensions, as also covered in the aforementioned rules.)
I played the AI Box Experiment again! (and lost both games)
I have won a second game of AI box against a gatekeeper who wished to remain Anonymous.
This puts my AI Box Experiment record at 3 wins and 3 losses.
I attempted the AI Box Experiment again! (And won - Twice!)
Summary
Furthermore, in the last thread I have asserted that
Rather than my loss making this problem feel harder, I've become convinced that this is not merely possible but actually ridiculously easy, and a lot easier than most people assume.
It would be quite bad for me to assert this without backing it up with a victory. So I did.
PS: Bored of regular LessWrong? Check out the LessWrong IRC! We have cake.
[LINK] Soylent crowdfunding
Rob Rhinehart's food replacement Soylent now has a crowdfunding campaign.
Soylent frees you from the time and money spent shopping, cooking and cleaning, puts you in excellent health, and vastly reduces your environmental impact by eliminating much of the waste and harm coming from agriculture, livestock, and food-related trash.
If you're interested in one or more of these benefits, send in some money! There is also a new blog post.
Empirical claims, preference claims, and attitude claims
What do the following statements have in common?
- "Atlas Shrugged is the best book ever written."
- "You break it, you buy it."
- "Earth is the most interesting planet in the solar system."
My answer: None of them are falsifiable claims about the nature of reality. They're all closer to what one might call "opinions". But what is an "opinion", exactly?
There's already been some discussion on Less Wrong about what exactly it means for a claim to be meaningful. This post focuses on the negative definition of meaning: what sort of statements do people make where the primary content of the statement is non-empirical? The idea here is similar to the idea behind anti-virus software: Even if you can't rigorously describe what programs are safe to run on your computer, there still may be utility in keeping a database of programs that are known to be unsafe.
Why is it useful to be able to flag non-empirical claims? Well, for one thing, you can believe whatever you want about them! And it seems likely that this pattern-matching approach works better for flagging them than a more constructive definition.
Always check your assertions... (Winning the Lottery)
"In 2005, Dr. Zhang was having an ongoing discussion with friends about the Lottery, with Dr. Zhang taking the view that it offered poor odds and was a tax mainly on poor people. To bolster his argument, he began analyzing the Massachusetts Lottery’s various games. But when he got to Cash WinFall, he was shocked to find that during roll-down drawings the odds were in the bettor’s favor."
Full story here - it's rather engrossing.
What have you recently tried, and failed at?
Kaj Sotala said:
[I]f you punish yourself for trying and failing, you stop wanting to try in the first place, as it becomes associated with the negative emotions. Also, accepting and being okay with the occasional failure makes you treat it as a genuine choice where you have agency, not something that you're forced to do against your will.
So maybe we should celebrate failed attempts more often ... I for one can't think of anything I've failed at recently, which is probably a sign that I'm not trying enough new things.
So, what specific things have you failed at recently?
Rationality and Winning
Someone who claims to have read "the vast majority" of the Sequences recently misinterpreted me to be saying that I "accept 'life success' as an important metric for rationality." This may be a common confusion among LessWrongers due to statements like "rationality is systematized winning" and "be careful… any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility."
So, let me explain why Actual Winning isn't a strong measure of rationality.
In cognitive science, the "Standard Picture" (Stein 1996) of rationality is that rationality is a normative concept defined by logic, Bayesian probability theory, and Bayesian decision theory (aka "rational choice theory"). (Also see the standard textbooks on judgment and decision-making, e.g. Thinking and Deciding and Rational Choice in an Uncertain World.) Oaksford & Chater (2012) explain:
Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.
From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.
Thus, one could have highly rational beliefs and make highly rational choices and still fail to win due to akrasia, lack of resources, lack of intelligence, and so on. Like intelligence and money, rationality is only a ceteris paribus predictor of success.
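The "consistency over degrees of belief" notion from the quote above can be made concrete. This is a minimal sketch (not from the original post, and a deliberate simplification of probabilistic coherence): credences over a mutually exclusive, exhaustive set of hypotheses are coherent only if each lies in [0, 1] and they sum to 1.

```python
def is_coherent(credences, tol=1e-9):
    """Check a minimal coherence condition: credences over an exhaustive,
    mutually exclusive partition must each lie in [0, 1] and sum to 1."""
    return (all(0.0 <= p <= 1.0 for p in credences)
            and abs(sum(credences) - 1.0) < tol)

print(is_coherent([0.7, 0.2, 0.1]))  # consistent degrees of belief
print(is_coherent([0.8, 0.5]))       # incoherent: sums to 1.3
```

On this picture, the theory doesn't tell you *what* to believe; it only flags the second agent's credences as mutually incompatible.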
So while it's empirically true (Stanovich 2010) that rationality is a predictor of life success, it's a weak one. (At least, it's a weak predictor of success at the levels of human rationality we are capable of training today.) If you want to more reliably achieve life success, I recommend inheriting a billion dollars or, failing that, being born+raised to have an excellent work ethic and low akrasia.
The reason you should "be careful… any time you find yourself defining the [rationalist] as someone other than the agent who is currently smiling from on top of a giant heap of utility" is because you should "never end up envying someone else's mere choices." You are still allowed to envy their resources, intelligence, work ethic, mastery over akrasia, and other predictors of success.
Can't Pursue the Art for its Own Sake? Really?
Can anyone tell me why it is that if I use my rationality exclusively to improve my conception of rationality, I fall into an infinite recursion? EY says this in The Twelve Virtues and in Something to Protect, but I don't know what his argument is. He goes so far as to say that you must subordinate rationality to a higher value.
I understand that by committing yourself to your rationality you lose out on the chance to notice if your conception of rationality is wrong. But what if I use the reliability of win that a given conception of rationality offers me as the only guide to how correct that conception is. I can test reliability of win by taking a bunch of different problems with known answers that I don't know, solving them using my current conception of rationality and solving them using the alternative conception of rationality I want to test, then checking the answers I arrived at with each conception against the right answers. I could also take a bunch of unsolved problems and attack them from both conceptions of rationality, and see which one I get the most solutions with. If I solve a set of problems with one, that isn't a subset of the set of problems I solved with the other, then I'll see if I can somehow take the union of the two conceptions. And, though I'm still not sure enough about this method to use it, I suppose I could also figure out the relative reliability of two conceptions by making general arguments about the structures of those conceptions; if one conception is "do that which the great teacher says" and the other is "do that which has maximal expected utility", I would probably not have to solve problems using both conceptions to see which one most reliably leads to win.
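The testing procedure described above can be sketched in code. Everything here is a hypothetical stand-in (the "conceptions" are toy solver functions, not real methods of rationality): score each conception on problems with known answers and compare reliability.

```python
# Hypothetical sketch of the benchmarking procedure described above:
# a "conception of rationality" is modeled as a solver function, scored
# on problems whose answers are known.

def score(solver, problems):
    """Fraction of (problem, known_answer) pairs the solver gets right."""
    return sum(solver(q) == a for q, a in problems) / len(problems)

# Toy stand-ins for two competing conceptions:
problems = [(1, 2), (2, 4), (3, 6), (4, 8)]   # (input, known answer)
conception_a = lambda q: q * 2                # reliably correct rule
conception_b = lambda q: q + 2                # only occasionally correct

print(score(conception_a, problems))  # 1.0
print(score(conception_b, problems))  # 0.25
```

If the sets of problems each conception solves aren't nested, the union-taking step the post describes would then apply.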
And what if my goal is to become as epistemically rational as possible? Then I would just be looking for the conception of rationality that leads to truth most reliably, testing truth by predictive power.
And if being rational for its own sake just doesn't seem like its valuable enough to motivate me to do all the hard work it requires, let's assume that I really really care about picking the best conception of rationality I know of, much more than I care about my own life.
It seems to me that if this is how I do rationality for its own sake — always looking for the conception of goal-oriented rationality which leads to win most reliably, and the conception of epistemic rationality which leads to truth most reliably — then I'll always switch to any conception I find that is less mistaken than mine, and stick with mine when presented with a conception that is more mistaken, provided I am careful enough about my testing. And if that means I practice rationality for its own sake, so what? I practice music for its own sake too. I don't think that's the only or best reason to pursue rationality, certainly some other good and common reasons are if you wanna figure something out or win. And when I do eventually find something I wanna win or figure out that no one else has (no shortage of those), if I can't, I'll know that my current conception isn't good enough. I'll be able to correct my conception by winning or figuring it out, and then thinking about what was missing from my view of rationality that wouldn't let me do that before. But that wouldn't mean that I care more about winning or figuring some special fact than I do about being as rational as possible; it would just mean that I consider my ability to solve problems a judge of my rationality.
I don't understand what I lose out on if I pursue the Art for its own sake in the way described above. If you do know of something I would lose out on, or if you know Yudkowsky's original argument showing the infinite recursion when you motivate yourself to be rational by your love of rationality, then please comment and help me out. Thanks ahead of time.
[Link] “How to seem good at everything: Stop doing stupid shit”
Possibly interesting article on winning: How to seem good at everything: Stop doing stupid shit
Summary, as I interpreted it: In practicing a skill, focus on increasing the minimum of the quality of the individual actions comprising performing the skill (because that is the greatest marginal benefit).
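One way to see the marginal-benefit claim is a toy model (an assumption of this sketch, not something from the article): if overall performance is the product of the qualities of the component actions, then improving the weakest action yields the largest gain.

```python
# Toy model: overall skill quality = product of component-action qualities,
# each in (0, 1]. One low-quality action drags the whole product down.
from math import prod

qualities = [0.9, 0.9, 0.9, 0.4]   # index 3 is the "stupid shit" action

def overall(qs):
    return prod(qs)

base = overall(qualities)

# Marginal benefit of adding +0.1 to each action in turn:
for i in range(len(qualities)):
    improved = qualities.copy()
    improved[i] += 0.1
    print(i, round(overall(improved) - base, 4))
# The largest gain comes from improving the minimum-quality action (index 3).
```

Under this multiplicative assumption, "stop doing stupid shit" beats polishing what you're already good at.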
[This article previously posted as an open thread comment.]
The basic questions of rationality
I've been on Less Wrong since its inception, around March 2009. I've read a lot and contributed a lot, and so now I'm more familiar with our jargon, I know of a few more scientific studies, and I might know a couple of useful tricks. Despite all my reading, however, I feel like I'm a far cry from learning rationality. I'm still a wannabe, not an amateur. Less Wrong has tons of information, but I feel like I haven't yet learned the answers to the basic questions of rationality.
I, personally, am a fan of the top-down approach to learning things. Whereas Less Wrong contains tons of useful facts that could, potentially, be put together to answer life's important questions, I really would find it easier if we started with the important questions, and then broke those down into smaller pieces that can be answered more easily.
And so, that's precisely what I'm going to do. Here are, as far as I can tell, the basic questions of rationality—the questions we're actually trying to answer here—along with what answers I've found:
Q: Given a question, how should we go about answering it? A: By gathering evidence effectively, and correctly applying reason and intuition.
- Q: How can we effectively gather relevant evidence? A: I don't know. (Controlled experiments? Asking people?)
- Q: How can we correctly apply reason? A: If you have infinite computational resources available, use probability theory.
- Q: We don't have infinite computational resources available, so what now? A: I don't know. (Apply Bayes' rule anyway? Just try to emulate what a hypercomputer would do?)
- Q: How can we successfully apply intuition? A: By repairing our biases, and developing habits that point us in the right direction under specific circumstances.
- Q: How can we find our biases? A: I don't know. (Read Less Wrong? What about our personal quirks? How can we notice those?)
- Q: Once we find a bias, how can we fix it? A: I don't know. (Apply a correction, test, repeat? Figure out how the bias feels?)
- Q: How can we find out what habits would be useful to develop? A: I don't know. (Examine our past successes and rationalize them?)
- Q: Once we decide on a habit, how can we develop it? A: I don't know. (Sheer practice?)
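For the "use probability theory" answer above, here is a minimal worked Bayes' rule computation (the numbers are illustrative only):

```python
# Bayes' rule: P(H|E) = P(E|H) P(H) / P(E),
# expanding P(E) over H and not-H.

def posterior(prior, likelihood, false_positive_rate):
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# A test that is 90% sensitive with a 10% false-positive rate,
# applied to a hypothesis with a 1% prior:
print(round(posterior(0.01, 0.9, 0.1), 4))  # 0.0833
```

Even strong evidence leaves the posterior low when the prior is low, which is the kind of result unaided intuition tends to miss.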
[Altruist Support] How to determine your utility function
Follows on from HELP! I want to do good.
What have I learned since last time? I've learned that people want to see an SIAI donation; I'll do it as soon as PayPal will let me. I've learned that people want more "how" and maybe more "doing"; I'll write a doing post soon, but I've got this and two other background posts to write first. I've learned that there's a nonzero level of interest in my project. I've learned that there's a diversity of opinions; it suggests if I'm wrong, then I'm at least wrong in an interesting way. I may have learned that signalling low status - to avoid intimidating outsiders - may be less of a good strategy than signalling that I know what I'm talking about. I've learned that I am prone to answering a question other than that which was asked.
Somewhere in the Less Wrong archives there is a deeply shocking, disturbing post. It's called Post Your Utility Function.
It's shocking because basically no-one had any idea. At the time I was still learning but I knew that having a utility function was important - that it was what made everything else make sense. But I didn't know what mine was supposed to be. And neither, apparently, did anyone else.
Eliezer commented 'in prescriptive terms, how do you "help" someone without a utility function?'. This post is an attempt to start to answer this question.
Firstly, what the utility function is and what it's not. It belongs to the field of instrumental rationality, not epistemic rationality; it is not part of the territory. Don't expect it to correspond to something physical.
Also, it's not supposed to model your revealed preferences - that is, your current behavior. If it did then it would mean you were already perfectly rational. If you don't feel that's the case then you need to look beyond your revealed preferences, toward what you really want.
In other words, the wrong way to determine your utility function is to think about what decisions you have made, or feel that you would make, in different situations. Which means there's a chance, just a chance, that up until now you've been doing it completely wrong. You haven't been getting what you wanted.
So in order to play the utility game, you need humility. You need to accept that you might not have been getting what you want, and that it might hurt. All those little subgoals, they might just have been getting you nowhere more quickly.
So only play if you want to.
The first thing is to understand the domain of the utility function. It's defined over entire world histories. You consider everything that has happened, and will happen, in your life and in the rest of the world. And out of that pops a number. That's the idea.
This complexity means that utility functions generally have to be defined somewhat vaguely. (Except if you're trying to build an AI). The complexity will also allow you a lot of flexibility in deciding what you really value.
The second thing is to think about your preferences. Set up some thought experiments to decide whether you prefer this outcome or that outcome. Don't think about what you'd actually do if put in a situation to decide between them; otherwise you will worry about the social consequences of making the "unethical" decision. If you value things other than your own happiness, don't ask which outcome you'd be happier in. Instead just ask: which outcome seems preferable? Which would you consider good news, and which bad news?
You can start writing things down if you like. One of the big things you'll need to think about is how much you value self versus everyone else. But this may matter less than you think, for reasons I'll get into later.
The third thing is to think about preferences between uncertain outcomes. This is somewhat technical, and I'd advise a shut-up-and-multiply approach. (You can try and go against that if you like, but you have to be careful not to end up in weirdness such as getting different answers if you phrase something as one big decision or as a series of identical little decisions).
The fourth thing is to ask whether this preference system satisfies the von Neumann-Morgenstern axioms. If it's at all sane, it probably will. (Again, this is somewhat technical).
The last thing is to ask yourself: if I prefer outcome A over outcome B, do I want to act in such a way that I bring about outcome A? (continue only if the answer here is "yes").
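The shut-up-and-multiply point in the third step can be sketched. This toy (an assumption of the sketch, with utility linear in the outcome) shows why expected utility avoids the weirdness mentioned there: one big decision and a series of identical independent little decisions get valued consistently.

```python
# Expected utility of a gamble, computed once and as a sum of
# identical independent sub-gambles (utility assumed linear here).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

one_play = [(0.5, 10.0), (0.5, -4.0)]   # a single coin-flip gamble
eu_one = expected_utility(one_play)

# Framed as 100 independent identical little decisions; by linearity of
# expectation, this equals the expected total of one big 100-play decision.
eu_series = 100 * eu_one

print(eu_one, eu_series)  # 3.0 300.0
```

A preference system that valued the series differently from the single big decision would fail exactly the consistency check the fourth step (the von Neumann-Morgenstern axioms) is about.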
That's it - you now have a shiny new utility function. And I want to help you optimize it. (Though it can grow and develop and change along with yourself; I want this to be a speculative process, not one in which you suddenly commit to an immutable life goal).
You probably don't feel that anything has changed. You're probably feeling and behaving exactly the same as you did before. But this is something I'll have to leave for a later post. Once you start really feeling that you want to maximize your utility then things will start to happen. You'll have something to protect.
Oh, you wanted to know my utility function? It goes something like this:
It's the sum of the things I value. Once a person is created, I value that person's life; I also value their happiness, fun and freedom of choice. I assign negative value to that person's disease, pain and sadness. I value concepts such as beauty and awesomeness. I assign a large additional negative value to the extinction of humanity. I weigh the happiness of myself and those close to me more highly than that of strangers, and this asymmetry becomes more pronounced when my overall well-being is low.
Four points:
- It's actually going to be a lot more complicated than that.
- I'm aware that it's not quantitative and no terminology is defined.
- I'm prepared to change it if someone points out a glaring mistake or problem, or if I just feel like it for some reason.
- People should not start criticizing my behavior for not adhering to this, at least not yet. (I have a lot of explaining still to do.)
HELP! I want to do good
There are people out there who want to do good in the world, but don't know how.
Maybe you are one of them.
Maybe you kind of feel that you should be into the "saving the world" stuff but aren't quite sure if it's for you. You'd have to be some kind of saint, right? That doesn't sound like you.
Maybe you really do feel it's you, but don't know where to start. You've read the "How to Save the World" guide and your reaction is, ok, I get it, now where do I start? A plan that starts "first, change your entire life" somehow doesn't sound like a very good plan.
All the guides on how to save the world, all the advice, all the essays on why cooperation is so hard, everything I've read so far, has missed one fundamental point.
If I could put it into words, it would be this:
AAAAAAAAAAAGGGHH WTF CRAP WHERE DO I START EEK BLURFBL
If that's your reaction then you're half way there. That's what you get when you finally grasp how much pointless pain, misery, risk, death there is in the world; just how much good could be done if everyone would get their act together; just how little anyone seems to care.
If you're still reading, then maybe this is you. A little bit.
And I want to help you.
How will I help you? That's the easy part. I'll start a community of aspiring rationalist do-gooders. If I can, I'll start it right here in the comments section of this post. If anything about this post speaks to you, let me know. At this point I just want to know whether there's anybody out there.
And what then? I'll listen to people's opinions, feelings and concerns. I'll post about my worldview and invite people to criticize, attack, tear it apart. Because it's not my worldview I care about. I care about making the world better. I have something to protect.
The posts will mainly be about what I don't see enough of on Less Wrong. About reconciling being rational with being human. Posts that encourage doing rather than thinking. I've had enough ideas that I can commit to writing 20 discussion posts over a reasonable timescale, although some might be quite short - just single ideas.
Someone mentioned there should be a "saving the world wiki". That sounds like a great idea and I'm sure that setting one up would be well within my power if someone else doesn't get around to it first.
But how I intend to help you is not the important part. The important part is why.
To answer that I'll need to take a couple of steps back.
Since basically forever, I've had vague, guilt-motivated feelings that I ought to be good. I ought to work towards making the world the place I wished it would be. I knew that others appeared to do good for greedy or selfish reasons; I wasn't like that. I wasn't going to do it for personal gain.
If everyone did their bit, then things would be great. So I wanted to do my bit.
I wanted to privately, secretively, give a hell of a lot of money to a good charity. So that I would be doing good and that I would know I wasn't doing it for status or glory.
I started small. I gave small amounts to some big-name charities, charities I could be fairly sure would be doing something right. That went on for about a year, with not much given in total - I was still building up confidence.
And then I heard about GiveWell. And I stopped giving. Entirely.
WHY??? I can't really give a reason. But something just didn't seem right to me. People who talked about GiveWell also tended to mention that the best policy was to give only to the charity listed at the top. And that didn't seem right either. I couldn't argue with the maths, but it went against what I'd been doing up until that point and something about that didn't seem right.
Also, I hadn't heard of GiveWell or any of the charities they listed. How could I trust any of them? And yet how could I give to anyone else if these charities were so much more effective? Big akrasia time.
It took a while to sink in. But when it did, I realised that my life so far had mostly been a waste of time. I'd earned some money, but I had no real goals or ambitions. And yet, why should I care if my life so far had been wasted? What I had done in the past was irrelevant to what I intended to do in the future. I knew what my goal was now and from that a whole lot became clear.
One thing mattered most of all. If I was to be truly virtuous, altruistic, world-changing, then I shouldn't deny myself status or make financial sacrifices; I should be completely indifferent to those things. And from that the plan became clear: the best way to save the world would be to persuade other people to do it for me. I'm still not entirely sure why they're not already doing it, but I will use the typical-mind prior and assume that for some at least, it's for the same reasons as me. They're confused. And so, to carry out my plan, I won't need to manipulate anyone into carrying out my wishes, but simply to help them carry out their own.
I could say a lot more and I will, but for now I just want to know. Who will be my ally?
Insufficiently Awesome
Apologies for the wasted time spent reading and replying to this post. Please disregard it.
I've been feeling non-awesome for a long time. I don't know if anyone else here feels the same way, but I'm going to assume that at least a few people do. I want to correct this horrible deficiency.
We already have the LW meetups in a lot of places, monthly in some places and weekly in others. I've gone to a few, and they're interesting and I get to meet a lot of very smart people (and get intimidated by them)... but mostly all we've done is talk and sometimes go and eat at a restaurant. I want more than this!
We already talk, we need an action-based meetup. I want to propose another kind of meetup, the Insufficiently Awesome meetup. It should aim to make us good at baseline things like fitness, social skills, strategy, and reflexes, and to make us very good at specialized awesome things like master-level chess/go/shogi, public speaking, various sports, dancing, making music, making art.
I think this meetup should be daily, though not everyone would want to go every day. Nonetheless, we should have something happening every day that isn't spent just talking. The goal shouldn't be just to be fit in different situations, but instead to become totally awesome.
Is there anyone else that feels the same? If so, what things do you think we need to learn for the baseline, and what things should we get very good at?
Link: Paul Graham on intelligence vs determination
Paul Graham of Y-Combinator on picking winners-at-life:
Paul Graham spills: Why some companies get his cash and others don't
What's most essential for a successful startup?
The founders. We've learned in the six years of doing Y Combinator to look at the founders--not the business ideas--because the earlier you invest, the more you're investing in the people. When Bill Gates was starting Microsoft, the idea that he had then involved a small-time microcomputer called the Altair. That didn't seem very promising, so you had to see that this 19-year-old kid was going places.
What do you look for?
Determination. When we started, we thought we were looking for smart people, but it turned out that intelligence was not as important as we expected. If you imagine someone with 100 percent determination and 100 percent intelligence, you can discard a lot of intelligence before they stop succeeding. But if you start discarding determination, you very quickly get an ineffectual and perpetual grad student.
Optimal Employment Open Thread
Related to: Optimal Employment, Best career models for doing research?, (Virtual) Employment Open Thread
In Optimal Employment Louie discussed some biases that lead people away from optimal employment, and gave working in Australia as an option for such employment. What are some other options?
Your optimal employment will depend on how much you care about a variety of things (free time, money, etc.) so when discussing options it might be helpful to say what you're trying to optimize for.
In addition to proposing options we could list resources that might be helpful for generating or implementing options.
A Possible Solution to Parfit's Hitchhiker
I had what appeared to me to be a bit of insight regarding trade between selfish agents. I disclose that I have not read TDT or any books on decision theory, so what I say may be blatantly incorrect. However, I judged that posting this here was of higher utility than waiting until I had read up on decision theory -- I have no intention of reading up on decision theory any time soon, because I have more important (to me) things to do. This is not meant to deter criticism of the post itself -- please tell me why I'm wrong if I am. The following paragraph is primarily an introduction.
When a rational agent predicts that he is interacting with another rational agent and that the other agent has motive for deceiving him (and both have a large amount of computing power), he will not use any emotional basis for "trust." Instead, he will see the other agent's commitments as truth claims which may be true or false depending on what action will optimize the other agent's utility function at the time the commitment is to be fulfilled. Agents which know something of each other's utility functions may bargain directly on such terms, even when each of their utility functions is largely (or completely) dominated by selfishness.
This leads to a solution to Parfit’s hitchhiker, allowing selfish agents to precommit to future trade. Give Ekman all of your clothes and state that you will buy them back from him when you arrive with an amount higher than the worth of your clothes to him but lower than the worth of your clothes to yourself. Furthermore, tell him that because you don’t have anything more on you, he can’t get any more money off of you than an amount infinitesimally smaller than your clothes are worth to you, and accurately tell him how much your clothes are worth to yourself (you must tell the truth here due to his microexpression-reading capability.) He should judge your words as truth, considering that you have told the truth. Of course, you lose regardless if the value of your clothes to yourself is less than the utility he loses by taking you to town.
Assumptions made regarding Parfit's hitchhiker: 1. Physical assault is judged to be of very low utility by both agents and so isn't a factor in the problem. 2. Trades in the present time may be executed without prompting an infinite cycle of "No, you give me X first."
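The buy-back scheme can be checked with toy numbers (all values here are hypothetical, chosen only to illustrate the proposed solution): the trade works exactly when the buy-back price lies between the clothes' worth to the driver and their worth to you, and the driver's net gain exceeds his cost of the ride.

```python
# Toy payoff check for the buy-back scheme (all numbers hypothetical).
value_to_you = 100.0     # what your clothes are worth to you
value_to_driver = 20.0   # what your clothes are worth to the driver
ride_cost = 30.0         # utility the driver loses by taking you to town
price = 60.0             # buy-back price: value_to_driver < price < value_to_you

driver_gain = price - value_to_driver - ride_cost  # driver's net from honoring
your_gain = value_to_you - price                   # you value the clothes above the price

print(driver_gain > 0, your_gain > 0)  # True True -> both prefer the trade
```

This also shows the failure case noted above: if `value_to_you - value_to_driver` is less than `ride_cost`, no price makes both gains positive, and you lose regardless.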
"Target audience" size for the Less Wrong sequences
[Note: My last thread was poorly worded in places and gave people the wrong impression that I was interested in talking about growing and shaping the Less Wrong community. I was really hoping to talk about something a bit different. Here's my revision with a completely redone methodology.]
How many people would invest their time to read the LW sequences if they were introduced to them?
In other words, I’m trying to estimate a theoretical upper bound on the number of individuals worldwide who have the ability, desire, and time to read intellectual material online, and who also have at least some predisposition toward thinking rationally.
I’m not trying to evangelize to unprepared “reach” candidates who maybe, possibly would like to read parts of the sequences. I’m just looking for the likely size of the core audience: people who already have the ability and the time, and who don’t need to jump through any major hoops to stomach the sequences (like deconverting from religion or radically changing their habits, such as suddenly devoting much more of their time to computers or reading).
The reason I’m investigating this is that I want to build more rationalists. I know some smart people whose opinions I respect (like Michael Vassar) who contend we shouldn’t spend much time trying to reach more people with the sequences. They think the majority of people smart enough to follow the sequences, who also do weird, eccentric things like “read in their spare time”, are already here. This is my second attempt to figure this out in the last couple of days, and unlike the rough 2M-person figure from my previous, hasty analysis, this more detailed analysis leaves me with a much lower worldwide target audience of only 17,000.
| Filter | Total Population | Filters Away (%) |
|---|---|---|
| Everyone | 6,880,000,000 | — |
| Speaks English + Internet access | 536,000,000 | 92.2% |
| Atheist/Agnostic | 40,000,000 | 92.55% |
| Believes in evolution (given Atheist/Agnostic) | 30,400,000 | 24% |
| “NT” (Rational) MBTI | 3,952,000 | 87% |
| IQ 130+ (SD 15; US/UK-Atheist-NT 108 IQ) | 284,544 | 92.8% |
| 30 min/day reading or on computers | 16,930 | 94.05% |
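The funnel arithmetic in the table can be sketched as a quick chained product. Note that the “Filters Away (%)” values are rounded, so multiplying them through lands near (not exactly on) the table’s bottom-line figure of 16,930:

```python
# Sketch of the filter-funnel arithmetic from the table above: each stage
# keeps (1 - filtered_away) of the previous stage's population.

def funnel(start_population, filtered_away_fractions):
    """Multiply a starting population through a chain of filters."""
    pop = start_population
    for filtered_away in filtered_away_fractions:
        pop *= 1 - filtered_away
    return pop

# "Filters Away (%)" column, top to bottom, as fractions.
filters_away = [0.922, 0.9255, 0.24, 0.87, 0.928, 0.9405]
final = funnel(6_880_000_000, filters_away)
print(f"{final:,.0f}")  # roughly 17,000
```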
Yep, that’s right. There are basically only a few thousand relatively bright people in the world who think reason makes sense and devote at least 2% of their day to arcane activities like “reading” and "using computers".
Considering we have 6,438 Less Wrong logins created and a daily readership of around 5,500 people between logged in and anonymous readers, I now actually find it believable that we may have already reached a very large fraction of all the people in the world who we could theoretically convince to read the sequences.
This actually matters because it makes me update in favor of different, more realistic growth strategies than buying AdWords or doing SEO to reach the small number of people left in our current target audience. Like translating the sequences into Chinese. Or creating an economic disaster that leaves most of the Western world unemployed (kidding!). Or waiting until Eliezer publishes his rationality book, so that we can reach the vast majority of our potential future audience: people who still read, but who don’t have time for anti-social, low-prestige activities like “reading blogs”.
For those of you who want to consider my methodology, here’s the rationale for each step that I used to disqualify potential sequence readers:
Doesn’t speak English or have Internet access: The sequences are English-only (for now) and online-only (for now). I don’t think there’s any contention here. This figure is the largest of the three I found, but all were around 500,000,000.
Not Atheist/Agnostic: Not being an atheist or agnostic is a huge warning sign. 93% of LW is atheist/agnostic for a reason. It’s probably a combination of 1) it’s hard to stomach reading the sequences if you’re a theist, and 2) you probably don’t use thinking to guide the formation of your beliefs anyway, so lessons in rationality are a complete waste of time for you. These people really need to have the healing power of Dawkins come into their hearts before we can help them. Also, note that even though it wasn’t mentioned in Yvain’s top-level survey post, the raw data showed that around 1/3rd of LW users who gave a reason for participating on LW cited “Atheism”.
Evolution denialist: If the mountains of evidence for the second most obvious conclusion in the world can’t move you to correct beliefs, you’re effectively saying you don’t think induction or science works at all. These people also need to go through Dawkins before we can help them.
Not “NT” on the Myers-Briggs typology: Lots of people complain about the MBTI. But in this case, I don’t think it matters that the MBTI isn’t cleaving reality perfectly at the joints or that these types aren’t natural categories. I realize Jung types aren’t made of quarks and aren’t fundamental. But I’ve also met lots of people at the Less Wrong meet-ups. There’s an even split of E/I and P/J in our community. But there is a uniform, overwhelmingly strong disposition towards N and T. And we shouldn’t be surprised by this at all. People who are S instead of N take things at face value and resist using induction or intuition to extend their reasoning. These people can guess the teacher’s password, but they’re not doing the same thing that you call “thinking”. And if you’re not a T (Thinking), then that means you’re F (Feeling). And if you’re using feelings to choose beliefs in lieu of thinking, there’s nothing we can do for you: you’re permanently disqualified from enjoying the blessings of rationality. Note: I looked hard to see if I could find data suggesting that being NT and being atheist correlate, because I didn’t want to “double subtract” the same people twice. It turns out several studies have looked for this correlation with thousands of participants... and it doesn’t exist.
Lower than IQ 130: Another non-natural category that people like to argue about. Plus, this feels super elitist, right? Excluding people just because they’re “not smart enough”. But it’s really not asking that much when you consider that IQ 100 means you’re buying lottery tickets, installing malware on your computer, and spending most of your free time watching TV. Those aren’t the “stupid people” who are way down on the other side of the Gaussian; that’s what a normal 90-110 IQ looks like. Real stupid is so non-functional that you never even see it... probably because you don’t hang out in prisons, asylums, and homeless shelters. Really. And 130 isn’t all that “special” once you find yourself being a white (+6 IQ) college graduate (+5 IQ) atheist (+4 IQ) who’s “NT” on Myers-Briggs (+5 IQ). In Yvain’s survey, the average IQ on LW was 145.88. And only 4 out of 68 LWers reported IQs below 130... the lowest being 120. I find it inconceivable that EVERYONE lied on this survey. I also find it highly unlikely that only the top 1/2 reported. But even if everyone who didn’t report was as low as the lowest IQ reported by anyone on Less Wrong, the average IQ would still be over 130. Note: I took the IQ boost from being atheist and being MBTI-“N” into account when figuring out the proportion of 130+ IQ conditional on the other traits already being factored in.
Having no free time: So you speak English, you don’t hate science, you don’t hate reason, and you’re somewhat bright. Seems like you’re a natural part of our target audience, right? Nope... wrong! There’s at least one more big hurdle: having some free time. Most people who are already awesome enough to have passed through all these filters are winning so hard at life (by American standards of success) that they are way too busy to do boring, anti-social, low-prestige tasks like reading online forums in their spare time (which they don’t have much of). In fact, it’s kind of like how knowing a bit about biases can hurt you and make you even more biased. Being a bit rational can skyrocket you to such a high level of narrowly-defined American-style “success” that you become a constantly-busy, middle-class wage-slave who zaps away all your free time in exchange for a mortgage and a car payment. Nice job, buddy. Thanks for increasing my GDP epsilon%... now you are left with whatever rationality you started out with, minus the effects of your biases dragging you back down to average over the ensuing years. The only ways I see out of this dilemma are 1) being in a relatively unstructured period of your life (i.e., unemployed, a college student, semi-retired, etc.), or 2) having a completely broken motivation system which keeps you in a perpetually unstructured life against your will (akrasia), or perhaps 3) being a full-time computer professional who can multi-task and pass off reading online during the work day as actually working. That said, if you’re unlucky enough to have a full-time job, or you’re married with children, you’ve already fallen out of the population of people who read or use computers at least 30 minutes/day. This is because having a spouse cuts your time spent reading and using computers in half. Having children cuts reading in half and reduces computer usage by 1/3rd. And having a job similarly cuts both reading and computer usage in half.
Unfortunately, most people suffer from several of these afflictions. I can’t find data conditional on being an IQ 130+ atheist, but my educated guess is that employment is probably much better than average due to being so much more capable, and I’d speculate that relationships and children are about the same or perhaps a touch lower. All things being equal, I think applying statistics from the general US civilian population and extrapolating is an acceptable approximation here, even if it likely overestimates the number of people who truly have 30 minutes of free time per day (the average amount of time needed just to read LW, according to Yvain’s survey). 83% of people are employed full-time, so they’re gone. Of the remaining 17% who are unemployed, 10% of the men and 50% of the women are married with children, so that’s another 5.1% off the top level, leaving only 11.9% of people. Of that 11.9%, the AVERAGE person spends 1 hour on reading and “playing games and computer use for leisure”. Let’s be optimistic and assume they somehow devote half of their entire leisure budget to reading Less Wrong; that still only leaves 5.95%. Note: these numbers are a bit rough. If someone wants to go through the micro-data files of the US Time Use Survey for me and count the exact number of people who do more than 1 hour of “reading” and “playing games and computer use for leisure”, I welcome the help.
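The free-time arithmetic above can be checked in a few lines. One assumption to flag: turning “10% of men and 50% of women” into a single 30% rate assumes an equal gender split, which is what makes the 5.1-point subtraction come out:

```python
# Hedged sketch of the free-time arithmetic above, using the post's US
# estimates (not fresh data). The 0.30 married-with-children rate averages
# the stated 10% of men and 50% of women, assuming a 50/50 gender balance.

not_employed_full_time = 1 - 0.83                        # 17% of people
married_with_children = not_employed_full_time * 0.30    # 5.1 points removed
remaining = not_employed_full_time - married_with_children  # 11.9%
# Optimistically assume half of the ~1 hour/day leisure budget goes to LW.
has_enough_free_time = remaining * 0.5                   # 5.95%
print(f"{has_enough_free_time:.2%}")
```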
Anyone have thoughtful feedback on refinements or additional filters I could add to this? Do you know of better sources of statistics for any of the things I cite? And most importantly, do you have new, creative outreach strategies we could use now that we know this?
Article on quantified lifelogging (Slate.com)
Data for a Better Planet focuses on The Quantified Self, and offers an overview of the state of the art in detailed, quantitative personal tracking.
This seems related to an LW interest cluster.