Article on IQ: The Inappropriately Excluded
I saw an article on high IQ people being excluded from elite professions. Because the site seemed to have a particular agenda related to the article, I wanted to check here for other independent supporting evidence for the claim.
Their fundamental claim seems to be that P(elite profession|IQ) peaks at 133 and decreases thereafter, dropping to 3% of peak at 150. If true, I'd find that pretty shocking.
They present this diminishing probability of "success" at the high tail of the IQ distribution as a known effect. Anyone got other studies on this?
By dividing the distribution function of the elite professions' IQ by that of the general population, we can calculate the relative probability that a person of any given IQ will enter and remain in an intellectually elite profession. We find that the probability increases to about 133 and then begins to fall. By 140 it has fallen by about 1/3 and by 150 it has fallen by about 97%. In other words, for some reason, the 140s are really tough on one's prospects for joining an intellectually elite profession. It seems that people with IQs over 140 are being systematically, and likely inappropriately, excluded.
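For intuition, the shape of the claim is easy to reproduce with a toy calculation: if elite-profession IQs are (hypothetically) normal with a higher mean but a narrower spread than the general population's, the density ratio rises and then collapses. Here's a minimal sketch; the parameters are entirely made up for illustration, not the article's data, though they roughly reproduce the claimed shape:

```python
# Sketch: ratio of an assumed elite-profession IQ distribution to the
# general population's. Both parameter choices below are illustrative
# assumptions, not the article's actual data.
from scipy.stats import norm

general = norm(100, 15)   # general population: mean 100, SD 15
elite = norm(126, 7)      # hypothetical elite professions: higher mean, narrower spread

def relative_prob(iq):
    """Relative probability that a person of a given IQ is in an elite profession."""
    return elite.pdf(iq) / general.pdf(iq)

peak = max(range(100, 180), key=relative_prob)
for iq in (peak, 140, 150):
    print(iq, relative_prob(iq) / relative_prob(peak))
# Peaks near 133; falls to ~2/3 of peak at 140 and to ~1/10 at 150.
```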
The map of the methods of optimisation (types of intelligence)

Willpower Schedule
TL;DR: your level of willpower depends on how much willpower you expect to need (hypothesis)
Time start: 21:44:55 (this is my third exercise in speed writing a LW post)
I.
There is a lot of controversy about how our level of willpower is affected by various factors, including doing "exhausting" tasks beforehand, as well as being told that willpower is a resource that depletes easily (or that it doesn't), etc.
(sorry, I can't go look for references - that would break the speedwriting exercise!)
I am not going to repeat the discussions that already cover those topics; however, I have a new tentative model which (I think) fits the existing data very well, is easy to test, and supersedes all previous models that I have seen.
II.
The idea is very simple, but before I explain it, let me give a similar example from a different aspect of our lives. The example is going to be concerned with, uh, poo.
Have you ever noticed that (if you have a sufficiently regular lifestyle), conveniently you always feel that you need to go to the toilet at times when it's possible to do so? Like for example, how often do you need to go when you are on a bus, versus at home or work?
The function of your bowels is regulated by subconscious signals about your situation - e.g. if you are stressed, you might become constipated. But it is not only that - your bowels also respond to your routines and to what you are planning to do, not just to the things that are already affecting you.
Have you ever had the experience of a background thought popping up in your mind that you might need to go within the next few hours, but the time was not convenient, so you told that thought to hold it a little bit more? And then it did just that?
III.
The example from the previous section, though possibly quite POOrly chosen (sorry, I couldn't resist), shows something important.
Our subconscious reactions and "settings" of our bodies can interact with our conscious plans in a "smart" way. That is, they do not have to wait to see the effects of what you are doing, to adjust to it - they can pull information from your conscious plans and adjust *before*.
And this is, more or less, the insight that I have added to my current working theory of willpower. It is not very complicated, but perhaps non-obvious. Sufficiently non-obvious that I don't think anyone has suggested it before, even after seeing experimental results that match this excellently.
IV.
To be more accurate, I claim that how much willpower you will have depends on several important factors, such as your energy and mood, but it also depends on how much willpower you expect to need.
For example, if you plan to have a "rest day" and not do any serious work, you might find that you are much less *able* to do work on that day than usual.
It's easy enough to test - so instead of arguing this theoretically, please do just that - give it a test. And make sure to record your levels of willpower several times a day for some time - you'll get some useful data!
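If you want a dead-simple way to do the recording part, here is a minimal sketch; the file name, the 1-10 rating scale, and the fields are all arbitrary choices:

```python
# Minimal self-tracking sketch: append timestamped willpower ratings
# (1-10, self-reported) to a CSV for later analysis.
import csv
import datetime

def log_willpower(rating, expected_need):
    """Record current willpower and how much you expected to need today."""
    with open("willpower_log.csv", "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(), rating, expected_need])

log_willpower(rating=6, expected_need="rest day")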
Time end: 22:00:53. Statistics: 534 words, 2924 characters, 15.97 minutes, 33.4 wpm, 183.1 cpm
Corrigibility through stratified indifference
A putative new idea for AI control; index here.
Corrigibility through indifference has a few problems. One of them is that the AI is indifferent between the world in which humans change its utility to v, and the world in which humans try to change its utility, but fail.
Now the try-but-fail world is going to be somewhat odd - humans will be reacting by trying to change the utility again, trying to shut the AI down, panicking that a tiny probability event has happened, and so on.
Seeking Optimization of New Website "New Atheist Survival Kit," a go-to site for newly-made atheists
I've put together a website, "New Atheist Survival Kit" at atheistkit.wordpress.com
The idea is to help new atheists come to terms with their change in belief, and also invite them to become more than atheists: rationalists.
And if it helps theists become atheists, too, and helps old atheists become rationalists, more the better.
The bare bones of it are all in place now. Once a few people have gone over it - for editing, and for advice about what to include, leave out, improve, or re-organize - I'll ask a bunch of atheist and rationalist communities to write up their own blurbs for us to include in a list of communities that we'll point people to in the "Atheist Communities" or "Thinker's Communities" sections on the main menu.
It includes my rough draft attempt to condense the Metaethics sequence into a few thousand words and make it stylistically and conceptually accessible to a mass audience, which I could especially use some help with.
So, for now, I'm here to ask that anyone interested check it out, and message me any improvements they think worth making, from grammar and spelling all the way up to what content to include, or how to present things.
Thanks to all for any help.
Help with Bayesian priors
I posted before about an open source decision-making web site I am working on called WikiLogic. The site has a 2-minute explanatory animation if you are interested. I won't repeat myself, but the tl;dr is that it will follow the Wikipedia model of allowing everyone to collaborate on a giant connected database of arguments, where previously established claims can be used as supporting evidence for new claims.
The raw deduction element of it works fine and would be great in a perfect world where such a thing as absolute truths existed; in reality, however, we normally have to deal with claims that are merely the most probable. My program allows opposing claims to be connected and then evidence to be gathered for each. The evidence produces a probability of each claim being correct, and whichever is highest gets marked as the best answer. Principles such as Occam's Razor are applied automatically: a long chain of claims used as evidence will be less likely, since each claim has its own likelihood that dilutes the overall strength.
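As a minimal sketch of that dilution effect (assuming, simplistically, that the claims in a chain are independent and a chain is only as strong as the product of its links):

```python
# Sketch: a long chain of individually plausible claims is less likely
# than a short one, because the probabilities multiply. Independence of
# the claims is a simplifying assumption.
from math import prod

def chain_probability(claim_probs):
    """Probability that every claim in an evidence chain holds."""
    return prod(claim_probs)

short_chain = [0.9, 0.9]   # two fairly solid claims
long_chain = [0.9] * 6     # six equally solid claims

print(chain_probability(short_chain))  # 0.81
print(chain_probability(long_chain))   # ~0.53 - Occam's Razor emerges automatically
```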
However, my only qualification in this area is my passion and I am hitting a wall with some basic questions. I am not sure if this is the correct place to get help with these. If not, please direct me somewhere else and I will remove the post.
The arbitrarily chosen example claim I am working with is whether “Alexander the Great existed”. This has the useful properties of 1: an expected outcome (that he existed - although, perhaps my problem is that this is not the case!) and 2: it relies heavily on probability as there is little solid evidence.
One popular claim is that coins were minted with his face on them. I want to use Bayes to find how likely a coin bearing someone's face is given that the person existed. As I understand it, there should be 4 combinations (a toy calculation follows the list):
- Existed; had a coin minted
- Existed; did not have a coin minted
- Did not exist; had a coin minted
- Did not exist; did not have a coin minted
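Here's the toy calculation, purely to show the mechanics; the prior and both likelihoods are placeholder numbers, and picking them in a principled way is exactly the problem described below:

```python
# Toy Bayes update for P(existed | coin minted). All numbers are
# placeholders; choosing them in a principled way is the hard part.
p_existed = 0.5                   # prior: P(existed)
p_coin_given_existed = 0.1        # P(coin minted | existed)
p_coin_given_not_existed = 0.01   # P(coin minted | did not exist)

p_coin = (p_coin_given_existed * p_existed
          + p_coin_given_not_existed * (1 - p_existed))

p_existed_given_coin = p_coin_given_existed * p_existed / p_coin
print(p_existed_given_coin)       # ~0.91 with these made-up numbers
```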
The first issue is that there are infinitely many people who never existed and did not have a coin made. If I narrow it to historic figures who turned out not to exist and did not have a coin made, it becomes possible, but it also becomes subjective as to whether someone actually thought they existed. For example, did people believe the Minotaur existed?
Perhaps I should choose another filter instead of "historic figure", like "humans that existed". But picking and choosing the category is again so subjective. Someone may also argue that inequality between men and women back then was so great that the data should only look at men, as a woman's chance of being portrayed on a coin was skewed in a way that isn't applicable to men.
I hope I have successfully communicated the problem I am grappling with and what I want to use it for. If not, please ask for clarifications. A friend in academia suggested that this touches on an unsettled problem with Bayesian priors. If that is the case, are there any suggested resources for a novice with limited free time to start exploring the issue? References to books or other online resources, or even somewhere else I should be posting this kind of question, would all be gratefully received. Not to mention a direct answer in the comments!
Open Thread, Aug. 8 - Aug 14. 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Open Thread, Aug. 1 - Aug 7. 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Street Epistemology - letting people name their thinking errors
https://www.youtube.com/watch?v=Exmjlc4PfEQ
Anthony Magnabosco does what he calls Street Epistemology, usually applying it to supernatural (usually religious) beliefs.
The great thing about his method (and his manner - the guy's super personable) is that he avoids the social structure of a debate, of two people arguing, of a zero-sum game where one person wins at the other's loss.
I've struggled with trying to figure out how to let people save face in disputes (when they're making big, awful mistakes) - even considering deliberately including minor errors (ones that don't affect the main point) in my own arguments, so that they could point them out and we could both admit we were wrong (in their case, about things which do affect the main point) and move on.
But this guy's technique manages to invite people to correct their own errors (people are SOOOO much more rational when they're not defensive) and they DO it. No awkwardness, no discomfort, and people pointing out the flaws in their own arguments, and then THANKING him for the talk afterwards and referring him to their friends to talk. Even though they just admitted that their cherished beliefs might not deserve the certainty they've been giving them.
This is applied to religion in this video, but this seems to me to be a generally useful method when you confront someone making an error in their thinking. Are you forcing people to swallow their pride a little (over and over) when they talk with you? Get that out, and watch how much more open people can be.
Open thread, Oct. 17 - Oct. 23, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Open thread, Oct. 10 - Oct. 16, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
The map of natural global catastrophic risks
There are many natural global risks. The greatest of these known risks are asteroid impacts and supervolcanoes.
Supervolcanoes seem to pose the highest risk: we sit on an ocean of molten iron, oversaturated with dissolved gases, just 3,000 km below the surface, and its energy slowly moves up via hot spots. Many past extinctions are also connected with large supervolcanic eruptions.
Impacts also pose a significant risk. But if we project the past rate of large impact-driven extinctions into the future, we see that they occur only once in several million years; spread over a single century, that rate gives an impact probability on the order of 1 in 100,000 (e.g. 100 years ÷ 10,000,000 years = 10^-5). That is negligibly small compared with the risks of AI, nanotech, biotech, etc.
The main natural risk is a meta-risk: are we able to correctly estimate the rates of natural risks and project them into the future? And could we accidentally unleash a natural catastrophe which is long overdue?
There are several reasons for possible underestimation, which are listed in the right column of the map.
1. Anthropic shadow, i.e. survival bias. This is a well-established idea from Bostrom; the following four ideas are mostly my own conclusions from it.
2. We should also expect to find ourselves at the end of a period of stability for any important aspect of our environment (atmosphere, solar stability, crustal stability, vacuum stability). This holds if the Rare Earth hypothesis is true and our conditions are very rare in the universe.
3. From (2) it follows that our environment may be very fragile to human interventions (think of global warming). Its fragility is like that of an overinflated balloon poked by a small needle.
4. Human intelligence was the best adaptive instrument during a period of intense climate change, and it evolved quickly in a constantly changing environment. So it should not be surprising that we find ourselves in a period of instability (think of the Toba eruption, the Clovis comet, the Younger Dryas, the ice ages) and in an unstable environment, since such instability helps general intelligence to evolve.
5. Periods of change are themselves marks of the end of stability periods for many processes, and are precursors of larger catastrophes. (For example, intermittent ice ages may precede a Snowball Earth, and smaller impacts with comet debris may precede an impact with larger remnants of the main body.)
In my opinion, each of these five points may raise the probability of natural risks by an order of magnitude; combined, that would mean several orders of magnitude, which seems too high and is probably "catastrophism bias".
(More about this is in my article "Why anthropic principle stopped to defend us", which needs substantial revision.)
In conclusion, I think that when studying natural risks, a key hypothesis we should be checking is that we live in a non-typical period in a very fragile environment.
For example, some scientists think that 30,000 years ago a large Centaur-class comet entered the inner Solar System and split into pieces (including comet Encke, the Taurid meteor showers, and the Tunguska body), so we live in a period of bombardment with 100 times the average intensity. Others believe that methane hydrates are very fragile and a small amount of human-caused warming could result in a dangerous positive feedback.
I tried to list all known natural risks (I am interested in new suggestions). I divided them into two classes: proven and speculative. Most speculative risks are probably false.
The most probable risks in the map are marked red. My crazy ideas are marked green. Some ideas come from obscure Russian literature - for example, the idea that hydrocarbons could be created naturally inside the Earth (like abiogenic oil) and that large pockets of them could accumulate in the mantle. Some of these could be natural explosives, like toluene, and they could be the cause of kimberlite explosions. http://www.geokniga.org/books/6908 While kimberlite explosions are a well-known fact, and their energy is like the impact of a kilometer-sized asteroid, I have never read about the contemporary risk of such explosions.
The pdf of the map is here: http://immortality-roadmap.com/naturalrisks11.pdf

Isomorphic agents with different preferences: any suggestions?
In order to better understand how AI might succeed and fail at learning knowledge, I'll be trying to construct models of limited agents (with biases, knowledge, and preferences) that display identical behaviour in a wide range of circumstances (but not all). This means their preferences cannot be deduced merely/easily from observations.
Does anyone have any suggestions for possible agent models to use in this project?
Stupid Questions September 2016
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.
Not all theories of consciousness are created equal: a reply to Robert Lawrence Kuhn's recent article in Skeptic Magazine [Link]
I found this article on the Brain Preservation Foundation's blog. It covers a lot of common theories of consciousness and shows how they kind of miss the point when it comes to determining whether we should or should not upload our brains, given the opportunity.
Hence I see no reason to agree with Kuhn’s pessimistic conclusions about uploading even assuming his eccentric taxonomy of theories of consciousness is correct. What I want to focus on in the remainder of this blog is challenging the assumption that the best approach to consciousness is tabulating lists of possible theories of consciousness and assuming they each deserve equal consideration (much like the recent trend in covering politics to give equal time to each position regardless of any relevant empirical considerations). Many of the theories of consciousness on Kuhn’s list, while reasonable in the past, are now known to be false based on our best current understanding of neuroscience and physics (specifically, I am referring to theories that require mental causation or mental substances). Among the remaining theories, some of them are much more plausible than others.
September 2016 Media Thread
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
Rules:
- Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
- If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
- Please post only under one of the already created subthreads, and never directly under the parent media thread.
- Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
- Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.
Causal graphs and counterfactuals
Problem solved: I found what I was looking for in "An Axiomatic Characterization of Causal Counterfactuals", thanks to Evan Lloyd.
Basically: make every endogenous variable a deterministic function of the exogenous variables and of the other endogenous variables, and push all the stochasticity into the exogenous variables.
Old post:
A problem that's come up with my definitions of stratification.
Consider a very simple causal graph: A → B.
In this setting, A and B are both booleans, and A=B with 75% probability (independently of whether A=0 or A=1).
I now want to compute the counterfactual: suppose I assume that B=0 when A=0. What would happen if A=1 instead?
The problem is that P(B|A) seems insufficient to solve this. Let's model the process that outputs B as a probabilistic mix of functions that take the value of A and output that of B. There are four natural functions here:
- f0(x) = 0
- f1(x) = 1
- f2(x) = x
- f3(x) = 1-x
Then one way of modelling the causal graph is as a mix 0.75f2 + 0.25f3. In that case, knowing that B=0 when A=0 implies that P(f2)=1, so if A=1, we know that B=1.
But we could instead model the causal graph as 0.5f2 + 0.25f1 + 0.25f0. In that case, knowing that B=0 when A=0 implies that P(f2)=2/3 and P(f0)=1/3. So if A=1, B=1 with probability 2/3 and B=0 with probability 1/3.
And we can design the node B, physically, to be one or the other of the two distributions over functions, or anything in between (the general formula is (0.5+x)f2 + x·f3 + (0.25-x)f1 + (0.25-x)f0 for 0 ≤ x ≤ 0.25). But it seems that the causal graph does not capture that.
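For concreteness, here is a small enumeration of the two models above, computing the counterfactual directly (this is just the arithmetic from this post, not Pearl's general machinery):

```python
# Two functional models of the same causal graph A -> B with P(B=A) = 0.75.
# Conditioning on "B=0 when A=0" and then setting A=1 gives different
# counterfactual answers, showing that P(B|A) underdetermines them.
f0 = lambda x: 0
f1 = lambda x: 1
f2 = lambda x: x
f3 = lambda x: 1 - x

def counterfactual(model):
    """P(B=1 | A=1) given the observation f(0) == 0."""
    consistent = {f: p for f, p in model.items() if f(0) == 0}
    total = sum(consistent.values())
    return sum(p for f, p in consistent.items() if f(1) == 1) / total

model_a = {f2: 0.75, f3: 0.25}            # mix of "copy" and "flip"
model_b = {f2: 0.5, f1: 0.25, f0: 0.25}   # mix of "copy" and constants

print(counterfactual(model_a))  # 1.0
print(counterfactual(model_b))  # ~0.667
```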
Owain Evans has said that Pearl has papers covering these kinds of situations, but I haven't been able to find them. Does anyone know any publications on the subject?
Opportunities and Obstacles for Life on Proxima b
This is from the foundation that put out the announcement, Pale Red Dot.
There are a lot of difficulties, but the best point put forward is that if an earthlike planet is circling the closest star, then such planets should be relatively common.
https://palereddot.org/opportunities-and-obstacles-for-life-on-proxima-b/
The Breakthrough Starshot meeting just wrapped up; this system is still a good target, but not the only one.
http://www.centauri-dreams.org/?p=36265
They also did some modeling of the dust abrasion on the wafer probes; most won't make it.
Open Thread, Aug. 22 - 28, 2016
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Inverse cryonics: one weird trick to persuade anyone to sign up for cryonics today!
OK, slight disclaimer, this is a bit of a joke article inspired by me watching a few recent videos and news reports about cryonics. Nevertheless, there is a serious side to it.
Many people claim that it is irrational to sign up for cryonics, and getting into the nitty gritty with them about how likely it is to work seems to turn into a series of small skirmishes with no particular "win condition". Opponents will not say,
"OK, I will value my life at $X and if you can convince me that (cryonics success probability)*$X is greater than the $1/day fee, I will concede the argument".
Rather, they will retreat to a series of ever harder to falsify positions, usually ending up at a position which is so vague that it is basically pure mood affiliation and acts as a way to stop the conversation rather than as a true objection. I have seen it many times with friends.
So, I propose that before you debate someone about cryonics, you should first try to sign them up for inverse cryonics. Inverse cryonics is a very simple, fully scientifically tested procedure that anyone can sign up for today, as long as they have a reasonably well-off benefactor to take the "other side" of the bet. Let me explain.
The inverse cryonics patient takes a simple six-chambered revolver with one bullet loaded, spins the cylinder, and shoots themselves once in the head [1]. If the inverse cryonaut is unlucky enough to land on the chamber containing the real bullet, they will blow their brains out and die instantly and permanently. However, if they are lucky, the benefactor must pay them $1 per day for the rest of their lives.
Obviously you can vary the risks, rewards, and timing of inverse cryonics. The death event could be postponed for 20 years, the risk could be cranked up or down, and the reward could be increased or decreased or paid out as a future discounted lump sum. The key is that signing up for inverse cryonics should be mathematically identical to not signing up for cryonics.
As a baseline, cryonics seems to cost ~$1/day for the rest of your life in order to avoid a ~1/10 chance of dying [2]. Most people [3] would not play ~10-chamber Russian roulette for a $1/day stipend, even with delayed death or an instant ~$50k payout.
In fact, if
- you believe that cryonics costs ~$1/day for the rest of your life in order to avoid a ~1/10 chance of dying [4], and
- you are offered 11-chamber Russian roulette for that same ~$1/day as a stipend, or even an instant $50k payout,
then as a rational agent you shouldn't refuse both offers (a toy expected-value calculation follows).
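For illustration, here's a back-of-the-envelope version of that consistency check; the value-of-life figure and remaining lifespan are arbitrary assumptions, not part of the argument:

```python
# Consistency check: refusing cryonics at $1/day while also refusing the
# inverse-cryonics bet. All the dollar figures are knobs to turn.
value_of_life = 5_000_000   # assumed dollar value of remaining life
daily_cost = 1              # cryonics fee, $/day
days_remaining = 40 * 365   # assumed years left to pay/collect

p_cryonics_works = 0.10     # the ~1/10 figure from the text

# Signing up for cryonics: pay the fee, avoid a 1/10 chance of losing everything.
ev_cryonics = p_cryonics_works * value_of_life - daily_cost * days_remaining

# Inverse cryonics: ~1/11 chance of dying now, else collect $1/day.
p_lose = 1 / 11
ev_inverse = (1 - p_lose) * daily_cost * days_remaining - p_lose * value_of_life

print(ev_cryonics)   # positive: cryonics is worth it under these assumptions
print(ev_inverse)    # negative: the bet is terrible under the same assumptions
```

Under any consistent valuation of your own life, the two decisions have to move together; rejecting both is the inconsistency the post is pointing at.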
Of course, I'm sure opponents of cryonics won't bite this particular bullet, but at the very least it may provide an extra intuition pump to move people away from objecting to cryonics because it's the "risky" option.
Comments and criticisms welcome.

1. Depending on the specific deal, more than six chambers could be used, or several identical guns could be used where only one chamber of one gun contains a real bullet, allowing one to achieve a reasonable range of probabilities of "losing" at inverse cryonics, from 1 in 6 down to perhaps 1 in 60 with ten guns.
2. And pushing the probability of cryonics working down much further seems to be very hard to defend scientifically, not that people haven't tried. It becomes especially hard when you assume that the cryonics organizations stick around for ~40 years, and society sticks around without major disruptions in order for a young potential cryonaut who signs up today to actually pay their life insurance fees every day until they die.
3. Most intelligent, sane, relatively well-off people in the developed world, i.e. the kind of people who reject cryonics.
4. And you believe that the life you miss out on in the future will be as good, or better than, the life you are about to live from today until your natural death at a fixed age of, say, 75.
Darknet Mining for Proactive Cybersecurity Threat Intelligence
They are using machine learning to comb the darknets, capturing about 300 threats a week.
They report about 90% accuracy at recognizing hacking applications and backdoors offered for sale, and about 80% accuracy at identifying vulnerabilities discussed on hacker forums.
"These threat warnings include information on newly developed malware and exploits that have not yet been deployed in a cyber-attack"
Motivated Thinking
I'm playing around with an article on Motivated Cognition for general consumption.
I think it's one of the most important things to teach someone about rationality (any other suggestions? Confirmation bias, placebo, pareidolia, and the odds of coincidences come to mind...)
So, I've taken the five kinds of motivated cognition I know of
(Motivated skepticism)
(Motivated stopping)
(Motivated neutrality)
(Motivated credulity)
(Motivated continuation)
added a counterpart to "neutrality," and then renamed neutrality.
The end result being six kinds of motivated cognition - three pairs of two kinds each, which are opposites of each other. Also, each pair has one kind that begins with an S and one that begins with a C, which is good for mnemonic purposes.
So, I've got
Stopping and Continuation - Controls WHICH arguments you put in front of yourself (Do you continue because you haven't found what supports you yet, or do you stop because you have?)
Self-deprecation and Conceit - these control WHETHER you judge an argument in front of you (Do you refuse to judge clear arguments that oppose your side ("Who am I to judge?"), or do you judge arguments you have no capacity to understand (the probability of abiogenesis, for example) because it lets you support your side?)
Skepticism and Credulity - Controls HOW you judge arguments (Do you demand higher evidence for ideas you don't like, and less for ideas you do? Do you scrutinize ideas you don't like more than ideas you do? Do you ask if the evidence forces you to accept, or if it allows you to accept an idea?)
I'm thinking of introducing them in that order, too, with the "Which/Whether/How you judge" abstraction.
Anybody see better abstractions, better explanations, better mnemonic techniques? Any advice of any kind on how to teach this effectively to people? Other fundamentals to rationality? (Maybe the beliefs as probabilities idea?)
[Effective Altruism] Promoting Effective Giving at Conferences via Speed Giving Games
Conferences provide a high-impact opportunity to promote effective giving. This is the broad take-away from an experiment in promoting effective giving at two conferences in recent months: the Unitarian Universalist (UU) General Assembly and the Secular Student Alliance (SSA) National Convention. This was an experiment run by Intentional Insights (InIn), an EA meta-charity devoted to promoting effective giving and rational thinking to a broad audience, with financial sponsorship from The Life You Can Save.
The outcomes, as detailed below, suggest that conferences can offer cost-effective opportunities to communicate effective giving messages to important stakeholders. An especially promising low-threshold strategy is to use Speed Giving Games (SGG), since recent findings show Giving Games are an excellent means of promoting effective giving. This encourages participants to self-organize full-length Giving Games (GG) when they return home.
This article aims both to describe our experiences at UU and SSA and to serve as a guide to others who want to adopt these approaches to promote effective giving via conferences. The article is thus divided into several parts:
- Evaluating the demographic group you want to target;
- Evaluating the potential impact and cost of the conference;
- Steps to prepare for the conference;
- Outcomes of the conference;
- Assessment of the experiment and conclusions.
Picking the Right Conference: Consider Demographics
Before deciding on a conference, make sure you target the right demographic. We at InIn, in agreement with The Life You Can Save, picked the two conferences mentioned above for a couple of reasons.
First, the UU and SSA both unite people who we thought were well-suited for promoting effective giving. Members of these organizations already put a considerable value both on improving the world, and on using reason and evidence to inform their actions in doing so.
Our work at SSA is part of our broader effort, in collaboration with The Life You Can Save and the Local Effective Altruist Network, to promote effective giving to secular, humanist, and skeptic groups. We do so by holding GGs targeted to their needs, appearing on podcasts, writing articles about effective giving in secular venues, and collaborating with a number of national and international common-interest organizations. Besides the SSA, these include the Foundation Beyond Belief, United Coalition of Reason, American Humanist Association, International Humanist and Ethical Union, and others.
The UU religious denomination is a more experimental focus group. It builds upon the success of the above-mentioned project, and expands to promote effective giving to people who are still somewhat reason-oriented, even if reason is less central for them. Yet UU members are strongly committed to action to improve the world, and generally show more active efforts on the social justice and civic engagement front than members of the secular, humanist, and skeptic movement. Thus, we at InIn and The Life You Can Save decided to target them as well.
Second, picking the right demographic also means having at least some people who are familiar with the language, needs, desires, and passions of the niche group you are targeting, and have some connections within it. Knowing the interests and language of the demographics is really valuable for understanding how to frame the concept of effective giving to those demographics. Having people with pre-existing connections and networks within that demographic allows you to approach them as an insider, giving you instant credibility and much more leverage when introducing the audience to an unfamiliar concept.
For the SSA, we had it easy, due to our extensive connections in the secular/skeptic/humanist movement. The SSA Executive Director is on the Intentional Insights Advisory Board, our members regularly appear on podcasts and write for venues within that movement, and many of our members attend local humanist/secular/skeptic groups.
We had fewer connections in UU, but the ones we did have were sufficient. Our two co-founders and some of our members attend UU churches. Intentional Insights creates curriculum content for the UU movement, appears on relevant podcasts, and writes for major venues. This proved to be more than enough familiarity with the language and interests.
Picking the Right Conference: Consider Impact and Costs
After choosing the right demographic, consider and balance the potential impact and effectiveness of each conference.
Number and influence of attendees:
Both the UU and the secular/skeptic/humanist movements hold a number of conferences. Fortunately, a single annual conference unites the whole UU movement, with over 3,500 UU leaders from around the world coming. Moreover, the people who come to the UU General Assembly constitute the most active members of the movement – Ministers, Religious Education Directors, church staff, lay leaders and prominent writers – in other words, those stakeholders most capable of spreading effective giving ideas into the UU community.
The SSA event had far fewer people, with just over 200 attendees. However, many movers and shakers from the secular/skeptic/humanist movement attend the conference. This makes it attractive from the perspective of spreading effective giving ideas in the movement.
Impact of your role at conference:
First, most conferences have tabling opportunities for exhibitors, and as an exhibitor, you can hold SGGs at your table. We did that both at the SSA and UU, and I doubt we would have gone to either without that opportunity, since we found it to be very effective at promoting effective giving.
Caption: Intentional Insights table at the Secular Student Alliance conference (courtesy of InIn)
Second, if you have an opportunity to be a speaker and can promote effective giving at your talk, this raises the impact you can make at a conference. That said, unless you can focus your talk on effective giving or at least give out relevant materials and sign-up sheets, simply mentioning effective giving may not be that impactful. It all depends on how you go about it, and whether the concept is relevant to your talk and memorable to the audience. I was a speaker at the SSA, and worked effective giving into my talk without focusing on it, as well as distributed relevant materials about effective giving.
Third, consider whether you have specific networking opportunities at a conference that are helpful for promoting effective giving. For instance, this might involve having small-group or one-on-one meetings with influencers where you can safely promote effective giving without seeming pushy. At both the SSA and UU, we had both pre-scheduled and spontaneous meetings with notable people, which allowed us to promote effective giving concepts.
Costs: One of the fundamental aspects of effective giving is cost-effectiveness, and it is important to apply this metric to marketing effective giving, as well.
For the experiment with promoting effective giving at conferences, we at InIn decided to collaborate with The Life You Can Save on the most low-cost opportunities. Thus, one of the reasons we chose the UU and SSA conventions is that they both happened in Columbus, where InIn is based. InIn provided the people who ran the table and did the networking, and The Life You Can Save covered fees for conference registration, tabling, and other miscellaneous fees.
The UUA conference registration is around $450 per participant, and $800 for a table. Fortunately, as InIn is a member of a UU organization through which we promote Giving Games and other InIn materials, we were able to use a table at a discount, for $200. Miscellaneous fees included parking and food, for around $20 per participant per day. We had 2 people at the conference each day, so for the 5-day conference, that was $200. We also had about $175 in marketing costs to design and print flyers. We registered only one person, as we got one free participant with a table, so the total cost came down to $1025.
The SSA conference registration fee is around $135 per participant, and $150 for a table. As a speaker, I got a free registration, and another free registration accompanied the table. Parking and food cost $140 for the 3-day conference, and marketing costs came out to $150, for a total of $340.
Prepare Well
To prepare for the conferences, we at InIn brainstormed about the appropriate ways to present effective giving at both conferences. We then prepared talking points relevant to each audience, and coordinated with all people who would table at both conferences to ensure they knew how to present effective giving to the two audiences well.
As an example, you can see the GGs packet adapted to the language and interests of the SSA here and UU here. The main modifications are in the “Activity Overview” section, and these changes represent the broad difference in the kind of language we used.
Besides the language, we put a lot of effort into designing attractive marketing materials for our table. We created a large sign, visible from a long distance, with “Free Money” in red. People are attracted both to the color red and to the phrase “Free Money,” and it is highly important to draw attention in the context of a busy conference.
Caption: SGG activity overview for both UU and SSA conferences (courtesy of InIn)
We hired a professional designer to compose an attractive layout for the SGG activity at our table. SGGs involve having people make a decision between two charities. Each vote results in a dollar, sponsored by an outside party (usually The Life You Can Save), going to the charity the participant chooses. It was important to create a nice layout that people could engage with quickly and easily, again due to distractions in the conference setting. We chose GiveDirectly as the effective charity, and the Mid-Ohio Food Bank as a local, less effective charity.
For those who participated in SGGs, we then aimed to get them to sign up for the InIn newsletter and The Life You Can Save newsletter, and to engage them in conversations about effective giving. We also printed shorter versions of the UU and SSA Giving Games packets. These had brief descriptions of the full Giving Games, with links to the longer versions they could host back in their SSA student clubs or UU congregations.
Another thing we did was to schedule meetings in advance with some influencers to discuss effective giving opportunities. We also made sure to schedule meetings spontaneously during the conference with notables who seemed interested in effective giving. For those who expressed an interest but did not have time to meet, we made sure to exchange contact information and follow up afterwards.
Finally, we applied to be speakers at both conferences. We succeeded with the SSA, but not with UU. Still, we decided to attend the UU conference, because the costs were low enough since we did not have to travel and The Life You Can Save judged the potential impact worthwhile.
Conference Outcomes
At the UU conference, we had around 75 people play the SGG, so around 2% of attendees. Of those, about 65% (just under 50 people) signed up for the newsletter. We had 50 packets with GG descriptions printed, and we ran out by the end of the conference. Additionally, about 70% of the people who played there voted for GiveDirectly.
We also had meetings with some notable parties interested in effective giving. Especially promising was a meeting with the Executive Director of the Unitarian Universalist Humanist Association (UUHA), who expressed a strong interest in bringing GGs to her constituents. There are hundreds of UU Humanist groups within congregations around the world. We are currently working on testing a GG at a local UU Humanist group, and we will then write up the results for the UUHA blog. We had some other promising meetings as well, but no one was as interested as the UUHA.
At the SSA conference, we had 15 people play the SGG, so around 7.5% of attendees. Of those, 80% signed up for the newsletter, so about 12 people. The same proportion, 80%, voted for GiveDirectly.
We gave away around 35 GG packets with descriptions, as some people did not want to play the SGG, but were interested in having their clubs host it. Distributing packets was especially helped by the fact that I was a speaker at the SSA, and promoted and handed out packets at my presentation.
The meetings with notable parties proved more promising at the SSA. We met with staff from two national secular organizations, the American Ethical Union and the Center for Inquiry, who expressed an interest in promoting GGs to their members. A number of influencers expressed enthusiasm over the concept of effective giving, and wanted to promote it broadly in the secular/skeptic/humanist movement.
Assessment and Conclusion
We would have been satisfied at both conferences to have at least half of the people who played the SGG vote for GiveDirectly and have half the people sign up. We ended up with 70% voting for GiveDirectly at UU and 80% at SSA, and 65% signing up for the newsletter at UU and 80% at the SSA. So, these conferences strongly exceeded our baseline expectations. We did not have specific expectations for giving away packets or meetings with notables. Yet looking back, we certainly did not expect the level of interest we got for conference participants holding Giving Games back home - we would have printed more packets for the UU had we thought they might run out.
The evidence from GGs shows they are a great method to promote effective giving. Getting influencers from target demographics engaged with GGs not only gets the activists to give more effectively, but also encourages the activists to hold GGs back at their groups.
After all, holding GGs is a win-win for secular/skeptic/humanist groups and UU congregations alike. They get to engage in an activity that embodies their values of using reason and evidence. At the same time, they get to improve the world and build a sense of community without spending a penny.
For those of us promoting effective giving, it presents these ideas to a new audience, and enables the audience to continue engaging if they wish. The newsletter sign-ups are especially indicative of people’s interest, as are the numbers of people who took packets to host GGs back at their groups. We at InIn have already heard from several people who are arranging Giving Games after being exposed to the adapted GG packets, including a UU church that is arranging a GG for all 500 members of the church. Based on these outcomes, we at InIn and The Life You Can Save decided it would be worthwhile to invest even in traveling to distant conferences, given the right conditions - a table, a speaking role, potential influencers, etc.
So, consider promoting effective giving at conferences to audiences not directly related to existing effective altruism communities. Hopefully, the steps I outlined above will help you decide on the best opportunities to do so. I would be glad to chat with you about specifics and share more details; email me at gleb@intentionalinsights.org.
Acknowledgments: For feedback on earlier stages of this draft, my gratitude to Jon Behar, Laura Gamse, Ryan Carey, Malcolm Ocean, Matthijs Maas, Yaacov Tarko, Dony Christie, Jake Krycia, Remmelt Ellen, Alexander Semenychev, Ian Pritchford, Ed Chen, Lune Nekesa, Jo Duyvestyn, and others who wished to remain anonymous.
Open thread, Jul. 25 - Jul. 31, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
The map of agents which may create x-risks
Recently Phil Torres wrote an article raising a new topic in existential risk research: the question of who the possible agents in the creation of a global catastrophe might be. He identifies five main types of agents, and two main reasons why they might create a catastrophe (error and terror).
He discusses the following types of agents:
(1) Superintelligence.
(2) Idiosyncratic actors.
(3) Ecoterrorists.
(4) Religious terrorists.
(5) Rogue states.
Inspired by his work I decided to create a map of all possible agents as well as their possible reasons for creating x-risks. During this work some new ideas appeared.
I think significant additions to the list of agents should be: superpowers, as they are known to have created most global risks in the 20th century; corporations, as they are now on the front line of AGI creation; and pseudo-rational agents, who could create a Doomsday weapon in the future to use for global blackmail (maybe with positive values), or who could risk civilization’s fate for their own benefit (dangerous experiments).
The x-risks prevention community could itself be an agent of risk: if it fails to prevent obvious risks, if it uses smaller catastrophes to prevent larger risks, or if it creates new dangerous ideas about possible risks which could inspire potential terrorists.
The more technology progresses, the more types of agents will have access to dangerous technologies, even including teenagers (see "Why This 14-Year-Old Kid Built a Nuclear Reactor").
In this situation only the number of agents with risky tech will matter, not the exact motivation of each one. But if we are unable to control the tech, we could try to control the potential agents, or at least their average "mood".
The map shows various types of agents, starting from non-agents and ending with the types of agential behavior that could result in catastrophic consequences (error, terror, risk, etc.). It also shows the types of risks that are most probable for each type of agent. I think my explanation in each case should be self-evident.
We can also see that x-risk agents will change as technology progresses. In the beginning there are no agents; later there are superpowers, and then smaller and smaller agents, until there are millions of people with biotech labs at home. In the end there will be only one agent: a SuperAI.
So, lessening the number of agents and increasing their "morality" and intelligence seem to be the most plausible directions for lowering risk. Special organizations or social networks could be created to control the riskiest types of agents. Different agents probably need different types of control. Some ideas for this agent-specific control are listed in the map, but a real control system would have to be much more complex and specific.
The map shows many agents, some of which are real and exist now (but don’t have dangerous capabilities), and some of which are possible only in a moral or a technical sense.
So there are 4 types of agents, and I show them in the map in different colours:
1) Existing and dangerous, i.e. already possessing the technology to destroy humanity (superpowers, arrogant scientists) – Red
2) Existing and willing to end the world, but lacking the needed technologies (ISIS, VHEMT) – Yellow
3) Morally possible, but not existing: we can imagine logically consistent value systems that could result in human extinction, such as Doomsday blackmail – Green
4) Agents which will pose a risk only after supertechnologies appear, like AI hackers or child biohackers – Blue
Many agent types don’t fit this classification, so I left them white in the map.
The pdf of the map is here: http://immortality-roadmap.com/agentrisk11.pdf
(The jpg of the map is below; because the sidebar covers part of it, I have also put it higher up.)

New Philosophical Work on Solomonoff Induction
I don't know to what extent MIRI's current research engages with Solomonoff induction, but some of you may find recent work by Tom Sterkenburg to be of interest. Here's the abstract of his paper Solomonoff Prediction and Occam's Razor:
Algorithmic information theory gives an idealised notion of compressibility that is often presented as an objective measure of simplicity. It is suggested at times that Solomonoff prediction, or algorithmic information theory in a predictive setting, can deliver an argument to justify Occam's razor. This article explicates the relevant argument and, by converting it into a Bayesian framework, reveals why it has no such justificatory force. The supposed simplicity concept is better perceived as a specific inductive assumption, the assumption of effectiveness. It is this assumption that is the characterising element of Solomonoff prediction and wherein its philosophical interest lies.
We have the technology required to build 3D body scanners for consumer prices
Apple's iPhone 7 Plus added another lens to take better pictures. Meanwhile Walabot, which started out wanting to build breast cancer detection technology, has released a $600 device that can look 10 cm into walls. Thermal imaging has also gotten cheaper.
I think it would be possible to build a $1,500 device that combines those technologies and also adds a laser that can shift color. A device like this could bring medicine forward a lot.
A lot of areas besides medicine could likely also profit from a relatively cheap 3D scanner that can look inside objects.
Developing it would require Musk-level capital investment, but I think it would advance medicine a lot if a company both provided the hardware and developed software to do the best possible job of body scanning.
Open thread, Sep. 26 - Oct. 02, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Seven Apocalypses
0: Recoverable Catastrophe
An apocalypse is an event that permanently damages the world. This scale is for scenarios that are much worse than any normal disaster. Even if 100 million people die in a war, the rest of the world can eventually rebuild and keep going.
1: Economic Apocalypse
The human carrying capacity of the planet depends on the world's systems of industry, shipping, agriculture, and organizations. If the planet's economic and infrastructural systems were destroyed, then we would have to rely on more local farming, and we could not support as high a population or standard of living. In addition, rebuilding the world economy could be very difficult if the Earth's mineral and fossil fuel resources are already depleted.
2: Communications Apocalypse
If large regions of the Earth become depopulated, or if sufficiently many humans die in the catastrophe, it's possible that regions and continents could be isolated from one another. In this scenario, globalization is reversed by obstacles to long-distance communication and travel. Telecommunications, the internet, and air travel are no longer common. Humans are reduced to multiple, isolated communities.
3: Knowledge Apocalypse
If the loss of human population and institutions is so extreme that a large portion of human cultural or technological knowledge is lost, it could reverse one of the most reliable trends in modern history. Some innovations and scientific models can take millennia to develop from scratch.
4: Human Apocalypse
Even if the human population were to be violently reduced by 90%, it's easy to imagine the survivors slowly resettling the planet, given the resources and opportunity. But a sufficiently extreme transformation of the Earth could drive the human species completely extinct. To many people, this is the worst possible outcome, and any further developments are irrelevant next to the end of human history.
5: Biosphere Apocalypse
In some scenarios (such as the physical destruction of the Earth), one can imagine the extinction not just of humans, but of all known life. Only astrophysical and geological phenomena would be left in this region of the universe. In this timeline we are unlikely to be succeeded by any familiar life forms.
6: Galactic Apocalypse
A rare few scenarios have the potential to wipe out not just Earth, but also all nearby space. This usually comes up in discussions of hostile artificial superintelligence, or very destructive chain reactions of exotic matter. However, the nature of cosmic inflation and extraterrestrial intelligence is still unknown, so it's possible that some phenomenon will ultimately interfere with the destruction.
7: Universal Apocalypse
This form of destruction is thankfully exotic. People discuss the loss of all of existence as an effect of topics like false vacuum bubbles, simulationist termination, solipsistic or anthropic observer effects, Boltzmann brain fluctuations, time travel, or religious eschatology.
The goal of this scale is to give a little more resolution to a speculative, unfamiliar space, in the same sense that the Kardashev Scale provides a little terminology to talk about the distant topic of interstellar civilizations. It can be important in x-risk conversations to distinguish between disasters and truly worst-case scenarios. Even if some of these scenarios are unlikely or impossible, they are nevertheless discussed, and terminology can be useful to facilitate conversation.
A Weird Trick To Manage Your Identity
I’ve always been uncomfortable being labeled “American.” Though I’m a citizen of the United States, the term feels restrictive and confining. It obliges me to identify with aspects of the United States with which I am not thrilled. I have similar feelings of limitation with respect to other labels I assume. Some of these labels don’t feel completely true to who I truly am, or impose certain perspectives on me that diverge from my own.
These concerns are why it's useful to keep one's identity small, use identity carefully, and be strategic in choosing your identity.
Yet these pieces speak more to System 1 than to System 2. I recently came up with a weird trick that has made me more comfortable identifying with groups or movements that resonate with me while creating a System 1 visceral identity management strategy. The trick is to simply put the word “weird” before any identity category I think about.
I’m not an “American,” but a “weird American.” Once I started thinking about myself as a “weird American,” I was able to think calmly through which aspects of being American I identified with and which I did not, setting the latter aside from my identity. For example, I used the term “weird American” to describe myself when meeting a group of foreigners, and we had great conversations about what I meant and why I used the term. This subtle change lets me satisfy my desire to identify with the label “American,” while allowing me to separate myself from any aspects of the label I don’t support.
Beyond nationality, I’ve started using the term “weird” in front of other identity categories. For example, I'm a professor at Ohio State. I used to become deeply frustrated when students didn’t prepare adequately for their classes with me. No matter how hard I tried, or whatever clever tactics I deployed, some students simply didn’t care. Instead of allowing that situation to keep bothering me, I started to think of myself as a “weird professor” - one who set up an environment that helped students succeed, but didn’t feel upset and frustrated by those who failed to make the most of it.
I’ve been applying the weird trick in my personal life, too. Thinking of myself as a “weird son” makes me feel more at ease when my mother and I don’t see eye-to-eye; thinking of myself as a “weird nice guy,” rather than just a nice guy, has helped me feel confident about my decisions to be firm when the occasion calls for it.
So, why does this weird trick work? It’s rooted in strategies of reframing and distancing, two research-based methods for changing our thought frameworks. Reframing involves changing one’s framework of thinking about a topic in order to create more beneficial modes of thinking. For instance, in reframing myself as a weird nice guy, I have been able to say “no” to requests people make of me, even though my intuitive nice guy tendency tells me I should say “yes.” Distancing refers to a method of emotional management through separating oneself from an emotionally tense situation and observing it from a third-person, external perspective. Thus, if I think of myself as a weird son, I don’t have nearly as many negative emotions during conflicts with my mom. It gives me space for calm and sound decision-making.
Thinking of myself as "weird" also applies to the context of rationality and effective altruism for me. Thinking of myself as a "weird" aspiring rationalist and EA helps me be more calm and at ease when I encounter criticisms of my approach to promoting rational thinking and effective giving. I can distance myself from the criticism better, and see what I can learn from the useful points in the criticism to update and be stronger going forward.
Overall, using the term “weird” before any identity category has freed me from confinements and restrictions associated with socially-imposed identity labels and allowed me to pick and choose which aspects of these labels best serve my own interests and needs. I hope being “weird” can help you manage your identity better as well!
Open thread, Sep. 19 - Sep. 25, 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "