
Quantified Risks of Gay Male Sex

20 pianoforte611 18 August 2014 11:55PM

If you are a gay male then you’ve probably worried at one point about sexually transmitted diseases. Indeed, men who have sex with men have some of the highest prevalences of many of these diseases. And if you’re not a gay male, you’ve probably still thought about STDs at one point. But how much should you worry? There are many organizations and resources that will tell you to wear a condom, but very few will tell you the relative risks of wearing a condom vs not. I’d like to provide a concise summary of the risks associated with gay male sex and the extent to which these risks can be reduced. (See Mark Manson’s guide for a similar resource for heterosexual sex.) I will do so by first giving some information about each disease, including its prevalence among gay men. Most of this data will come from the US, but the US actually has an unusually high prevalence for many diseases; HIV, for instance, is much less common in many parts of Europe. I will end with a case study of HIV, which will include an analysis of the probabilities of transmission broken down by the nature of the sex act and a discussion of risk reduction techniques.

When dealing with risks associated with sex, there are a few relevant parameters. The most common is the prevalence – the proportion of people in the population that have the disease. Since you can only get a disease from someone who has it, the prevalence is arguably the most important statistic. There are two more relevant statistics – the per-act infectivity (the chance of contracting the disease after having sex once) and the per-partner infectivity (the chance of contracting the disease after having sex with one partner for the duration of the relationship). As it turns out, the latter two probabilities are very difficult to calculate, and I only obtained those values for HIV. It is especially difficult to determine per-act risks for specific types of sex acts since many MSM engage in a variety of acts with multiple partners. Nevertheless, estimates do exist and will be explored in detail in the HIV case study section.

HIV

Prevalence: Between 13% and 28%. My guess is about 13%.

The most infamous of the STDs. There is no cure but it can be managed with anti-retroviral therapy. A commonly reported statistic is that 19% of MSM (men who have sex with men) in the US are HIV positive (1). For black MSM, this number was 28% and for white MSM this number was 16%. This is likely an overestimate, however, since the sample used was gay men who frequent bars and clubs. My estimate of 13% comes from the CDC's figure of 590,000 HIV-positive gay men in total (2) and their data suggesting that MSM comprise 2.9% of men in the US (3).

 

Gonorrhea

Prevalence: Between 9% and 15% in the US

This disease affects the throat and the genitals but it is treatable with antibiotics. The CDC estimates 15.5% prevalence (4). However, this is likely an overestimate since the sample used was gay men in health clinics. Another sample (in San Francisco health clinics) had a pharyngeal gonorrhea prevalence of 9% (5).

 

Syphilis

Prevalence: 0.825% in the US

My estimate was calculated in the same manner as my estimate for HIV, using the CDC's data (6). Syphilis is transmittable by oral and anal sex (7) and causes genital sores that may look harmless at first (8). Syphilis is curable with penicillin; however, the presence of sores increases the infectivity of HIV.

 

Herpes (HSV-1 and HSV-2)

Prevalence: HSV-2 - 18.4% (9); HSV-1 - ~75% based on Australian data  (10)

This disease is mostly asymptomatic and can be transmitted through oral or anal sex. Sometimes sores will appear and they will usually go away with time. For the same reason as syphilis, herpes can increase the chance of transmitting HIV. The estimate for HSV-1 is probably too high. Snowball sampling was used and most of the men recruited were heavily involved in organizations for gay men and were sexually active in the past 6 months. Also half of them reported unprotected anal sex in the past six months. The HSV-2 sample came from a random sample of US households (11).

 

Chlamydia

Prevalence: Rectal - 0.5% - 2.3%; Pharyngeal - 3.0% - 10.5% (12)

Like herpes, it is often asymptomatic - perhaps as few as 10% of infected men report symptoms. It is curable with antibiotics.

 

HPV

Prevalence: 47.2% (13)

This disease is incurable (though a vaccine exists for men and women) but usually asymptomatic. It is capable of causing cancers of the penis, throat and anus. Oddly, there are no common tests for HPV, in part because there are many strains (over 100), most of which are relatively harmless. Sometimes it goes away on its own (14). The prevalence rate was oddly difficult to find; the number I cited came from a sample of men from Brazil, Mexico and the US.

 

Case Study of HIV transmission; risks and strategies for reducing risk

 IMPORTANT: None of the following figures should be generalized to other diseases. Many of these numbers are not even the same order of magnitude as the numbers for other diseases. For example, HIV is especially difficult to transmit via oral sex, but Herpes can very easily be transmitted.

Unprotected oral sex per-act risk (with a positive partner or partner of unknown serostatus):

Non-zero but very small. Best guess 0.03% without a condom (15)

Unprotected anal sex per-act risk (with positive partner):

Receptive: 0.82% - 1.4% (16) (17)

Insertive, circumcised: 0.11% (18)

Insertive, uncircumcised: 0.62% (18)

Protected anal sex per-act risk (with positive partner):

Estimates range from two times lower to twenty times lower (16) (19), and the risk is highly dependent on the slippage and breakage rate.


Contracting HIV from oral sex is very rare. In one study, 67 men reported performing oral sex on at least one HIV positive partner and none were infected (20). However, transmission is possible (15). Because instances of oral transmission of HIV are so rare, the risk is hard to calculate and should be taken with a grain of salt. The number cited was obtained from a group of individuals who were either HIV positive or at high risk for HIV, so the per-act risk with a positive partner is probably somewhat higher.

Note that different HIV positive men have different levels of infectivity, hence the wide range of values for the per-act probability of transmission. Some men with high viral loads (the amount of HIV in the blood) may have an infectivity of greater than 10% per unprotected anal sex act (17).
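As a rough illustration of how a per-act figure relates to repeated exposure, here is a minimal Python sketch. It assumes a constant, independent risk per act, which is a simplification - infectivity varies widely between partners (as noted above) and repeated acts with the same partner are not independent - so both the chosen per-act value and the output are illustrative, not per-partner estimates from the cited studies.

```python
def cumulative_risk(per_act_risk, n_acts):
    """Probability of at least one transmission over n_acts,
    assuming each act carries the same independent per-act risk."""
    return 1 - (1 - per_act_risk) ** n_acts

# Illustrative per-act risk of 1%, roughly in the 0.82%-1.4% range cited
# above for receptive unprotected anal sex with a positive partner.
for n in (1, 10, 100):
    print(n, round(cumulative_risk(0.01, n), 3))
# 1 -> 0.01, 10 -> 0.096, 100 -> 0.634
```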

 

Risk reducing strategies

 Choosing sex acts that have a lower transmission rate (oral sex, protected insertive anal sex, non-insertive) is one way to reduce risk. Monogamy, testing, antiretroviral therapy, PEP and PrEP are five other ways.

 

Testing Your partner/ Monogamy

 If your partner tests negative then they are very unlikely to have HIV. There is a 0.047% chance of being HIV positive if they tested negative using a blood test and a 0.29% chance of being HIV positive if they tested negative using an oral test. If they did further tests then the chance is even lower. (See the section after the next paragraph for how these numbers were calculated).

 So if your partner tests negative, the real danger is not the test giving an incorrect result. The danger is that your partner was exposed to HIV before the test, but his body had not started to make antibodies yet. Since this can take weeks or months, it is possible for your partner who tested negative to still have HIV even if you are both completely monogamous.

 ____

For tests, the sensitivity - the probability that an HIV positive person will test positive - is 99.68% for blood tests (21) and 98.03% for oral tests. The specificity - the probability that an HIV negative person will test negative - is 99.74% for oral tests and 99.91% for blood tests. Hence the probability that a person who tested negative will actually be positive is:

 P(Positive | tested negative) = P(Positive)*(1-sensitivity)/(P(Negative)*specificity + P(Positive)*(1-sensitivity)) = 0.047% for blood test, 0.29% for oral test

where P(Positive) = prevalence of HIV, which I estimated above to be 13%.
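For concreteness, here is a minimal Python sketch of the Bayes calculation above, using the prevalence, sensitivity and specificity figures cited in this post (the OraQuick numbers come from the home testing section below):

```python
def p_positive_given_negative_test(prevalence, sensitivity, specificity):
    """P(infected | negative test result), via Bayes' rule.

    sensitivity = P(test positive | infected)
    specificity = P(test negative | not infected)
    """
    false_negative = prevalence * (1 - sensitivity)   # infected, tests negative
    true_negative = (1 - prevalence) * specificity    # not infected, tests negative
    return false_negative / (false_negative + true_negative)

prevalence = 0.13  # estimated HIV prevalence among MSM, from the HIV section above

print(p_positive_given_negative_test(prevalence, 0.9968, 0.9991))  # blood test, ~0.00047 (0.047%)
print(p_positive_given_negative_test(prevalence, 0.9803, 0.9974))  # oral test,  ~0.0029  (0.29%)
print(p_positive_given_negative_test(prevalence, 0.9364, 0.9987))  # OraQuick,   ~0.0094  (0.94%)
```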

 However, according to a writer for About.com (22) - a doctor who works with HIV - there are often multiple tests which drive the sensitivity up to 99.997%.

 

Home Testing

Oraquick is an HIV test that you can purchase online and do yourself at home. It costs $39.99 for one kit. The sensitivity is 93.64% and the specificity is 99.87% (23). The probability that someone who tested negative will actually be HIV positive is 0.94%, assuming a 13% prevalence for HIV. The same danger mentioned above applies - if the infection occurred recently, the test would not detect it.

 

 Anti-Retroviral therapy

Highly active anti-retroviral therapy (HAART), when successful, can reduce the viral load – the amount of HIV in the blood – to low or undetectable levels. Baggaley et al. (17) reports that in heterosexual couples, there have been some models relating viral load to infectivity. She applies these models to MSM and reports that the per-act risk for unprotected anal sex with a positive partner should be 0.061%. However, she notes that different models produce very different results, so this number should be taken with a grain of salt.

 

 Post-Exposure Prophylaxis (PEP)

A last resort, if you think you were exposed to HIV, is to undergo post-exposure prophylaxis within 72 hours. Antiretroviral drugs are taken for about a month in the hope of preventing the HIV from infecting any cells. In one case-control study, some health care workers who were exposed to HIV were given PEP and some were not (this was not under the control of the experimenters). Workers who contracted HIV were less likely to have been given PEP, with an odds ratio of 0.19 (24). I don’t know whether PEP is equally effective at mitigating risk from other sources of exposure.

 

 Pre-Exposure Prophylaxis (PrEP)

This is a relatively new risk reduction strategy. Instead of taking anti-retroviral drugs after exposure, you take anti-retroviral drugs every day in order to prevent HIV infection. I could not find a per-act risk, but in a randomized controlled trial, MSM who took PrEP were less likely to become infected with HIV than men who did not (relative reduction of 41%). The average number of sex partners was 18. For men who were more consistent and had a 90% adherence rate, the relative reduction was better: 73% (25) (26).

1: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5937a2.htm?s_cid=mm5937a2_w

2: http://www.cdc.gov/hiv/statistics/basics/ataglance.html

3: http://www.cdc.gov/nchs/data/ad/ad362.pdf

4: http://www.cdc.gov/std/stats10/msm.htm

5: http://cid.oxfordjournals.org/content/41/1/67.short

6: http://www.cdc.gov/std/syphilis/STDFact-MSM-Syphilis.htm

7: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5341a2.htm

8: http://www.cdc.gov/std/syphilis/stdfact-syphilis.htm

9: http://journals.lww.com/stdjournal/Abstract/2010/06000/Men_Who_Have_Sex_With_Men_in_the_United_States_.13.aspx

10: http://jid.oxfordjournals.org/content/194/5/561.full

11: http://www.nber.org/nhanes/nhanes-III/docs/nchs/manuals/planop.pdf

12: http://www.cdc.gov/std/chlamydia/STDFact-Chlamydia-detailed.htm

13: http://jid.oxfordjournals.org/content/203/1/49.short

14: http://www.cdc.gov/std/hpv/stdfact-hpv-and-men.htm

15: http://journals.lww.com/aidsonline/pages/articleviewer.aspx?year=1998&issue=16000&article=00004&type=fulltext#P80

16: http://aje.oxfordjournals.org/content/150/3/306.short

17: http://ije.oxfordjournals.org/content/early/2010/04/20/ije.dyq057.full

18: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2852627/

19: http://journals.lww.com/stdjournal/Fulltext/2002/01000/Reducing_the_Risk_of_Sexual_HIV_Transmission_.7.aspx

20: http://journals.lww.com/aidsonline/Fulltext/2002/11220/Risk_of_HIV_infection_attributable_to_oral_sex.22.aspx

21: http://www.thelancet.com/journals/laninf/article/PIIS1473-3099%2811%2970368-1/abstract

22: http://aids.about.com/od/hivpreventionquestions/f/How-Often-Do-False-Positive-And-False-Negative-Hiv-Test-Results-Occur.htm

23: http://www.ncbi.nlm.nih.gov/pubmed/18824617

24: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD002835.pub3/abstract

25: http://www.nejm.org/doi/full/10.1056/Nejmoa1011205#t=articleResults

26: http://www.cmaj.ca/content/184/10/1153.short

Steelmanning MIRI critics

3 fowlertm 19 August 2014 03:14AM

I'm giving a talk to the Boulder Future Salon in Boulder, Colorado in a few weeks on the Intelligence Explosion hypothesis. I've given it once before in Korea but I think the crowd I'm addressing will be more savvy than the last one (many of them have met Eliezer personally). It could end up being important, so I was wondering if anyone considers themselves especially capable of playing Devil's Advocate so I could shape up a bit before my talk? I'd like there to be no real surprises. 

I'd be up for just messaging back and forth or skyping, whatever is convenient.

The metaphor/myth of general intelligence

8 Stuart_Armstrong 18 August 2014 04:04PM

Thanks to Kaj for making me think along these lines.

It's agreed on this list that general intelligences - those that are capable of displaying high cognitive performance across a whole range of domains - are those that we need to be worrying about. This is rational: the most worrying AIs are those with truly general intelligences, and so those should be the focus of our worries and work.

But I'm wondering if we're overestimating the probability of general intelligences, and whether we shouldn't adjust against this.

First of all, the concept of general intelligence is a simple one - perhaps too simple. It's an intelligence that is generally "good" at everything, so we can collapse its various abilities across many domains into "it's intelligent", and leave it at that. It's significant to note that since the very beginning of the field, AI people have been thinking in terms of general intelligences.

And their expectations have been constantly frustrated. We've made great progress in narrow areas, very little in general intelligences. Chess was solved without "understanding"; Jeopardy! was defeated without general intelligence; cars can navigate our cluttered roads while being able to do little else. If we started with a prior in 1956 about the feasibility of general intelligence, then we should be adjusting that prior downwards.

But what do I mean by "feasibility of general intelligence"? There are several things this could mean, not least the ease with which such an intelligence could be constructed. But I'd prefer to look at another assumption: the idea that a general intelligence will really be formidable in multiple domains, and that one of the best ways of accomplishing a goal in a particular domain is to construct a general intelligence and let it specialise.

First of all, humans are very far from being general intelligences. We can solve a lot of problems when the problems are presented in particular, easy to understand formats that allow good human-style learning. But if we picked a random complicated Turing machine from the space of such machines, we'd probably be pretty hopeless at predicting its behaviour. We would probably score very low on the scale of intelligence used to construct the AIXI. The general intelligence, "g", is a misnomer - it designates the fact that the various human intelligences are correlated, not that humans are generally intelligent across all domains.

Humans with computers, and humans in societies and organisations, are certainly closer to general intelligences than individual humans. But institutions have their own blind spots and weaknesses, as does the human-computer combination. Now, there are various reasons advanced for why this is the case - game theory and incentives for institutions, human-computer interfaces and misunderstandings for the second example. But what if these reasons, and other ones we can come up with, were mere symptoms of a more universal problem: that generalising intelligence is actually very hard?

There are no-free-lunch theorems showing that no computable intelligence can perform well in all environments. As far as they go, these theorems are uninteresting, as we don't need intelligences that perform well in all environments, just in almost all/most. But what if a more general restrictive theorem were true? What if it were very hard to produce an intelligence that was of high performance across many domains? What if the performance of a generalist were pitifully inadequate compared with that of a specialist? What if every computable version of AIXI were actually doomed to poor performance?

There are a few strong counters to this - for instance, you could construct good generalists by networking together specialists (this is my standard mental image/argument for AI risk), you could construct an entity that was very good at programming specific sub-programs, or you could approximate AIXI. But we are making some assumptions here - namely, that we can network together very different intelligences (the human-computer interface hints at some of the problems), and that a general programming ability can even exist in the first place (for a start, it might require a general understanding of problems that is akin to general intelligence in the first place). And we haven't had great success building effective AIXI approximations so far (which should reduce, possibly slightly, our belief that effective general intelligences are possible).

Now, I remain convinced that general intelligence is possible, and that it's worthy of the most worry. But I think it's worth inspecting the concept more closely, and at least be open to the possibility that general intelligence might be a lot harder than we imagine.

EDIT: Model/example of what a lack of general intelligence could look like.

Imagine there are three types of intelligence - social, spatial and scientific, all on a 0-100 scale. For any combination of the three intelligences - eg (0,42,98) - there is an effort level E (how hard that intelligence is to build, in terms of time, resources, man-hours, etc...) and a power level P (how powerful that intelligence is compared to others, on a single convenient scale of comparison).

Wei Dai's evolutionary comment implies that any being of very low intelligence on one of the scales would be overpowered by a being of more general intelligence. So let's set power as simply the product of all three intelligences.

This seems to imply that general intelligences are more powerful, as it basically bakes in diminishing returns - but we haven't included effort yet. Imagine that the following three intelligences require equal effort: (10,10,10), (20,20,5), (100,5,5). Then the specialised intelligence is definitely the one you need to build.

But is it plausible that those could be of equal difficulty? It could be, if we assume that high social intelligence isn't so difficult, but is specialised - ie you can increase the spatial intelligence of a social intelligence, but that messes up the delicate balance in its social brain. Or maybe recursive self-improvement happens more easily in narrow domains. Further assume that intelligences of different types cannot be easily networked together (eg combining (100,5,5) and (5,100,5) in the same brain gives an overall performance of (21,21,5)). This doesn't seem impossible.
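A minimal Python sketch of this toy model, just to make the numbers above concrete (power as the product of the three scores; the (21,21,5) networking figure is the post's own illustrative number and is not derived here):

```python
def power(social, spatial, scientific):
    """Toy 'power' measure from the example above: the product of the three scores."""
    return social * spatial * scientific

# The three intelligences assumed to require equal effort
candidates = {
    "generalist (10,10,10)": (10, 10, 10),
    "mixed      (20,20,5)":  (20, 20, 5),
    "specialist (100,5,5)":  (100, 5, 5),
}
for name, scores in candidates.items():
    print(name, "power =", power(*scores))
# generalist 1000, mixed 2000, specialist 2500 -- at equal effort, the specialist wins
```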

So let's caveat the proposition above: the most effective and dangerous type of AI might be one with a bare minimum amount of general intelligence, but an overwhelming advantage in one type of narrow intelligence.

Open thread, 18-24 August 2014

3 David_Gerard 18 August 2014 04:55PM

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

A thought on AI unemployment and its consequences

5 Stuart_Armstrong 18 August 2014 12:10PM

I haven't given much thought to the concept of automation and computer-induced unemployment. Others at the FHI have been looking into it in more detail - see Carl Frey's "The Future of Employment", which estimated the degree of automatability of 70 chosen professions and extended the results using O∗NET, an online service developed for the US Department of Labor which gives the key features of an occupation as a standardised and measurable set of variables.

The reason that I haven't been looking at it too much is that AI unemployment has considerably less impact than AI superintelligence, and thus is a less important use of time. However, if automation does cause mass unemployment, then advocating for AI safety will happen in a very different context to today's. Much will depend on how that mass unemployment problem is dealt with, what lessons are learnt, and the views of whoever is the most powerful in society. Just off the top of my head, I could think of four scenarios for whether risk goes up or down, depending on whether the unemployment problem was satisfactorily "solved" or not:

AI risk \ Unemployment problem:

Risk reduced, problem solved: With good practice in dealing with AI problems, people and organisations are willing and able to address the big issues.

Risk reduced, problem unsolved: The world is very conscious of the misery that unrestricted AI research can cause, and very wary of future disruptions. Those at the top want to hang on to their gains, and they are the ones with the most control over AIs and automation research.

Risk increased, problem solved: Having dealt with the easier automation problems in a particular way (eg taxation), people underestimate the risk and expect the same solutions to work.

Risk increased, problem unsolved: Society is locked into a bitter conflict between those benefiting from automation and those losing out, and superintelligence is seen through the same prism. Those who profited from automation are the most powerful, and decide to push ahead.

But of course the situation is far more complicated, with many different possible permutations, and no guarantee that the same approach will be used across the planet. And let the division into four boxes not fool us into thinking that any is of comparable probability to the others - more research is (really) needed.

A "Holy Grail" Humor Theory in One Page.

3 EGarrett 18 August 2014 10:26AM

Alrighty, with the mass downvoters gone, I can make the leap to posting some ideas. Here's the Humor Theory I've been developing over the last few months and have discussed at Meet-Ups, and have written two SSRN papers about, in one page. I've taken the document I posted on the Facebook group and retyped and formatted it here.

I strongly suspect that it's the correct solution to this unsolved problem. There was even a new neurology study released in the last few days that confirms one of the predictions I drew from this theory about the evolution of human intelligence.

Note that I tried to fit as much info as I could on the page, but obviously it's not enough space to cover everything, and the other papers are devoted to that. Any constructive questions, discussion etc are welcome.



 

A "Holy Grail" Humor Theory in One Page.


Plato, Aristotle, Kant, Freud, and hundreds of other philosophers have tried to understand humor. No one has ever found a single idea that explains it in all its forms, or shows what's sufficient to create it. Thus, it's been called a "Holy Grail" of social science. Consider this...


In small groups without language, where we evolved, social orders were needed for efficiency. But fighting for leadership would hurt them. So a peaceful, nonverbal method was extremely beneficial. Thus, the "gasp" we make when seeing someone fall evolved into a rapid-fire version at seeing certain failures, which allowed us to signal others to see what happened, and know who not to follow. The reaction, naturally, would feel good and make us smile, to lower our aggression and show no threat. This reaction is called laughter. The instinct that controls it is called humor. It's triggered by the brain weighing things it observes in the proportion:


Humor = ((Quality_expected - Quality_displayed) × Noticeability × Validity) / Anxiety

 

Or H = ((Q_e - Q_d) × N × V) / A. When the result of this ratio is greater than 0, we find the thing funny and will laugh - in the smallest amounts with slight smiles, small feelings of pleasure or small diaphragm spasms. The numerator terms simply state that something has to be significantly lower in quality than what we assumed, and we must notice it and feel it's real; the denominator states that anxiety lowers the reaction. This is because laughter is a noisy reflex that threatens someone else's status, so if there is a chance of violence from the person, a danger in threatening a loved one's status, or a predator or other threat from making noise, the reflex will be mitigated. The common feeling amongst those situations, anxiety, has come to cause this.
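A minimal Python sketch of the ratio as stated, purely to make the terms concrete; the input scales and example values are my own illustrative assumptions, not taken from the papers:

```python
def humor(q_expected, q_displayed, noticeability, validity, anxiety):
    """H = ((Q_e - Q_d) * N * V) / A, as in the formula above.
    A result greater than 0 is predicted to produce laughter; larger values, a stronger reaction."""
    return ((q_expected - q_displayed) * noticeability * validity) / anxiety

# A noticed, believable quality gap with low anxiety -> a clearly positive (funny) result
print(humor(q_expected=8, q_displayed=2, noticeability=1.0, validity=1.0, anxiety=1.0))   # 6.0
# The same gap under high anxiety ("too soon") -> a much weaker reaction
print(humor(q_expected=8, q_displayed=2, noticeability=1.0, validity=1.0, anxiety=10.0))  # 0.6
```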

This may appear to be an ad hoc hypothesis, but unlike those, this can clearly unite and explain everything we've observed about humor, including our cultural sayings and the scientific observations of the previous incomplete theories. Some noticed that it involves surprise, some noticed that it involves things being incorrect, all noticed the pleasure without seeing the reason. This covers all of it, naturally, and with a core concept simple enough to explain to a child. Our sayings, like "it's too soon" for a joke after a tragedy, can all be covered as well ("too soon" indicates that we still have anxiety associated with the event).

The previous confusion about humor came from a few things. For one, there are at least 4 types of laughter: At ourselves, at others we know, at others we don't know (who have an average expectation), and directly at the person with whom we're speaking. We often laugh for one reason instead of the other, like "bad jokes" making us laugh at the teller. In addition, besides physical failure, like slipping, we also have a basic laugh instinct for mental failure, through misplacement. We sense attempts to order things that have gone wrong. Puns and similar references trigger this. Furthermore, we laugh loudest when we notice multiple errors (quality-gaps) at once, like a person dressed foolishly (such as a court jester), exposing errors by others.

We call this the "Status Loss Theory," and we've written two papers on it. The first is 6 pages, offers a chart of old theories and explains this more, with 7 examples. The second is 27 pages and goes through 40 more examples, applying this concept to sayings, comedians, shows, memes, and other comedy types, and even drawing predictions from the theory that have been verified by very recent neurology studies, to hopefully exhaustively demonstrate the idea's explanatory power. If it's not complete, it should still make enough progress to greatly advance humor study. If it is, it should redefine the field. Thanks for your time.

[meta] Future moderation and investigation of downvote abuse cases, or, I don't want to deal with this stuff

38 Kaj_Sotala 17 August 2014 02:40PM

Since the episode with Eugine_Nier, I have received three private messages from different people asking me to investigate various cases of suspected mass downvoting. And to be quite honest, I don't want to deal with this. Eugine's case was relatively clear-cut, since he had engaged in systematic downvoting of a massive scale, but the new situations are a lot fuzzier and I'm not sure of what exactly the rules should be (what counts as a permitted use of the downvote system and what doesn't?).

At least one person has also privately contacted me and offered to carry out moderator duties if I don't want them, but even if I told them yes (on what basis? why them and not someone else?), I don't know what kind of policy I should tell them to enforce. I only happened to be appointed a moderator because I was in the list of top 10 posters at a particular time, and I don't feel like I should have any particular authority to make the rules. Nor do I feel like I have any good idea of what the rules should be, or who would be the right person to enforce them.

In any case, I don't want to be doing this job, nor do I particularly feel like being responsible for figuring out who should, or how, or what the heck. I've already started visiting LW less often because I dread having new investigation requests to deal with. So if you folks could be so kind as to figure it out without my involvement? If there's a clear consensus that someone in particular should deal with this, I can give them mod powers, or something.

Meetup : Israel Less Wrong Meetup: Communities and the rationalist community

1 SoftFlare 18 August 2014 08:16AM

Discussion article for the meetup : Israel Less Wrong Meetup: Communities and the rationalist community

WHEN: 21 August 2014 07:00:00PM (+0300)

WHERE: 98 Yigal Alon St., 29th floor, Tel Aviv

We're going to have a meetup on Thursday, August 21st at Google Israel's offices, Electra Tower, 98 Yigal Alon st., Tel Aviv.

This time will be a discussion of Communities, Community Building and the Rationalist Community. We will have an overview of interesting things other meetups do around the world and we will discuss outreach, attracting more members and growing our own community. We'll also talk about fun things we want to do in future meetups.

This meetup will have a lecture-y section, and then we will have a discussion on how to make our community better.

We'll start the meetup at 19:00, and we'll go on for as long as we like. (We had great success with the earlier hour last meetup, so we're continuing the trend.)

Please come on time, as we will begin the discussions and talks close to when we start. But if you can only come later, that's totally ok!

We'll meet on the 29th floor of the building (note: not where Google Campus is). If you arrive and can't find your way around, call Anatoly, who is graciously hosting us, at 054-245-1060.

The Israeli Less Wrong meetup happens once every two weeks. This is to allow people who can't make it to a meetup not to have to wait a whole month to meet again, and because we'd like to have both subject-based and social meetups without having to wait a month in between.

If you have any questions, feel free to email me at hochbergg@gmail.com or call me at 054-533-0678.


Group Rationality Diary, August 16-31

1 therufs 18 August 2014 02:33AM

This is the public group instrumental rationality diary for August 16-31. 

It's a place to record and chat about it if you have done, or are actively doing, things like: 

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.

Previous diary: August 1-15

Rationality diaries archive

Three methods of attaining change

6 Stefan_Schubert 16 August 2014 03:38PM

Say that you want to change some social or political institution: the educational system, the monetary system, research on AGI safety, or what not. When trying to reach this goal, you may use one of the following broad strategies (or some combination of them):

1) You may directly try to lobby (i.e. influence) politicians to implement this change, or try to influence voters to vote for parties that promise to implement these changes. 

2) You may try to build an alternative system and hope that it eventually becomes so popular that it replaces the existing system.

3) You may try to develop tools that a) appeal to users of existing systems and b) are bound to change those existing systems through their widespread use.

Let me give some examples of what I mean. Trying to persuade politicians that we should replace conventional currencies with a private currency, or, for that matter, starting a pro-Bitcoin party, falls under 1), whereas starting a private currency and hoping that it spreads falls under 2). (This post was inspired by a great comment by Gunnar Zarncke on precisely this topic. I take it that he was there talking of strategy 2.) Similarly, trying to lobby politicians to reform academia falls under 1), whereas starting new research institutions which use new and hopefully more effective methods falls under 2). I take it that this is what, e.g., Leverage Research is trying to do, in part. Similarly, libertarians who vote for Ron Paul are taking the first course, while at least one possible motivation for the Seasteading Institute is to construct an alternative system that proves to be more efficient than existing governments.

Efficient Voting Advice Applications (VAA's), which advise you how to vote on the basis of your views on different policy matters, can be an example of 3) (they are discussed here). Suppose that voters started to use them on a grand scale. This could potentially force politicians to adhere very closely to the views of the voters on each particular issue, since politicians who failed to do this would stand little chance of winning. This may or may not be a good thing, but the point is that it would be a change caused not by lobbying politicians or by building an alternative system, but simply by constructing a tool whose widespread use could change the existing system.

Another similar tool is reputation or user review systems. Suppose that you're dissatisfied with the general standards of some institution: say university education, medical care, or what not. You may address this by lobbying politicians to implement new regulations intended to ensure quality (1), or by starting your own, superior, universities or hospitals (2), hoping that others will follow. Another method, however, is to create a reliable reputation/review system which, if it became widely used, would guide students and patients to the best universities and hospitals, thereby incentivizing institutions to improve.

Now of course, when you're trying to get people to use such review systems, you are, in effect, building an evaluation system that competes with existing systems (e.g. the Guardian university ranking), so on one level you are using the second strategy. Your ultimate goal is, however, to create better universities, to which a better evaluation system is just a means (a tool). Hence you're following the third strategy here, in my terms.

Strategy 1) is of course a "statist" one, since what you're doing is trying to get the government to change the institution in question for you. Strategies 2) and 3) are, in contrast, both "non-statist", since when you use them you're not directly trying to implement the change through the political system. Hence libertarians and other anti-statists should prefer them.

My hunch is that when people are trying to change things, many of them unthinkingly go for 1), even regarding issues where it is unlikely that they will succeed that way. (For instance, it seems to me that advocates of direct democracy who try to persuade voters to vote for direct democratic parties are unlikely to succeed, but that widespread use of VAA's might get us considerably closer to their ideal, and that they therefore should opt for the third strategy.) A plausible explanation of this is availability bias: our tendency to focus on what we most often see around us. Attempts to change social institutions through politics get a lot of attention, which makes people think of this strategy first. Even though this strategy is often efficient, I'd guess that it is, for this reason, generally overused and that people sometimes should instead go for 2) or 3). (Possibly, Europeans have an even stronger bias in favour of this strategy than Americans.)

I also suspect, though, that people go for 2) a bit too often relative to 3). I think that people find it appealing, for its own sake, to create an entirely alternative structure. If you're a perfectionist, it might be satisfying to build what you consider "the perfect institution", even if it is very small and has little impact on society. Also, sometimes small groups of devotees flock to these alternatives, and a strong group identity is therefore created. Moreover, I think that availability bias may play a role here also. Even though this sort of strategy gets less attention than lobbying, most people know what it is. It is quite clear what it means to do something like this, and being part of a project like this therefore gives you a clear identity. For these reasons, I think that we might sometimes fool ourselves into believing that these alternative structures are more likely to be successful than they actually are.

Conversely, people might be biased against the third strategy because it's less obvious. Also, it has perhaps something vaguely manipulative about it, which might bias idealistic people against it. What you're typically trying to do is to get people to use a tool (say VAA's) a side-effect of which is the change you wish to attain (in this case, correspondence between voters' views and actual policies). I don't think that this kind of manipulation is necessarily vicious (though it would need to be discussed on a case-by-case basis), but the point is that people tend to think that it is. Also, even those who don't think that it is manipulative in an unethical sense may still think that it is somehow "unheroic". Starting your own environmental party or creating your own artificial libertarian island clearly has something heroic about it, but developing efficient VAA's, which as a side-effect change the political landscape, does not.

I'd thus argue that people should start looking more closely at the third strategy. A group that does use a strategy similar to this is, of course, for-profit companies. They try to analyze what products would appeal to people, and in so doing carefully consider how existing institutions shape people's preferences. For instance, companies like Uber, AirBnB and LinkedIn have been successful because they realized that, given the structure of the taxi, hotel and recruitment businesses, their products would be appealing.

Of course, these companies' primary goal, profit, is very different from the political goals I'm talking about here. At the same time, I think it is useful to compare the two cases. I think that, generally, when we're trying to attain political change, we're not "actually trying" (in CFAR's terminology) as hard as we do when we're trying to maximize profit. It is very easy to fall into a mode where you're focusing on making symbolic gestures (which express your identity) rather than on trying to change things in politics. (This is, in effect, what many traditional charities are doing, if the EA movement is right.)

Instead, we should think as hard as profit-maximizing companies do about what new tools are likely to catch on. Any kind of tool could in principle be used, but the ones that seem most obvious are various kinds of social media and other internet-based tools (such as those mentioned in this post). Technical progress gives us enormous opportunities to construct new tools that could re-shape people's behaviour in a way that would impact existing social and political institutions on a large scale.

Developing such tools is not easy. Even very successful companies again and again fail to predict which new products will appeal to people. Not least, you need a profound understanding of human psychology in order to succeed. That said, political organizations have certain advantages vis-à-vis for-profit companies. More often than not, they can develop ideas publicly, whereas for-profit companies often have to keep them secret until their product is launched. This facilitates wisdom-of-the-crowd reasoning, where many different kinds of people come up with solutions together. Such methods can, in my opinion, be very powerful.

 

Any input regarding, e.g., the taxonomy of methods, my speculations about biases, and, in particular, examples of institution-changing tools is welcome. I'm also interested in comments on efficient methods for coming up with useful tools (e.g. tests of them). Finally, if anything's unclear I'd be happy to provide clarifications (it's a very complex topic).
