Misdiagnosed Asperger's syndrome is ruining my life.
So I've been rejected for conscription in the IDF because the psychiatrist thinks the Asperger's diagnosis I received as a child means that there is something wrong with me. Never mind that I've been examined very recently and been recommended for enlistment, he thinks that even though I probably don't have Asperger's, there must be something wrong with me because in the past I've had trouble socially. Of course I have no such problems now, but it's not as if he's going to risk his job in the face of anything less than perfection.
(This, btw, is what I meant when I said there was no such thing as a competent mental health professional -- the entire system works against evidence-based methods.)
There has to be something wrong with this, some way that I can appeal. I have no idea how the Israeli legal process works, and I'm not sure whether I could just write a letter to someone or whether I'd need a lawyer. I can definitely prove that there is nothing psychologically wrong with me. I just have no idea where to turn, no idea how to do anything, and no allies whatsoever. I feel like my life is collapsing, and I have very good personal reasons for wanting to join the army. It's not just something I felt like doing.
This community obviously has better things to do than this sort of thing. But I feel like I'm going to explode if I can't talk to anyone, or get some idea of what I can do. I feel almost as if I'm becoming mentally ill.
You have a set amount of "weirdness points". Spend them wisely.
I've heard of the concept of "weirdness points" many times before, but after a bit of searching I can't find a definitive post describing the concept, so I've decided to make one. As a disclaimer, I don't think the evidence backing this post is all that strong and I am skeptical, but I do think it's strong enough to be worth considering, and I'm probably going to make some minor life changes based on it.
-
Chances are that if you're reading this post, you're probably a bit weird in some way.
No offense, of course. In fact, I actually mean it as a compliment. Weirdness is incredibly important. If people weren't willing to deviate from society and hold weird beliefs, we wouldn't have had the important social movements that ended slavery and pushed back against racism, that created democracy, that expanded social roles for women, and that made the world a better place in numerous other ways.
Many of the things we now take for granted as part of what makes our society great were once... weird.
Joseph Overton theorized that policy develops through six stages: unthinkable, then radical, then acceptable, then sensible, then popular, then actual policy. We can see this happen with many policies -- currently same-sex marriage is making its way from popular to actual policy, but not too long ago it was merely acceptable, and not too long before that it was pretty radical.
Some good ideas are currently in the radical range. Effective altruism itself is a collection of beliefs that typical people would consider pretty radical. Many people think donating 3% of their income is a lot, let alone the 10% that Giving What We Can asks for, or the 50%+ that some people in the community give.
And that's not all. Others suggest that everyone become vegetarian, advocate for open borders and/or universal basic income, call for the abolition of gendered language, want more resources put into mitigating existential risk, focus on research into Friendly AI, promote cryonics and curing death, etc.
While many of these ideas might make the world a better place if made into policy, all of these ideas are pretty weird.
Weirdness, of course, is a drawback. People take weird opinions less seriously.
The absurdity heuristic is a real bias that people -- even you -- have. If an idea sounds weird to you, you're less likely to try to believe it, even if there's overwhelming evidence. And social proof matters -- if fewer people believe something, others will be less likely to believe it too. Lastly, don't forget the halo effect -- if one part of you seems weird, the rest of you will seem weird too!
(Update: apparently this concept is, itself, already known to social psychology as idiosyncrasy credits. Thanks, Mr. Commenter!)
...But we can use this knowledge to our advantage. The halo effect can work in reverse -- if we're normal in many ways, our weird beliefs will seem more normal too. If we have a notion of weirdness as a kind of currency that we have a limited supply of, we can spend it wisely, without looking like a crank.
All of this leads to the following actionable principles:
Recognize you only have a few "weirdness points" to spend. Trying to convince all your friends to donate 50% of their income to MIRI, become vegan, get a cryonics plan, and demand open borders will be met with a lot of resistance. But -- I hypothesize -- if you pick one of these ideas and push it, you'll have a lot more success.
Spend your weirdness points effectively. Perhaps it's really important that people advocate for open borders. But, perhaps, getting people to donate to developing world health would overall do more good. In that case, I'd focus on moving donations to the developing world and leave open borders alone, even though it is really important. You should triage your weirdness effectively the same way you would triage your donations.
Clean up and look good. Lookism is a problem in society, and I wish people could look "weird" and still be socially acceptable. But if you're a guy wearing a dress in public, or a punk-rocker vegan advocate, recognize that you're spending your weirdness points fighting lookism, which leaves fewer weirdness points to spend promoting veganism or something else.
Advocate for more "normal" policies that are almost as good. Of course, allocating your "weirdness points" on a few issues doesn't mean you have to stop advocating for other important issues -- just consider being less weird about it. Perhaps universal basic income truly would be a very effective policy to help the poor in the United States. But reforming the earned income tax credit and relaxing zoning laws would also both do a lot to help the poor in the US, and such suggestions aren't weird.
Use the foot-in-the-door technique and the door-in-the-face technique. The foot-in-the-door technique involves starting with a small ask and gradually building it up, such as suggesting people donate a little bit effectively, and then gradually getting them to take the Giving What We Can Pledge. The door-in-the-face technique involves making a big ask (e.g., join Giving What We Can) and then retreating to a smaller ask, like the Life You Can Save pledge or Try Out Giving.
Reconsider effective altruism's clustering of beliefs. Right now, effective altruism is associated strongly with donating a lot of money and donating effectively, and less strongly with impact in career choice, veganism, and existential risk. Of course, I'm not saying that we should drop some of these memes completely. But maybe EA should disconnect a bit more and compartmentalize -- leaving AI risk to MIRI, for example, and not talking about it much on, say, 80,000 Hours. And maybe instead of asking people to both give more AND give more effectively, we could focus more exclusively on asking people to donate what they already give, but more effectively.
Evaluate the above with more research. While I think the evidence base behind this is decent, it's not great and I haven't spent that much time developing it. I think we should look into this more with a review of the relevant literature and some careful, targeted, market research on the individual beliefs within effective altruism (how weird are they?) and how they should be connected or left disconnected. Maybe this has already been done some?
-
Also discussed on the EA Forum and EA Facebook group.
First(?) Rationalist elected to state government
Has no one else mentioned this on LW yet?
Elizabeth Edwards has been elected as a New Hampshire State Rep, self-identifies as a Rationalist and explicitly mentions Less Wrong in her first post-election blog post.
Sorry if this is a repost
Anthropic signature: strange anti-correlations
Imagine that the only way that civilization could be destroyed was by a large pandemic that occurred at the same time as a large recession, so that governments and other organisations were too weakened to address the pandemic properly.
Then if we looked at the past, as observers in a non-destroyed civilization, what would we expect to see? We could see years with no pandemics or no recessions; we could see mild pandemics, mild recessions, or combinations of the two; we could see large pandemics with no or mild recessions; or we could see large recessions with no or mild pandemics. We wouldn't see large pandemics combined with large recessions, as that would have caused us to never come into existence. That is the only combination ruled out by anthropic effects.
Assume that pandemics and recessions are independent (at least, in any given year) in terms of "objective" (non-anthropic) probabilities. Then what would we see? We would see that pandemics and recessions appear to be independent when either of them is of small intensity. But as the intensity rose, they would start to become anti-correlated, with a large version of one completely precluding a large version of the other.
The effect is even clearer if we have a probabilistic relation between pandemics, recessions and extinction (something like: extinction risk proportional to product of recession size times pandemic size). Then we would see an anti-correlation rising smoothly with intensity.
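This is easy to check with a toy simulation. The sketch below is illustrative only: the uniform intensity distributions, the 0.95 destruction threshold, and the intensity cutoffs are my assumptions, not anything from the argument above. Intensities are drawn independently each year, histories in which both exceed the threshold are discarded (no observers survive to see them), and the correlation is then measured at increasing cutoffs.

```python
import random

def simulate_years(n=1_000_000, threshold=0.95):
    """Draw independent pandemic/recession intensities per year; keep
    only the years compatible with observers existing afterwards."""
    surviving = []
    for _ in range(n):
        pandemic = random.random()    # objective intensities: independent
        recession = random.random()
        if pandemic > threshold and recession > threshold:
            continue                  # civilization destroyed: never observed
        surviving.append((pandemic, recession))
    return surviving

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

years = simulate_years()
for cutoff in (0.0, 0.5, 0.8, 0.9):
    subset = [(p, r) for p, r in years if p > cutoff and r > cutoff]
    print(f"both intensities > {cutoff}: correlation = {correlation(subset):+.3f}")
```

Without the survival filter, every printed correlation would be roughly zero, since conditioning independent variables on a product region preserves independence. With the filter, the correlation is near zero at low cutoffs and grows increasingly negative as the cutoff approaches the destruction threshold -- the signature described here.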
Thus one way of looking for anthropic effects in humanity's past is to look for different classes of incidents that are uncorrelated at small magnitudes, and anti-correlated at large magnitudes. More generally, look for different classes of incidents where the correlation changes with magnitude, without any obvious reason. That might be the signature of an anthropic disaster we missed -- or rather, that missed us.
Ottawa meetup: Applied Rationality Series, Value of Information
The sixth talk in the Ottawa Applied Rationality series will take place on Tuesday, May 20th at 7:00 pm, at the Canal Royal Oak in Ottawa, Canada. These events are run through the Ottawa Skeptics meetup group. See link here: http://www.meetup.com/Ottawa-Skeptics/events/181263842/
The usual format consists of an approximately 15 minute talk on the topic of the day, followed by semi-structured exercises, followed by beers and unstructured discussion. Previous topics have included "Rational Debating", "Bayes", "Calibration", "Rationality Dojo" (a review session), and "Goal Factoring."
If you are not from Ottawa, but are interested in running meetups in your area, send me a PM and I can give you the PowerPoints that I use for these talks.
[Sequence announcement] Introduction to Mechanism Design
Mechanism design is the theory of how to construct institutions for strategic agents, spanning applications like voting systems, school admissions, regulation of monopolists, and auction design. Think of it as the engineering side of game theory, building algorithms for strategic agents. While it doesn't have much to say about rationality directly, mechanism design provides tools and results for anyone interested in world optimization.
In this sequence, I'll touch on
- The basic mechanism design framework, including the revelation principle and incentive compatibility.
- The Gibbard-Satterthwaite impossibility theorem for strategyproof implementation (a close analogue of Arrow's Theorem), and restricted domains like single-peaked or quasilinear preferences where we do have positive results.
- The power and limitations of Vickrey-Clarke-Groves mechanisms for efficiently allocating goods, generalizing Vickrey's second-price auction.
- Characterizations of incentive-compatible mechanisms and the revenue equivalence theorem.
- Profit-maximizing auctions.
- The Myerson-Satterthwaite impossibility for bilateral trade.
- Two-sided matching markets à la Gale and Shapley, school choice, and kidney exchange.
As the list above suggests, this sequence is going to be semi-technical, but my foremost goal is to convey the intuition behind these results. Since mechanism design builds on game theory, take a look at Yvain's Game Theory Intro if you want to brush up.
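To give a taste of the material, here is a minimal sketch of Vickrey's second-price sealed-bid auction, the simplest of the incentive-compatible mechanisms listed above. The bidder names and valuations are made up for illustration, and it assumes at least two bidders and no ties.

```python
def second_price_auction(bids):
    """bids: dict of bidder name -> sealed bid.
    Returns (winner, price): the highest bidder wins
    but pays only the second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

# Alice values the item at 10. Whether she bids 10 (truthfully) or
# shades her bid, she pays 7 whenever she wins -- her bid determines
# *whether* she wins, not what she pays. That is why truthful bidding
# is a dominant strategy here.
print(second_price_auction({"alice": 10, "bob": 7, "carol": 5}))
# -> ('alice', 7)
```

This separation between the decision rule (who wins) and the payment rule (what they pay) is the core idea that VCG mechanisms generalize.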
Various resources:
- For further introduction, you can start with the popular or the more scholarly survey of mechanism design from the 2007 Nobel Memorial Prize in Economics.
- Jeff Ely has lecture notes and short videos to accompany an undergraduate class in microeconomic theory from the perspective of mechanism design.
- The textbook A Toolbox for Economic Design by Dimitrios Diamantaras is very accessible and comprehensive if you can get ahold of a copy.
- Tilman Börgers has a draft textbook intended for graduate students.
- Chapters 9-16 of Algorithmic Game Theory and chapters 10-11 of Multiagent Systems cover various topics in mechanism design from the perspective of computer scientists.
- Video lectures introducing market design and computational aspects of mechanism design.
I plan on following up on this sequence with another focusing on group rationality and information aggregation, surveying scoring rules and prediction markets among other topics.
Suggestions and comments are very welcome.
Rational Evangelism
Not "rationality evangelism", which CFAR is doing already if I understand their mission. "Rational evangelism", which is what CFAR would do if they were Catholic missionaries.
If you believe in Hell, as many people sincerely do, it is hard for Hell not to seem like the world's most important problem.
To some extent, proselytizing religions treat Hell with respect--they spend billions of dollars trying to save sinners, and the most devout often spend their lives preaching the Gospel (insert non-Christian variant).
But is Hell given enough respect? Every group meets with mixed success in solving its problems, but the problem of eternal suffering leaves little room for "mixed success". Even the most powerful religions are stuck in patterns that make the work of salvation very difficult indeed. And some seem willing to reduce their evangelism* for reasons that aren't especially convincing in the face of "nonbelievers are quite possibly going to burn, or at least be outside the presence of God, forever".
What if you were a rationalist who viewed Hell like certain Less Wrongers view the Singularity? (This belief would be hard to reconcile with rationalism generally, but for the sake of argument...) How would you tackle the problem of eternal suffering with the same passion we spend on probability theory and friendly AI?
I wrote a long thought experiment to better define the problem, involving a religion called "Normomism", but it was awkward. There are plenty of real religions whose members believe in Hell, or at least in a Heaven that many people aren't going to (also a terrible loss). Some have a stated mission of saving as many people as possible from a bad afterlife.
So where are they falling short?
If you were the Pope, or the Caliph, or the supreme dictator of some smaller religion, what tactics would you use to convince more people to do and believe exactly the things that would save them--whether that's faith or good works? Why haven't these tactics been tried already? Is there really much room for improvement?
Spreading the Word
This post isn't a dig at believers, though it does seem like many people don't act on their sincere belief in an eternal afterlife. (I don't mind when people try to convert me--at least they care!)
My main point: It's worth considering that people who believe in Very Bad Future Outcomes have been working to prevent those outcomes for thousands of years, and have stumbled upon formidable techniques for doing so.
I've thought for a while about rational evangelism, and it's surprisingly hard to come up with ways that people like Rick Warren and Jerry Lovett could improve their methodology. (Read Lovett's "contact me" paragraph for the part that really impressed me.)
We speak often of borrowing from religion, but these conversations mostly touch on social bonding, rather than what it means to spread ideas so important that the fate of the human race depends on them. ("Raising the Sanity Waterline" is a great start, but those ideas haven't been the focus of many recent posts.)
I'm not saying this is a perfect comparison. The rationalist war for the future won't be fought one soul at a time, and we won't save anyone with a deathbed confession.
But cryogenic freezing does exist. And on a more collective level, convincing the right people that the far future matters could be a coup on the level of Constantine's conversion.
CFAR is doing good things in the direction of rationality evangelism. How can the rest of us do more?
Living Like We Mean It
This movement is going places. But I fear we may spend too much time (at least proportionally) arguing amongst ourselves, when bringing others into the fold is a key piece of the puzzle. And if we’d like to expand the flock (or, more appropriately, the herd of cats), what can we learn from history’s most persuasive organizations?
I often pass up my chance to talk to people about something as simple as Givewell, let alone existential risk, and it's been a long time since I last name-dropped a Less Wrong technique. I don't think I'm alone in this.**
I've met plenty of Christians who exude the same optimism and conviviality as a Rick Warren or a Ned Flanders. These kinds of people are a major boon for the Christian religion. Even if most of us are introverts, what's stopping us from teaching ourselves to live the same way?
Still, I'm new here, and I could be wrong. What do you think?
* Text editor's giving me some trouble, but the link is here: http://www.relevantmagazine.com/god/practical-faith/evangelism-interfaith-world
** Peter Boghossian's Manual for Creating Atheists has lots to say about using rationality techniques in the course of daily life, and is well worth reading, though the author can be an asshole sometimes.
Meetup : Yale: Initial Meetup
Hi. If anyone who goes to Yale is interested in meeting up, I'll be in Bass Cafe on Sunday, February 16, from 2 to 4 pm. I'll bring my copy of Good and Real for identification purposes.
To capture anti-death intuitions, include memory in utilitarianism
EDIT: Mestroyer was the first one to find a bug that breaks this idea. Only took a couple of hours, that's ethics for you. :)
In the last Stupid Questions Thread, solipsist asked
Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?
People raised valid points, such as ones about murder having generally bad effects on society, but most people probably have the intuition that murdering someone is bad even if the victim was a hermit whose death was never found out by anyone. It just occurred to me that the way to formalize this intuition would also solve more general problems with the way that the utility functions in utilitarianism (which I'll shorten to UFU from now on) behave.
Consider these commonly held intuitions (a sketch of why the standard formulation misses them follows the list):
- If a person is painlessly murdered and a new (equally happy) person is instantly created in their place, this is worse than if there was a single person who lived for the whole time.
- If a living person X is painlessly murdered at time T, then this is worse than if X's parents had simply chosen not to have a child at time T-20, even though both acts would have resulted in X not existing at time T+1.
- If someone is physically dead, but not information-theoretically dead and a close enough replica of them can be constructed and brought back, then bringing them back is better than creating an entirely new person.
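To make the difficulty concrete, here is one way to write down the standard view. This is my own notation, offered as a sketch rather than anything from the original discussion:

```latex
% Standard aggregate utilitarianism: sum momentary welfare over
% whoever happens to exist at each moment.
U = \sum_{t} \sum_{i \in P_t} w_i(t)
% P_t: the set of persons alive at time t
% w_i(t): person i's momentary welfare at time t
```

Because U depends only on how much welfare exists at each moment, not on the identity or history of the people who have it, it counts replacing someone with an equally happy new person as no loss at all -- exactly what the intuitions above reject. Any fix in the spirit of this post's title has to make U depend on persons' histories (their memories), not just on momentary welfare.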
The mechanics of my recent productivity
A decade ago, I decided to save the world. I was fourteen, and the world certainly wasn't going to save itself.
I fumbled around for nine years; it's surprising how long one can fumble around. I somehow managed to miss the whole idea of existential risk and the whole concept of an intelligence explosion. I had plenty of other ideas in my head, and while I spent a lot of time honing them, I wasn't particularly looking for new ones.
A year ago, I finally read the LessWrong sequences. My road here was roundabout, almost comical. It took me a while to come to terms with the implications of what I'd read.
Five months ago, after resolving a few internal crises, I started donating to MIRI and studying math.
Three weeks ago, I attended the December MIRI workshop on logic, probability, and reflection. I was invited to visit for the first two days and stay longer if things went well. They did: I was able to make some meaningful contributions.
On Saturday I was invited to become a MIRI research associate.
It's been an exciting year, to say the least.
(ETA: Note that being a research associate gives me access to a number of MIRI resources, but is not a full time position. I will be doing FAI research, but it will be done outside of work. I will be retaining my day job and continuing to donate.)
(ETA: As of 1 April 2014, I am a full-time researcher at MIRI.)
(ETA: As of 1 June 2015, I am now the executive director of MIRI.)
To commemorate the occasion — and because a few people have expressed interest in my efforts — I'll be writing a series of posts about my experience, about what I did and how I did it. This is the first post in the series.