LINK: Quora brainstorms strategies for containing AI risk
In case you haven't seen it yet, Quora hosted an interesting discussion of different strategies for containing / mitigating AI risk, boosted by a $500 prize for the best answer. It attracted sci-fi author David Brin, U. Michigan professor Igor Markov, and several people with PhDs in machine learning, neuroscience, or artificial intelligence. Most people from LessWrong will disagree with most of the answers, but I think the article is useful as a quick overview of the variety of opinions that ordinary smart people have about AI risk.
Help Build a Landing Page for Existential Risk?
The Big Orange Donate Button
Traditional charities, like Oxfam, Greenpeace, and Amnesty International, almost all have a big orange button marked "Donate" right on the very first page that loads when you go to their websites. The landing page for a major charity usually also has vivid graphics and some short, easy-to-read text that tells you about an easy-to-understand project that the charity is currently working on.
I assume that part of why charities have converged on this design is that potential donors often have short attention spans, and that one of the best ways to maximize donations is to make it as easy as possible for casual visitors to the website to (a) confirm that they approve of the charity's work, and (b) actually make a donation. The more obstacles you put between google-searching on the name of a charity and the 'donate' button, the more people will get bored or distracted, and the fewer donations you'll get.
Unfortunately, there doesn't seem to be any such streamlined interface for people who want to learn about existential risks and maybe donate some money to help prevent them. The website on existential risk run by the Future of Humanity Institute reads more like a syllabus or a CV than like an advertisement or a brochure -- there's nowhere to donate money; it's just a bunch of citations. The Less Wrong wiki page on x-risk is more concerned with defining and analyzing existential risks than it is with explaining, in simple concrete language, what problems currently threaten to wipe out humanity. The Center for the Study of Existential Risk has a landing page that focuses on a video of a TED talk that goes on for a full minute before mentioning any specific existential risks, and if you want to make a donation you have to click through three separate links and then fill out a survey. Heck, even the Skoll Global Threats Fund, which you would think would be, you know, designed to raise funds to combat global threats, has neither a donate button nor (so far as I can tell) a link to a donation page. These websites are *not* optimized for encouraging casual visitors to learn basic facts or make a donation.
A Landing Page for Casual Donors
That's fine with me; I imagine the leading x-risk websites are accomplishing other purposes that their owners feel are more important than catering to casual visitors -- but there ought to be at least one website that's meant for your buddy from high school who doesn't know or care about effective altruism, who expressed concern one night over a couple of beers that the world might be in some trouble, and who had a brief urge to do something about it. I want to help capture your buddy's urge to take action.
To that end, I've registered x-risk.com as a domain name, and I'm building a very simple website that will feature roughly 100 words of text about 10 of the most important existential risks, together with a photo or graphic that illustrates each risk, a "donate" button that takes you straight to a webpage that lets you donate to an organization working to prevent the risk, and a "learn more" button that takes you to a website with more detailed info on the risk. I will pay to host the website for one year, and if the website generates significant traffic, then I'll take up a collection to keep it going indefinitely.
Blurbs, Photos, and URLs
I would like your help generating content for the website -- if you are willing to write a 100-word blurb, if you own a useful photo (or can create one, or know of one in the public domain), or if you have the URL handy for a webpage that lets you donate money to mitigating or preventing a specific x-risk, please post it in the comments! I can, in theory, do all of that work myself, but I would prefer to make this more of a community project, and there is a significant risk that I will get bored and give up if I have to literally do it all myself.
Important: to avoid mind-killing debates, please do NOT contribute opinions about which risks are the most important unless you are ALSO contributing a blurb, photo, or URL in the same comment. Let's get the website built and launched first, and then we can always edit some of the pages later if there's a consensus in favor of including an additional x-risk. If you see someone sharing an opinion about the relative priority of risks, and that opinion isn't right next to a useful resource, please vote that comment down until it disappears.
Thank you very much for your help! I hope to see you all in the future. :-)

Reminder Memes
EDIT: Apologies to anyone who wasted time with this; I did not intend it to go live. I left a draft post open on a computer that ran an automatic system update; it must have been posted when the window was closed.
LINK: Human Bio-engineering and Coherent Extrapolated Volition
This article has some interesting commentary on how humans might modify themselves to combat global warming, including the use of drugs that would increase empathy, increase willpower, or increase aversion to meat. The interviewer points out that such techniques could involve implanting non-native beliefs in people's minds, and the researcher responds that any such beliefs would be essentially built up out of the person's existing desires and wishes -- the analysis is remarkably similar to the analysis Eliezer gives in explaining Coherent Extrapolated Volition.
No hate mail about how meat does or doesn't cause global warming, please -- the interesting bit is the analysis of CEV, not the analysis of climate change.
Hearsay, Double Hearsay, and Bayesian Updates
Application of: How Much Evidence Does It Take?
(trigger warning: some description of domestic violence)
Summary: I discuss the strengths and weaknesses of one way that the American legal system tries to assess and cope with the unreliability of certain kinds of evidence. After explaining the relevant rules with references to a few recent famous cases and a non-notable case that I'm working on now, I briefly consider whether this part of the evidence code is above or below the sanity waterline, and suggest an incremental improvement.
List of Donors, Fall 2011
This discussion-level article is a handy place for people to share info about their recent donations, especially donations to unusually efficient or effective charities. Feel free to post your one-time donations, your recurring donations, and/or any interesting changes in your donation habits. Gratitude and appreciation for other people's donations is also very welcome.
Should Rationalists Tip at Restaurants?
Related to: Robin Hanson on Freakonomics
DISCLAIMER: This is an exploration of a theoretical economics problem. This is not advice. I have not made up my mind. Please do not cite this post as support for your plans to indulge in mayhem or selfishness.
Meetup : San Francisco & Tortuga Go Surfing
Discussion article for the meetup : San Francisco & Tortuga Go Surfing
Low-key, afternoon fun at the beach for beginners & amateurs. I'll be offering rides from central San Francisco down to Pacifica, where the surf shop rents wetsuits & boogieboards or surfboards for about $20 total.
The surf shop has a free, suburban-style parking lot in a plaza with several good places to get lunch, and the back door opens right onto the beach. There are great views of the waves and the surrounding hills. Surf is usually light (2-6 ft waves), if a little choppy.
I'll be leaving San Francisco at 1 pm, arriving at Pacifica by 2 pm, and leaving around 6 pm. Everyone is welcome to join me for whatever part of that time they like; if you need a ride, please RSVP as soon as possible by e-mailing jasongreenlowe@gmail.com or texting (954) 464-3040.
This event will be repeated several more times over the summer, but we'll all get better as time goes on, so join us now while we're still just as awkward as you are!
Discussion article for the meetup : San Francisco & Tortuga Go Surfing
An Outside View on Less Wrong's Advice
Related to: Intellectual Hipsters, X-Rationality: Not So Great, The Importance of Self-Doubt, That Other Kind of Status
This is a scheduled upgrade of a post that I have been working on in the discussion section. Thanks to all the commenters there, and special thanks to atucker, Gabriel, Jonathan_Graehl, kpreid, XiXiDu, and Yvain for helping me express myself more clearly.
-------------------
For the most part, I am excited about growing as a rationalist. I attended the Berkeley minicamp; I play with Anki cards and Wits & Wagers; I use Google Scholar and spreadsheets to try to predict the consequences of my actions.
There is a part of me, though, that bristles at some of the rationalist 'culture' on Less Wrong, for lack of a better word. The advice, the tone, the vibe 'feels' wrong, somehow. If you forced me to use more precise language, I might say that, for several years now, I have kept a variety of procedural heuristics running in the background that help me ferret out bullshit, partisanship, wishful thinking, and other unsound debating tactics -- and important content on this website manages to trigger most of them. Yvain suggests that something about the rapid spread of positive affect not obviously tied to any concrete accomplishments may be stimulating a sort of anti-viral memetic defense system.
Note that I am *not* claiming that Less Wrong is a cult. Nobody who runs a cult has such a good sense of humor about it; and anyone who did would be so dangerous that it wouldn't matter what I said. No, if anything, "cultishness" is a straw man. Eliezer will not make you abandon your friends and family, run away to a far-off mountain retreat, and drink poison Kool-Aid. But he *might* convince you to believe some very silly things and take some very silly actions.
Therefore, in the spirit of John Stuart Mill, I am writing a one-article attack on much of what we seem to hold dear. If there is anything true in what I'm saying, you will want to read it, so that you can alter your commitments accordingly. Even if, as seems more likely, you don't believe a word I say, reading a semi-intelligent attack on your values and mentally responding to it will probably help you understand more clearly what it is that you do believe.
Meetup : Marin & SF Less Wrong Make Things Go Boom
Discussion article for the meetup : Marin & SF Less Wrong Make Things Go Boom
Happy Independence Day! San Francisco's Less Wrong group is having a meetup tomorrow, Tuesday July 5th, at our usual spot at Mel's Diner, starting at 7 pm sharp. This week, we are being joined by the better part of Marin County's 6 LW'ers, and I'd really like to show them a good time. I will bring fireworks to dinner that we can all set off together in as public a place as people have risk-tolerance for. In honor of the national holiday, this week's discussion theme will be "When (if ever) does it make sense to sacrifice yourself for a greater cause?" Note that we will try hard not to debate any -particular- cause for too long; the goal is not so much to find The Answer To Politics as to find general heuristics that can help us evaluate political claims.
UPDATE -- The meetup attracted 4 people, and we all had a nice time. Planning for another joint SF-Marin meetup is in the works, this time hopefully with a bit more notice to everyone so that more of those who want to can attend. Photos of BOOM are available here.
Discussion article for the meetup : Marin & SF Less Wrong Make Things Go Boom