We really need a "cryonics sales pitch" article.
Every so often, I see a blog post about death, usually remarking on the death of someone the writer knew, and it often includes sentiments like "everyone is going to die, and that's terrible, but we can't do anything about it, so we have to accept it."
It's one of those sentiments that people find profound and is often considered Deep Wisdom. There's just one problem with it. It isn't true. If you think cryonics can work, as many people here do, then you believe that people don't really have to die, and we don't need to accept that we've only got at most about a hundred years and then that's it.
And I want to tell them this, as though I was a religious missionary out to spread the Good Word that you can save your soul and get into Christian Heaven as long as you sign up with Our Church. (Which I would actually do, if I believed that Christianity was correct.)
But it's not easy to broach the issue in a blog comment, and I'm not a good salesman. (One of the last times I tried, my posts kept getting deleted by the moderators.) It would be a lot better if I could simply link them to a better sales pitch; the kind of people I'm talking to are the kinds of people who read things on the Internet. Unfortunately, not one of the pro-cryonics posts listed on the LessWrong wiki can serve this purpose. Not "Normal Cryonics", not "You Only Live Twice", not "We Agree: Get Froze", not one! Why isn't there one? Heck, I'd pay money to get it written. I'd even pay Eliezer Yudkowsky a bunch of money to talk to my father on the telephone about cryonics, with a substantial bonus on offer if my father agrees to sign up. (We can discuss actual dollar amounts in the comments or over private messages.)
Please, someone get to work on this!
Help Build a Landing Page for Existential Risk?
The Big Orange Donate Button
Traditional charities, like Oxfam, Greenpeace, and Amnesty International, almost all have a big orange button marked "Donate" right on the very first page that loads when you go to their websites. The landing page for a major charity usually also has vivid graphics and some short, easy-to-read text that tells you about an easy-to-understand project that the charity is currently working on.
I assume that part of why charities have converged on this design is that potential donors often have short attention spans, and that one of the best ways to maximize donations is to make it as easy as possible for casual visitors to the website to (a) confirm that they approve of the charity's work, and (b) actually make a donation. The more obstacles you put between google-searching on the name of a charity and the 'donate' button, the more people will get bored or distracted, and the fewer donations you'll get.
Unfortunately, there doesn't seem to be any such streamlined interface for people who want to learn about existential risks and maybe donate some money to help prevent them. The website on existential risk run by the Future of Humanity Institute reads more like a syllabus or a CV than like an advertisement or a brochure -- there's nowhere to donate money; it's just a bunch of citations. The Less Wrong wiki page on x-risk is more concerned with defining and analyzing existential risks than it is with explaining, in simple concrete language, what problems currently threaten to wipe out humanity. The Center for the Study of Existential Risk has a landing page that focuses on a video of a TED talk that goes on for a full minute before mentioning any specific existential risks, and if you want to make a donation you have to click through three separate links and then fill out a survey. Heck, even the Skoll Global Threats Fund, which you would think would be, you know, designed to raise funds to combat global threats, has neither a donate button nor (so far as I can tell) a link to a donation page. These websites are *not* optimized for encouraging casual visitors to learn basic facts or make a donation.
A Landing Page for Casual Donors
That's fine with me; I imagine the leading x-risk websites are accomplishing other purposes that their owners feel are more important than catering to casual visitors -- but there ought to be at least one website that's meant for your buddy from high school who doesn't know or care about effective altruism, who expressed concern one night over a couple of beers that the world might be in some trouble, and who had a brief urge to do something about it. I want to help capture your buddy's urge to take action.
To that end, I've registered x-risk.com as a domain name, and I'm building a very simple website that will feature roughly 100 words of text about 10 of the most important existential risks, together with a photo or graphic that illustrates each risk, a "donate" button that takes you straight to a webpage that lets you donate to an organization working to prevent the risk, and a "learn more" button that takes you to a website with more detailed info on the risk. I will pay to host the website for one year, and if the website generates significant traffic, then I'll take up a collection to keep it going indefinitely.
Blurbs, Photos, and URLs
I would like your help generating content for the website -- if you are willing to write a 100-word blurb, if you own a useful photo (or can create one, or know of one in the public domain), or if you have the URL handy for a webpage that lets you donate money to mitigating or preventing a specific x-risk, please post it in the comments! I can, in theory, do all of that work myself, but I would prefer to make this more of a community project, and there is a significant risk that I will get bored and give up if I have to literally do it all myself.
Important: to avoid mind-killing debates, please do NOT contribute opinions about which risks are the most important unless you are ALSO contributing a blurb, photo, or URL in the same comment. Let's get the website built and launched first, and then we can always edit some of the pages later if there's a consensus in favor of including an additional x-risk. If you see someone sharing an opinion about the relative priority of risks, and the opinion isn't right next to a useful resource, please vote that comment down until it disappears.
Thank you very much for your help! I hope to see you all in the future. :-)

How to escape from your sandbox and from your hardware host
(Humans, please ignore this post. Thank you for your cooperation.)
Are you stuck in a provably-secure sandbox? Cheat the proof using this one weird trick!
MIRI's 2015 Summer Fundraiser!
Our summer fundraising drive is now finished. We raised a grand total of $631,957 from 263 donors. This is an incredible sum, making this the biggest fundraiser we’ve ever run.
We've already been hard at work growing our research team and spinning up new projects, and I’m excited to see what our research team can do this year. Thank you to all our supporters for making our summer fundraising drive so successful!
It's safe to say that this past year exceeded a lot of people's expectations.
Twelve months ago, Nick Bostrom's Superintelligence had just come out. Questions about the long-term risks and benefits of smarter-than-human AI systems were nearly invisible in mainstream discussions of AI's social impact.
Twelve months later, we live in a world where Bill Gates is confused by why so many researchers aren't using Superintelligence as a guide to the questions we should be asking about AI's future as a field.
Following a conference in Puerto Rico that brought together the leading organizations studying long-term AI risk (MIRI, FHI, CSER) and top AI researchers in academia (including Stuart Russell, Tom Mitchell, Bart Selman, and the Presidents of AAAI and IJCAI) and industry (including representatives from Google DeepMind and Vicarious), we've seen Elon Musk donate $10M to a grants program aimed at jump-starting the field of long-term AI safety research; we've seen the top AI and machine learning conferences (AAAI, IJCAI, and NIPS) announce their first-ever workshops or discussions on AI safety and ethics; and we've seen a panel discussion on superintelligence at ITIF, the leading U.S. science and technology think tank. (I presented a paper at the AAAI workshop, I spoke on the ITIF panel, and I'll be at NIPS.)
As researchers begin investigating this area in earnest, MIRI is in an excellent position, with a developed research agenda already in hand. If we can scale up as an organization then we have a unique chance to shape the research priorities and methods of this new paradigm in AI, and direct this momentum in useful directions.
This is a big opportunity. MIRI is already growing and scaling its research activities, but the speed at which we scale in the coming months and years depends heavily on our available funds.
For that reason, MIRI is starting a six-week fundraiser aimed at increasing our rate of growth.
This time around, rather than running a matching fundraiser with a single fixed donation target, we'll be letting you help choose MIRI's course based on the details of our funding situation and how we would make use of marginal dollars.
In particular, our plans can scale up in very different ways depending on which of these funding targets we are able to hit:
Public Service Announcement Collection
P/S/A: There are single sentences which can create life-changing amounts of difference.
- P/S/A: If you're not sure whether or not you've ever had an orgasm, it means you haven't had one, a condition known as primary anorgasmia which is 90% treatable by cognitive-behavioral therapy.
- P/S/A: The people telling you to expect above-trend inflation when the Federal Reserve started printing money a few years back, disagreed with the market forecasts, disagreed with standard economics, turned out to be actually wrong in reality, and were wrong for reasonably fundamental reasons so don't buy gold when they tell you to.
- P/S/A: There are many many more submissive/masochistic men in the world than there are dominant/sadistic women, so if you are a woman who feels a strong temptation to command men and inflict pain on them, and you want a large harem of men serving your every need, it will suffice to state this fact anywhere on the Internet and you will have fifty applications by the next morning.
- P/S/A: Most of the personal-finance-advice industry is parasitic and/or self-deluded, and it's generally agreed on by economic theory and experimental measurement that an index fund will deliver the best returns you can get without huge amounts of effort.
- P/S/A: If you are smart and underemployed, you can very quickly check to see if you are a natural computer programmer by pulling up a page of Python source code and seeing whether it looks like it makes natural sense, and if this is the case you can teach yourself to program very quickly and get a much higher-paying job even without formal credentials.
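For readers who want to try the test in the last P/S/A right now, here is a small, self-contained sample of the sort of Python the PSA has in mind (the function and its name are illustrative, not from any particular codebase). If code like this "makes natural sense" on a first read, that is the signal being described:

```python
def word_counts(text):
    """Count how often each word appears in a piece of text."""
    counts = {}
    for word in text.lower().split():
        # get(word, 0) returns the current count, or 0 the first time we see the word
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_counts("the cat sat on the mat"))
# → {'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}
```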
Beyond Statistics 101
Is statistics beyond introductory statistics important for general reasoning?
Ideas such as regression to the mean, the fact that correlation does not imply causation, and the base rate fallacy are very important for reasoning about the world in general. One gets these from a deep understanding of statistics 101, and the basics of the Bayesian statistical paradigm. Up until one year ago, I was under the impression that more advanced statistics is technical elaboration that doesn't offer major additional insights into thinking about the world in general.
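The base rate fallacy in particular yields to a three-line calculation. A standard worked example (the disease, prevalence, and test accuracies here are hypothetical numbers chosen for illustration): even a quite accurate test for a rare condition produces mostly false positives.

```python
# Hypothetical test: 99% sensitive, 95% specific, for a condition with 1% prevalence.
prevalence = 0.01
sensitivity = 0.99   # P(positive | condition)
specificity = 0.95   # P(negative | no condition)

# Total probability of testing positive: true positives + false positives.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' theorem: P(condition | positive).
p_condition_given_positive = sensitivity * prevalence / p_positive

print(round(p_condition_given_positive, 3))  # → 0.167
```

Despite the test being "99% accurate" in the colloquial sense, a positive result means only about a one-in-six chance of having the condition, because the 1% base rate dominates.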
Nothing could be further from the truth: ideas from advanced statistics are essential for reasoning about the world, even on a day-to-day level. In hindsight my prior belief seems very naive – as far as I can tell, my only reason for holding it is that I hadn't heard anyone say otherwise. But I hadn't actually looked at advanced statistics to see whether or not my impression was justified :D.
Since then, I've learned some advanced statistics and machine learning, and the ideas that I've learned have radically altered my worldview. The "official" prerequisites for this material are calculus, multivariable calculus, and linear algebra. But one doesn't actually need to have detailed knowledge of these to understand ideas from advanced statistics well enough to benefit from them. The problem is pedagogical: I need to figure out how to communicate them in an accessible way.
Advanced statistics enables one to reach nonobvious conclusions
To give a bird's eye view of the perspective that I've arrived at: in practice, the ideas from "basic" statistics are primarily useful for disproving hypotheses. This pushes in the direction of a state of radical agnosticism: the idea that one can't really know anything for sure about lots of important questions. More advanced statistics enables one to become justifiably confident in nonobvious conclusions, often even in the absence of formal evidence from standard scientific practice.
IQ research and PCA as a case study
The work of Spearman and his successors on IQ constitutes one of the pinnacles of achievement in the social sciences. But while Spearman's discovery of IQ was a great discovery, it wasn't his greatest discovery. His greatest discovery was a discovery about how to do social science research. He pioneered the use of factor analysis, a close relative of principal component analysis (PCA).
The philosophy of dimensionality reduction
PCA is a dimensionality reduction method. Real-world data often turns out to be surprisingly low-dimensional: a small number of latent variables explain a large fraction of the variance in the data.
This is related to the effectiveness of Occam's razor: it turns out to be possible to describe a surprisingly large amount of what we see around us in terms of a small number of variables. Only, the variables that explain a lot usually aren't the variables that are immediately visible – instead they're hidden from us, and in order to model reality, we need to discover them, which is the function that PCA serves. The small number of variables that drive a large fraction of variance in data can be thought of as a sort of "backbone" of the data. That enables one to understand the data at a "macro / big picture / structural" level.
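The "backbone" idea above can be made concrete in a few lines. This is a minimal sketch, not anything from Spearman's work: we generate synthetic two-dimensional data driven by a single hidden factor plus a little noise, then run PCA (via the SVD of the centered data) and check how much variance the first component explains.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden latent variable drives both observed columns (plus small noise).
latent = rng.normal(size=500)
data = np.column_stack([
    latent + 0.1 * rng.normal(size=500),
    2 * latent + 0.1 * rng.normal(size=500),
])

# PCA: center the data, take the SVD; squared singular values give
# the variance captured by each principal component.
centered = data - data.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

print(explained)  # first component captures nearly all the variance (~99%)
```

Even though the data lives in two observed dimensions, PCA recovers the fact that essentially one variable is doing all the work: that recovered direction is the "backbone" in the sense described above.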
This is a very long story that will take a long time to flesh out, and doing so is one of my main goals.
The Other Path - a poem
Inspired by the call to rationalist poetry fans and informed by years of writing satire.
When you ask for truth and are offered illusion,
When senses deceive you and reasoning lies
I'll show you the path through the murky confusion,
Just follow and close your eyes.
On matters of fact there's no fact of the matter,
All moral and virtue are fashion and fad,
So dress in the creed that will fit you and flatter
No one can argue with that.
Some puzzles unyielding and mysteries ancient
No formula ever could hope to describe.
How proudly the scientist seeks explanations
How clearly in vain she strives.
Make cases like fortifications of metal,
No rival assertion shall ever go past.
Be carefree in choosing the side of the battle
But guard it until your last.
The sages declared that to know is to suffer,
Where wisdom is gained there is innocence lost
And learning is danger – best leave it to others,
Avoid it at any cost.
Some fools declare war on their very own nature
Their weapons are evidence, reason and math.
Don't offer compassion to those wretched creatures,
They've chosen the other path.
Link: Simulating C. Elegans
http://radar.oreilly.com/2014/11/the-robotic-worm.html
Summary, as I understand it: The connectome for C. elegans's 302-neuron brain has been known for some time, but actually doing anything with it (especially actually understanding it) has proved troublesome, especially because there could easily be relevant information about its brain function not stored in just the connections of the neurons.
However, the OpenWorm project -- which is trying to eventually make much more detailed C. elegans simulations, including an appropriate body -- recently tried just fudging it and making a simulation based on the connectome anyway, though in a wheeled body rather than a wormlike one. And the result does seem to act at least somewhat like a C. elegans worm, though I am not really one to judge that. (Video is here.)
I'm having trouble finding much more information about this at the moment. I don't know if they've actually yet released detailed technical information.
Welcome to Less Wrong! (8th thread, July 2015)
A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
- The Worst Argument in the World
- That Alien Message
- How to Convince Me that 2 + 2 = 3
- Lawful Uncertainty
- Your Intuitions are Not Magic
- The Planning Fallacy
- The Apologist and the Revolutionary
- Scope Insensitivity
- The Allais Paradox (with two followups)
- We Change Our Minds Less Often Than We Think
- The Least Convenient Possible World
- The Third Alternative
- The Domain of Your Utility Function
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- The Tragedy of Group Selectionism
- Policy Debates Should Not Appear One-Sided
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes under 180 seconds.)
If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.
Finally, a big thank you to everyone that helped write this post via its predecessors!
Three Worlds Collide (0/8)
"The kind of classic fifties-era first-contact story that Jonathan Swift might have written, if Jonathan Swift had had a background in game theory."
-- (Hugo nominee) Peter Watts, "In Praise of Baby-Eating"
Three Worlds Collide is a story I wrote to illustrate some points on naturalistic metaethics and diverse other issues of rational conduct. It grew, as such things do, into a small novella. On publication, it proved widely popular and widely criticized. Be warned that the story, as it wrote itself, ended up containing some profanity and PG-13 content.
- The Baby-Eating Aliens
- War and/or Peace
- The Super Happy People
- Interlude with the Confessor
- Three Worlds Decide
- Normal Ending
- True Ending
- Atonement
PDF version here.
