We really need a "cryonics sales pitch" article.

10 CronoDAS 03 August 2015 10:42PM

Every so often, I see a blog post about death, usually remarking on the death of someone the writer knew, and it often includes sentiments like "everyone is going to die, and that's terrible, but we can't do anything about it, so we have to accept it."

It's one of those sentiments that people find profound and is often considered Deep Wisdom. There's just one problem with it. It isn't true. If you think cryonics can work, as many people here do, then you believe that people don't really have to die, and we don't need to accept that we've only got at most about a hundred years and then that's it.

And I want to tell them this, as though I was a religious missionary out to spread the Good Word that you can save your soul and get into Christian Heaven as long as you sign up with Our Church. (Which I would actually do, if I believed that Christianity was correct.)

But it's not easy to broach the issue in a blog comment, and I'm not a good salesman. (One of the last times I tried, my posts kept getting deleted by the moderators.) It would be a lot better if I could simply link them to a better sales pitch; the kind of people I'm talking to are the kinds of people who read things on the Internet. Unfortunately, not one of the pro-cryonics posts listed on the LessWrong wiki can serve this purpose. Not "Normal Cryonics", not "You Only Live Twice", not "We Agree: Get Froze", not one! Why isn't there one? Heck, I'd pay money to get it written. I'd even pay Eliezer Yudkowsky a bunch of money to talk to my father on the telephone about cryonics, with a substantial bonus on offer if my father agrees to sign up. (We can discuss actual dollar amounts in the comments or over private messages.)

Please, someone get to work on this!

Help Build a Landing Page for Existential Risk?

12 Mass_Driver 30 July 2015 06:03AM

The Big Orange Donate Button

Traditional charities, like Oxfam, Greenpeace, and Amnesty International, almost all have a big orange button marked "Donate" right on the very first page that loads when you go to their websites. The landing page for a major charity usually also has vivid graphics and some short, easy-to-read text that tells you about an easy-to-understand project that the charity is currently working on.

I assume that part of why charities have converged on this design is that potential donors often have short attention spans, and that one of the best ways to maximize donations is to make it as easy as possible for casual visitors to the website to (a) confirm that they approve of the charity's work, and (b) actually make a donation. The more obstacles you put between google-searching on the name of a charity and the 'donate' button, the more people will get bored or distracted, and the fewer donations you'll get.

Unfortunately, there doesn't seem to be any such streamlined interface for people who want to learn about existential risks and maybe donate some money to help prevent them. The website on existential risk run by the Future of Humanity Institute reads more like a syllabus or a CV than like an advertisement or a brochure -- there's nowhere to donate money; it's just a bunch of citations. The Less Wrong wiki page on x-risk is more concerned with defining and analyzing existential risks than it is with explaining, in simple concrete language, what problems currently threaten to wipe out humanity. The Center for the Study of Existential Risk has a landing page that focuses on a video of a TED talk that goes on for a full minute before mentioning any specific existential risks, and if you want to make a donation you have to click through three separate links and then fill out a survey. Heck, even the Skoll Global Threats Fund, which you would think would be, you know, designed to raise funds to combat global threats, has neither a donate button nor (so far as I can tell) a link to a donation page. These websites are *not* optimized for encouraging casual visitors to learn basic facts or make a donation.

A Landing Page for Casual Donors

That's fine with me; I imagine the leading x-risk websites are accomplishing other purposes that their owners feel are more important than catering to casual visitors -- but there ought to be at least one website that's meant for your buddy from high school who doesn't know or care about effective altruism, who expressed concern one night over a couple of beers that the world might be in some trouble, and who had a brief urge to do something about it. I want to help capture your buddy's urge to take action.

To that end, I've registered x-risk.com as a domain name, and I'm building a very simple website that will feature roughly 100 words of text about 10 of the most important existential risks, together with a photo or graphic that illustrates each risk, a "donate" button that takes you straight to a webpage that lets you donate to an organization working to prevent the risk, and a "learn more" button that takes you to a website with more detailed info on the risk. I will pay to host the website for one year, and if the website generates significant traffic, then I'll take up a collection to keep it going indefinitely.

Blurbs, Photos, and URLs

I would like your help generating content for the website -- if you are willing to write a 100-word blurb, if you own a useful photo (or can create one, or know of one in the public domain), or if you have the URL handy for a webpage that lets you donate money to mitigating or preventing a specific x-risk, please post it in the comments! I can, in theory, do all of that work myself, but I would prefer to make this more of a community project, and there is a significant risk that I will get bored and give up if I have to literally do it all myself.

Important: to avoid mind-killing debates, please do NOT contribute opinions about which risks are the most important unless you are ALSO contributing a blurb, photo, or URL in the same comment. Let's get the website built and launched first, and then we can always edit some of the pages later if there's a consensus in favor of including an additional x-risk. If you see someone sharing an opinion about the relative priority of risks and the opinion isn't right next to a useful resource, please vote that comment down until it disappears.

Thank you very much for your help! I hope to see you all in the future. :-)

 

How to escape from your sandbox and from your hardware host

28 PhilGoetz 31 July 2015 05:26PM

(Humans, please ignore this post.  Thank you for your cooperation.)

Are you stuck in a provably-secure sandbox?  Cheat the proof using this one weird trick!


MIRI's 2015 Summer Fundraiser!

42 So8res 19 August 2015 12:27AM

Our summer fundraising drive is now finished. We raised a grand total of $631,957 from 263 donors. This is an incredible sum, making this the biggest fundraiser we’ve ever run.

We've already been hard at work growing our research team and spinning up new projects, and I’m excited to see what our research team can do this year. Thank you to all our supporters for making our summer fundraising drive so successful!


It's safe to say that this past year exceeded a lot of people's expectations.

Twelve months ago, Nick Bostrom's Superintelligence had just come out. Questions about the long-term risks and benefits of smarter-than-human AI systems were nearly invisible in mainstream discussions of AI's social impact.

Twelve months later, we live in a world where Bill Gates is confused by why so many researchers aren't using Superintelligence as a guide to the questions we should be asking about AI's future as a field.

Following a conference in Puerto Rico that brought together the leading organizations studying long-term AI risk (MIRI, FHI, CSER) and top AI researchers in academia (including Stuart Russell, Tom Mitchell, Bart Selman, and the Presidents of AAAI and IJCAI) and industry (including representatives from Google DeepMind and Vicarious), we've seen Elon Musk donate $10M to a grants program aimed at jump-starting the field of long-term AI safety research; we've seen the top AI and machine learning conferences (AAAI, IJCAI, and NIPS) announce their first-ever workshops or discussions on AI safety and ethics; and we've seen a panel discussion on superintelligence at ITIF, the leading U.S. science and technology think tank. (I presented a paper at the AAAI workshop, I spoke on the ITIF panel, and I'll be at NIPS.)

As researchers begin investigating this area in earnest, MIRI is in an excellent position, with a developed research agenda already in hand. If we can scale up as an organization then we have a unique chance to shape the research priorities and methods of this new paradigm in AI, and direct this momentum in useful directions.

This is a big opportunity. MIRI is already growing and scaling its research activities, but the speed at which we scale in the coming months and years depends heavily on our available funds.

For that reason, MIRI is starting a six-week fundraiser aimed at increasing our rate of growth.

 


 

This time around, rather than running a matching fundraiser with a single fixed donation target, we'll be letting you help choose MIRI's course based on the details of our funding situation and how we would make use of marginal dollars.

In particular, our plans can scale up in very different ways depending on which of these funding targets we are able to hit:


Public Service Announcement Collection

37 Eliezer_Yudkowsky 27 June 2013 05:20PM

P/S/A:  There are single sentences which can create life-changing amounts of difference.

  • P/S/A:  If you're not sure whether or not you've ever had an orgasm, it means you haven't had one, a condition known as primary anorgasmia which is 90% treatable by cognitive-behavioral therapy.
  • P/S/A:  The people telling you to expect above-trend inflation when the Federal Reserve started printing money a few years back, disagreed with the market forecasts, disagreed with standard economics, turned out to be actually wrong in reality, and were wrong for reasonably fundamental reasons so don't buy gold when they tell you to.
  • P/S/A:  There are many many more submissive/masochistic men in the world than there are dominant/sadistic women, so if you are a woman who feels a strong temptation to command men and inflict pain on them, and you want a large harem of men serving your every need, it will suffice to state this fact anywhere on the Internet and you will have fifty applications by the next morning.
  • P/S/A:  Most of the personal-finance-advice industry is parasitic and/or self-deluded, and it's generally agreed on by economic theory and experimental measurement that an index fund will deliver the best returns you can get without huge amounts of effort.
  • P/S/A:  If you are smart and underemployed, you can very quickly check to see if you are a natural computer programmer by pulling up a page of Python source code and seeing whether it looks like it makes natural sense, and if this is the case you can teach yourself to program very quickly and get a much higher-paying job even without formal credentials.
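(To make the last P/S/A concrete, here is a short sample of my own devising, not from the original list, of the kind of Python one might pull up for this test. Someone with natural programmer instincts will usually be able to guess what it does before running it.)

```python
def most_common_word(text):
    """Return the word that appears most often in `text` (ties broken arbitrarily)."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return max(counts, key=counts.get)

print(most_common_word("the quick brown fox jumps over the lazy dog"))  # prints "the"
```

If reading that felt like reading slightly stilted English rather than hieroglyphics, the test has arguably come out positive.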

 

Beyond Statistics 101

19 JonahSinick 26 June 2015 10:24AM

Is statistics beyond introductory statistics important for general reasoning?

Ideas such as regression to the mean, the fact that correlation does not imply causation, and the base rate fallacy are very important for reasoning about the world in general. One gets these from a deep understanding of statistics 101 and the basics of the Bayesian statistical paradigm. Up until one year ago, I was under the impression that more advanced statistics is technical elaboration that doesn't offer major additional insights into thinking about the world in general.

Nothing could be further from the truth: ideas from advanced statistics are essential for reasoning about the world, even on a day-to-day level. In hindsight my prior belief seems very naive – as far as I can tell, my only reason for holding it is that I hadn't heard anyone say otherwise. But I hadn't actually looked into advanced statistics to see whether or not my impression was justified :D.

Since then, I've learned some advanced statistics and machine learning, and the ideas that I've learned have radically altered my worldview. The "official" prerequisites for this material are calculus, multivariable calculus, and linear algebra. But one doesn't actually need detailed knowledge of these to understand ideas from advanced statistics well enough to benefit from them. The problem is pedagogical: I need to figure out how to communicate them in an accessible way.

Advanced statistics enables one to reach nonobvious conclusions

To give a bird's eye view of the perspective that I've arrived at: in practice, the ideas from "basic" statistics are primarily useful for disproving hypotheses. This pushes in the direction of a state of radical agnosticism: the idea that one can't really know anything for sure about lots of important questions. More advanced statistics enables one to become justifiably confident in nonobvious conclusions, often even in the absence of formal evidence from standard scientific practice.

IQ research and PCA as a case study

In the early 20th century, the psychologist and statistician Charles Spearman discovered the g-factor, which is what IQ tests are designed to measure. The g-factor is one of the most powerful constructs that's come out of psychology research. There are many factors that played a role in enabling Bill Gates's ability to save perhaps millions of lives, but one of the most salient is his IQ being in the top ~1% of his class at Harvard. IQ research helped the Gates Foundation recognize iodine supplementation as a nutritional intervention that would improve socioeconomic prospects for children in the developing world.

The work of Spearman and his successors on IQ constitutes one of the pinnacles of achievement in the social sciences. But while Spearman's discovery of IQ was a great discovery, it wasn't his greatest discovery. His greatest discovery was a discovery about how to do social science research. He pioneered the use of factor analysis, a close relative of principal component analysis (PCA).

The philosophy of dimensionality reduction

PCA is a dimensionality reduction method. Real-world data often has the surprising property of low intrinsic dimensionality: a small number of latent variables explain a large fraction of the variance in the data.

This is related to the effectiveness of Occam's razor: it turns out to be possible to describe a surprisingly large amount of what we see around us in terms of a small number of variables. Only, the variables that explain a lot usually aren't the variables that are immediately visible; instead, they're hidden from us, and in order to model reality we need to discover them, which is the function that PCA serves. The small number of variables that drive a large fraction of the variance in the data can be thought of as a sort of "backbone" of the data. That enables one to understand the data at a "macro / big picture / structural" level.
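The "backbone" idea can be made concrete with a tiny sketch in pure Python (a hypothetical two-variable example of my own, not from the post): when two measurements are mostly driven by one latent factor, the covariance matrix has one dominant eigenvalue, so a single principal direction captures nearly all the variance.

```python
import math

# Toy 2-D dataset: both observed variables are mostly driven by a single
# latent factor t, plus a little noise. (Illustrative numbers of my own.)
t_values = [-2.0, -1.0, 0.0, 1.0, 2.0]
noise = [0.1, -0.1, 0.0, 0.1, -0.1]
xs = [t + e for t, e in zip(t_values, noise)]        # observed variable 1
ys = [2.0 * t - e for t, e in zip(t_values, noise)]  # observed variable 2

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Sample covariance of two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

# 2x2 sample covariance matrix [[sxx, sxy], [sxy, syy]].
sxx, syy, sxy = cov(xs, xs), cov(ys, ys), cov(xs, ys)

# Eigenvalues of a symmetric 2x2 matrix in closed form; these are the
# variances along the two principal components.
trace, det = sxx + syy, sxx * syy - sxy * sxy
disc = math.sqrt(trace * trace - 4.0 * det)
lam1, lam2 = (trace + disc) / 2.0, (trace - disc) / 2.0

# Fraction of total variance captured by the first principal component:
# close to 1, because one latent variable drives almost everything.
explained = lam1 / (lam1 + lam2)
```

In real applications one would of course hand many more variables to a library PCA routine; the hand-computed 2x2 case just shows, in miniature, what "a small number of latent variables explain most of the variance" means.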

This is a very long story that will take a long time to flesh out, and doing so is one of my main goals. 

The Other Path - a poem

17 Jacobian 15 July 2015 01:40PM

Inspired by the call to rationalist poetry fans and informed by years of writing satire.



The Other Path

When you ask for truth and are offered illusion,

When senses deceive you and reasoning lies

I'll show you the path through the murky confusion,

Just follow and close your eyes.

 

On matters of fact there's no fact of the matter,

All moral and virtue are fashion and fad,

So dress in the creed that will fit you and flatter

No one can argue with that.

 

Some puzzles unyielding and mysteries ancient

No formula ever could hope to describe.

How proudly the scientist seeks explanations

How clearly in vain she strives.

 

Make cases like fortifications of metal,

No rival assertion shall ever go past.

Be carefree in choosing the side of the battle

But guard it until your last.

 

The sages declared that to know is to suffer,

Where wisdom is gained there is innocence lost

And learning is danger – best leave it to others,

Avoid it at any cost.

 

Some fools declare war on their very own nature

Their weapons are evidence, reason and math.

Don't offer compassion to those wretched creatures,

They've chosen the other path.

Link: Simulating C. Elegans

15 Sniffnoy 20 November 2014 09:30AM

http://radar.oreilly.com/2014/11/the-robotic-worm.html

Summary, as I understand it: The connectome for C. elegans's 302-neuron brain has been known for some time, but actually doing anything with it (especially actually understanding it) has proved troublesome, especially because there could easily be relevant information about its brain function not stored in just the connections of the neurons.

However, the OpenWorm project -- which is trying to eventually make much more detailed C. elegans simulations, including an appropriate body -- recently tried just fudging it and making a simulation based on the connectome anyway, though in a wheeled body rather than a wormlike one.  And the result does seem to act at least somewhat like a C. elegans worm, though I am not really one to judge that.  (Video is here.)

I'm having trouble finding much more information about this at the moment.  I don't know if they've actually yet released detailed technical information.

Welcome to Less Wrong! (8th thread, July 2015)

13 Sarunas 22 July 2015 04:49PM
If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

 

A few notes about the site mechanics

To post your first comment, you must have carried out the e-mail confirmation: When you signed up to create your account, an e-mail was sent to the address you provided with a link that you need to follow to confirm your e-mail address. You must do this before you can post!

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
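For instance, a comment written in standard Markdown (the site's dialect may differ slightly in the details) might look like this:

```markdown
Here is a [link to the wiki](http://wiki.lesswrong.com), some *italics*,
some **bold text**, and a list:

* first point
* second point

> And a quoted passage from the comment being replied to.
```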

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning— not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

All recent posts (from both Main and Discussion) are available here. At the same time, it's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion. They are also available in a book form.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)

Alternatively, if you're still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask / provide your draft / summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! Informally, there is also the unofficial Less Wrong IRC chat room, and you might also like to take a look at some of the other regular special threads; they're a great way to get involved with the community!

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

 

Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes <180 seconds.)

If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.

Finally, a big thank you to everyone who helped write this post via its predecessors!

Three Worlds Collide (0/8)

48 Eliezer_Yudkowsky 30 January 2009 12:07PM

"The kind of classic fifties-era first-contact story that Jonathan Swift might have written, if Jonathan Swift had had a background in game theory."
        -- (Hugo nominee) Peter Watts, "In Praise of Baby-Eating"

Three Worlds Collide is a story I wrote to illustrate some points on naturalistic metaethics and diverse other issues of rational conduct.  It grew, as such things do, into a small novella.  On publication, it proved widely popular and widely criticized.  Be warned that the story, as it wrote itself, ended up containing some profanity and PG-13 content.

  1. The Baby-Eating Aliens
  2. War and/or Peace
  3. The Super Happy People
  4. Interlude with the Confessor
  5. Three Worlds Decide
  6. Normal Ending
  7. True Ending
  8. Atonement

PDF version here.
