
Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Anthropic signature: strange anti-correlations

47 Stuart_Armstrong 21 October 2014 04:59PM

Imagine that the only way that civilization could be destroyed was by a large pandemic that occurred at the same time as a large recession, so that governments and other organisations were too weakened to address the pandemic properly.

Then if we looked at the past, as observers in a non-destroyed civilization, what would we expect to see? We could see years with no pandemics or no recessions; we could see mild pandemics, mild recessions, or combinations of the two; we could see large pandemics with no or mild recessions; or we could see large recessions with no or mild pandemics. We wouldn't see large pandemics combined with large recessions, as that would have caused us to never come into existence. These are the only things ruled out by anthropic effects.

Assume that pandemics and recessions are independent (at least, in any given year) in terms of "objective" (non-anthropic) probabilities. Then what would we see? We would see that pandemics and recessions appear to be independent when either of them are of small intensity. But as the intensity rose, they would start to become anti-correlated, with a large version of one completely precluding a large version of the other.

The effect is even clearer if we have a probabilistic relation between pandemics, recessions and extinction (something like: extinction risk proportional to product of recession size times pandemic size). Then we would see an anti-correlation rising smoothly with intensity.
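A minimal simulation sketch of this effect (my own illustration, not from the post): pandemic and recession severities are drawn independently, the extinction probability is set to their product, and we then look only at the surviving histories.

```python
# Toy model: independent severities, P(extinction) = their product,
# observations conditioned on survival. All modelling choices here are
# illustrative assumptions, not anything specified in the post.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
pandemic = rng.uniform(0, 1, n)
recession = rng.uniform(0, 1, n)
survived = rng.uniform(0, 1, n) > pandemic * recession

p, r = pandemic[survived], recession[survived]
mild = (p < 0.3) & (r < 0.3)
severe = (p > 0.7) & (r > 0.7)

print("corr overall:      ", np.corrcoef(p, r)[0, 1])                  # slightly negative
print("corr | both mild:  ", np.corrcoef(p[mild], r[mild])[0, 1])      # roughly zero
print("corr | both severe:", np.corrcoef(p[severe], r[severe])[0, 1])  # clearly negative
```

The anti-correlation appears only in the surviving data; the underlying "objective" draws remain independent.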

Thus one way of looking for anthropic effects in humanity's past is to look for different classes of incidents that are uncorrelated at small magnitudes, and anti-correlated at large magnitudes. More generally, to look for different classes of incidents where the correlation changes at different magnitudes - without any obvious reason. That might be the signature of an anthropic disaster we missed - or rather, that missed us.

Don't Be Afraid of Asking Personally Important Questions of Less Wrong

33 Evan_Gaensbauer 26 October 2014 08:02AM

Related: LessWrong as a social catalyst

I primarily used my prior user profile to ask questions of Less Wrong. When I had an inkling for a query but didn't have a fully formed hypothesis, I wouldn't know how to search for answers on the Internet myself, so I asked on Less Wrong.

The reception I have received has been mostly positive. Here are some examples:

  • Back when I was trying to figure out which college major to pursue, I queried Less Wrong about which one was worth my effort. I followed this up with a discussion about whether it was worthwhile for me personally, and for someone in general, to pursue graduate studies.


Other student users of Less Wrong benefit from the insight of peers already established in their careers:

  • A friend of mine was considering pursuing medicine to earn to give. In the same vein as my own discussion, I suggested he pose the question to Less Wrong. He didn't feel like it at first, so I posed the query on his behalf. Within a few days, he received feedback concluding that pursuing medical school through the avenues he was aiming for wasn't his best option relative to his other considerations. He showed up in the thread and expressed his gratitude. That the online rationalist community was willing to respond provided valuable information on an important question. It might have taken him lots of time, attention, and effort to find the answers to this question by himself.

In engaging with Less Wrong, with the rest of you, my experience has been that Less Wrong isn't just useful as an archive of blog posts, but is actively useful as a community of people. As weird as it may seem, you can generate positive externalities that improve the lives of others merely by writing a blog post. This extends to responding in the comments section too. Stupid Questions Threads are a great example of this; you can ask questions about your procedural knowledge gaps without fear of reprisal. People have gotten great responses on everything from getting more value out of conversations, to being more socially successful, to learning and appreciating music as an adult. Less Wrong may be one of few online communities for which even the comments sections are useful, by default.

Even though the above examples weren't the most popular discussions ever started, and likely didn't get as much traffic, the feedback they received made them more personally valuable to one individual than several more popular threads.

At the CFAR workshop I attended, I was taught two relevant skills:

* Value of Information Calculations: formulating a question well, and performing a Fermi estimate, or back-of-the-envelope calculation, in an attempt to answer it, generates quantified insight you wouldn't have otherwise anticipated. (A toy worked example follows this list.)

* Social Comfort Zone Expansion: humans tend to have a greater aversion to trying new things socially than is maximally effective, and one way of viscerally teaching System 1 this lesson is by trial-and-error of taking small risks. Posting on Less Wrong, especially in a special thread, is a really low-risk action. The pang of losing karma can feel real, but losing karma really is a valuable signal that one should try again differently. Also, it's not as bad as failing at taking risks in meatspace.
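To make the first bullet concrete, here is a toy value-of-information calculation. The decision framing and every number are hypothetical, invented for illustration; they are not from CFAR's curriculum or the discussions mentioned above.

```python
# Hypothetical decision: go to grad school or not, under uncertainty about
# whether it's actually the better choice for me. All numbers are made up.
p_grad_better = 0.4        # my credence that grad school is the better option
gain_if_better = 50_000    # value of going if it really is better (in $)
loss_if_worse = -30_000    # value of going if it really is worse (in $)

ev_go = p_grad_better * gain_if_better + (1 - p_grad_better) * loss_if_worse
ev_current = max(ev_go, 0.0)   # best I can do acting on current beliefs

# With perfect information I'd learn which case holds, then choose optimally:
ev_informed = p_grad_better * max(gain_if_better, 0) + (1 - p_grad_better) * max(loss_if_worse, 0)

print(f"EV acting now:        {ev_current:>9,.0f}")                 #  2,000
print(f"EV with perfect info: {ev_informed:>9,.0f}")                # 20,000
print(f"Value of information: {ev_informed - ev_current:>9,.0f}")   # 18,000
```

Even a noisy answer, like a Less Wrong thread, captures some fraction of that value, which is why asking is usually worth the minor embarrassment.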

When I've received downvotes for a comment, I interpret that as useful information, try to model what I did wrong, and thank others for correcting my confused thinking. If you're worried about writing something embarrassing, that's understandable, but realize it's a fact about your untested anticipations, not a fact about everyone else using Less Wrong. There are dozens of brilliant people with valuable insights at the ready, reading Less Wrong for fun, and who like helping us answer our own personal questions. Users shminux and Carl Shulman are exemplars of this.

This isn't an issue for all users, but I feel as if not enough users are taking advantage of the personal value they can get by asking more questions. This post is intended to encourage them. User Gunnar Zarnacke suggested that if enough examples of experiences like this were accrued, they could be transformed into some sort of repository of personal value from Less Wrong.

Fixing Moral Hazards In Business Science

31 DavidLS 18 October 2014 09:10PM

I'm a LW reader, two time CFAR alumnus, and rationalist entrepreneur.

Today I want to talk about something insidious: marketing studies.

Until recently I considered studies of this nature merely unfortunate, funny even. However, my recent experiences have caused me to realize the situation is much more serious than this. Product studies are the public's most frequent interaction with science. By tolerating (or worse, expecting) shitty science in commerce, we are undermining the public's perception of science as a whole.

The good news is this appears fixable. I think we can change how startups perform their studies immediately, and use that success to progressively expand.

Product studies have three features that break the assumptions of traditional science: (1) few if any follow up studies will be performed, (2) the scientists are in a position of moral hazard, and (3) the corporation seeking the study is in a position of moral hazard (for example, the filing cabinet bias becomes more of a "filing cabinet exploit" if you have low morals and the budget to perform 20 studies).

I believe we can address points 1 and 2 directly, and overcome point 3 by appealing to greed.

Here's what I'm proposing: we create a webapp that acts as a high quality (though less flexible) alternative to a Contract Research Organization. Since it's a webapp, the cost of doing these less flexible studies will approach the cost of the raw product to be tested. For most web companies, that's $0.

If we spend the time to design the standard protocols well, it's quite plausible any studies done using this webapp will be in the top 1% in terms of scientific rigor.

With the cost low, and the quality high, such a system might become the startup equivalent of citation needed. Once we have a significant number of startups using the system, and as we add support for more experiment types, we will hopefully attract progressively larger corporations.

Is anyone interested in helping? I will personally write the webapp and pay for the security audit if we can reach quorum on the initial protocols.

Companies that have expressed interest in using such a system if we build it:

(I sent out my inquiries at 10pm yesterday, and every one of these companies got back to me by 3am. I don't believe "startups love this idea" is an overstatement.)

So the question is: how do we do this right?

Here are some initial features we should consider:

  • Data will be collected by a webapp controlled by a trusted third party, and will only be editable by study participants.
  • The results will be computed by software decided on before the data is collected.
  • Studies will be published regardless of positive or negative results.
  • Studies will have mandatory general-purpose safety questions. (web-only products likely exempt)
  • Follow up studies will be mandatory for continued use of results in advertisements.
  • All software/contracts/questions used will be open sourced (MIT) and creative commons licensed (CC BY), allowing for easier cross-product comparisons.

Any placebos used in the studies must be available for purchase as long as the results are used in advertising, allowing for trivial study replication.
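As a concrete illustration of the "results computed by software decided on before the data is collected" feature, here is a minimal sketch of what a pre-registered analysis script might look like. The column names, outcome measure, and statistical test are my assumptions for the sketch, not part of the proposal.

```python
# Pre-registered analysis sketch: everything in this file (test, alpha, columns)
# is fixed and published before any data is collected. Schema is hypothetical.
import pandas as pd
from scipy import stats

ALPHA = 0.05  # significance threshold, fixed in advance

def analyze(csv_path: str) -> dict:
    df = pd.read_csv(csv_path)                      # export from the data-collection webapp
    treated = df[df["arm"] == "product"]["outcome"]
    control = df[df["arm"] == "placebo"]["outcome"]
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test
    return {
        "n_treated": int(len(treated)),
        "n_control": int(len(control)),
        "mean_difference": float(treated.mean() - control.mean()),
        "p_value": float(p_value),
        "significant": bool(p_value < ALPHA),
    }

# The published result is whatever analyze() returns on the final dataset,
# positive or negative - no post-hoc analysis choices allowed.
```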

Significant contributors will receive:

  • Co-authorship on the published paper for the protocol.
  • (Through the paper) an Erdos number of 2.
  • The satisfaction of knowing you personally helped restore science's good name (hopefully).

I'm hoping that if a system like this catches on, we can get an "effective startups" movement going :)

So how do we do this right?

How to write an academic paper, according to me

31 Stuart_Armstrong 15 October 2014 12:29PM

Disclaimer: this is entirely a personal viewpoint, formed by a few years of publication in a few academic fields. EDIT: Many of the comments are very worth reading as well.

Having recently finished a very rushed submission (turns out you can write a novel paper in a day and half, if you're willing to sacrifice quality and sanity), I've been thinking about how academic papers are structured - and more importantly, how they should be structured.

It seems to me that the key is to consider the audience. Or, more precisely, to consider the audiences - because different people will read your paper to different depths, and you should cater to all of them. An example of this is the "inverted pyramid" structure for many news articles - start with the salient facts, then the most important details, then fill in the other details. The idea is to ensure that a reader who stops reading at any point (which happens often) will nevertheless have got the most complete impression that it was possible to convey in the bit that they did read.

So, with that model in mind, let's consider the different levels of audience for a general academic paper (of course, some papers just can't fit into this mould, but many can):

 

continue reading »

Maybe you want to maximise paperclips too

29 dougclow 30 October 2014 09:40PM

As most LWers will know, Clippy the Paperclip Maximiser is a superintelligence who wants to tile the universe with paperclips. The LessWrong wiki entry for Paperclip Maximizer says that:

The goal of maximizing paperclips is chosen for illustrative purposes because it is very unlikely to be implemented

I think that a massively powerful star-faring entity - whether a Friendly AI, a far-future human civilisation, aliens, or whatever - might indeed end up essentially converting huge swathes of matter into paperclips. Whether a massively powerful star-faring entity is likely to arise is, of course, a separate question. But if it does arise, it could well want to tile the universe with paperclips.

Let me explain.


To travel across the stars and achieve whatever noble goals you might have (assuming they scale up), you are going to want energy. A lot of energy. Where do you get it? Well, at interstellar scales, your only options are nuclear fusion or maybe fission.

Iron sits at the peak of the binding-energy curve: its binding energy per nucleon is essentially the highest of any nucleus. If you have elements lighter than iron, you can release energy through nuclear fusion - sticking atoms together to make bigger ones. If you have elements heavier than iron, you can release energy through nuclear fission - splitting atoms apart to make smaller ones. We can do this now for a handful of elements (mostly selected isotopes of uranium, plutonium and hydrogen) but we don’t know how to do this for most of the others - yet. But it looks thermodynamically possible. So if you are a massively powerful and massively clever galaxy-hopping agent, you can extract maximum energy for your purposes by taking up all the non-ferrous matter you can find and turning it into iron, getting energy through fusion or fission as appropriate.
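A rough back-of-the-envelope check of how much energy that gets you, using standard textbook binding-energy figures (my numbers, not the post's):

```python
# Approximate binding energies per nucleon, in MeV (standard reference values).
BE_IRON_56 = 8.8       # near the peak of the binding-energy curve
BE_HELIUM_4 = 7.1      # where ordinary stellar fusion mostly stops
REST_ENERGY = 931.5    # MeV per atomic mass unit, i.e. roughly per nucleon

print(f"H -> Fe releases ~{BE_IRON_56 / REST_ENERGY:.1%} of rest-mass energy")   # ~0.9%
print(f"H -> He releases ~{BE_HELIUM_4 / REST_ENERGY:.1%} of rest-mass energy")  # ~0.8%
```

So squeezing matter all the way down to iron only buys you a fraction of a percent of E=mc², but it is the most you can extract from nuclear binding energy alone.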

You leave behind you a cold, dark trail of iron.

That seems a little grim. If you have any aesthetic sense, you might want to make it prettier, to leave an enduring sign of values beyond mere energy acquisition. With careful engineering, it would take only a tiny, tiny amount of extra effort to leave the iron arranged into beautiful shapes. Curves are nice. What do you call a lump of iron arranged into an artfully-twisted shape? I think we could reasonably call it a paperclip.

Over time, the amount of space that you’ve visited and harvested for energy will increase, and the amount of space available for your noble goals - or for anyone else’s - will decrease. Gradually but steadily, you are converting the universe into artfully-twisted pieces of iron. To an onlooker who doesn’t see or understand your noble goals, you will look a lot like a paperclip maximiser. In Eliezer’s terms, your desire to do so is an instrumental value, not a terminal value. But - conditional on my wild speculations about energy sources here being correct - it’s what you’ll do.

A Day Without Defaults

28 katydee 20 October 2014 08:07AM

Author's note: this post was written on Sunday, Oct. 19th. Its sequel will be written on Sunday, Oct. 27th.

Last night, I went to bed content with a fun and eventful weekend gone by. This morning, I woke up, took a shower, did my morning exercises, and began to eat breakfast before making the commute up to work.

At the breakfast table, though, I was surprised to learn that it was Sunday, not Monday. I had misremembered what day it was and in fact had an entire day ahead of me with nothing on the agenda. At first, this wasn't very interesting, but then I started thinking. What to do with an entirely free day, without any real routine?

I realized that I didn't particularly know what to do, so I decided that I would simply live a day without defaults. At each moment of the day, I would act only in accordance with my curiosity and genuine interest. If I noticed myself becoming bored, disinterested, or otherwise less than enthused about what was going on, I would stop doing it.

What I found was quite surprising. I spent much less time doing routine activities like reading the news and browsing discussion boards, and much more time doing things that I've "always wanted to get around to"-- meditation, trying out a new exercise routine, even just spending some time walking around outside and relaxing in the sun.

Further, this seemed to actually make me more productive. When I sat down to get some work done, it was because I was legitimately interested in finishing my work and curious as to whether I could use a new method I had thought up in order to solve it. I was able to resolve something that's been annoying me for a while in much less time than I thought it would take.

By the end of the day, I started thinking "is there any reason that I don't spend every day like this?" As far as I can tell, there isn't really. I do have a few work tasks that I consider relatively uninteresting, but there are multiple solutions to that problem that I suspect I can implement relatively easily.

My plan is to spend the next week doing the same thing that I did today and then report back. I'm excited to let you all know what I find!

What false beliefs have you held and why were you wrong?

27 Punoxysm 16 October 2014 05:58PM

What is something you used to believe, preferably something concrete with direct or implied predictions, that you now know was dead wrong. Was your belief rational given what you knew and could know back then, or was it irrational, and why?

 

Edit: I feel like some of these are getting a bit glib and political. Please try to explain what false assumptions or biases were underlying your beliefs - be introspective - this is LW after all.

In the grim darkness of the far future there is only war continued by other means

25 Eneasz 21 October 2014 07:39PM

(cross-posted from my blog)

I. PvE vs PvP

Ever since its advent in Doom, PvP (Player vs Player) has been an integral part of almost every major video game. This is annoying to PvE (Player vs Environment) fans like myself, especially when PvE mechanics are altered (read: simplified and degraded) for the purpose of accommodating the PvP game play. Even in games which are ostensibly about the story & world, rather than direct player-on-player competition.

The reason for this comes down to simple math. PvE content is expensive to make. An hour of game play can take many dozens, or nowadays even hundreds, of man-hours of labor to produce. And once you’ve completed a PvE game, you’re done with it. There’s nothing else, you’ve reached “The End”, congrats. You can replay it a few times if you really loved it, like re-reading a book, but the content is the same. MMORPGs recycle content by forcing you to grind bosses many times before you can move on to the next one, but that’s as fun as the word “grind” makes it sound. At that point people are there more for the social aspect and the occasional high than the core gameplay itself.

PvP “content”, OTOH, generates itself. Other humans keep learning and getting better and improvising new tactics. Every encounter has the potential to be new and exciting, and they always come with the rush of triumphing over another person (or the crush of losing to the same).

But much more to the point – In PvE potentially everyone can make it into the halls of “Finished The Game;” and if everyone is special, no one is. PvP has a very small elite – there can only be one #1 player, and people are always scrabbling for that position, or defending it. PvP harnesses our status-seeking instinct to get us to provide challenges for each other rather than forcing the game developers to develop new challenges for us. It’s far more cost effective, and a single man-hour of labor can produce hundreds or thousands of hours of game play. StarCraft  continued to be played at a massive level for 12 years after its release, until it was replaced with StarCraft II.

So if you want to keep people occupied for a looooong time without running out of game-world, focus on PvP.

II. Science as PvE

In the distant past (in internet time) I commented at LessWrong that discovering new aspects of reality was exciting and filled me with awe and wonder and the normal “Science is Awesome” applause lights (and yes, I still feel that way). And I sneered at the status-grubbing of politicians and administrators and basically everyone that we in nerd culture disliked in high school. How temporary and near-sighted! How zero-sum (and often negative-sum!), draining resources we could use for actual positive-sum efforts like exploration and research! A pox on their houses!

Someone replied, asking why anyone should care about the minutia of lifeless, non-agenty forces? How could anyone expend so much of their mental efforts on such trivia when there are these complex, elaborate status games one can play instead? Feints and countermoves and gambits and evasions, with hidden score-keeping and persistent reputation effects… and that’s just the first layer! The subtle ballet of interaction is difficult even to watch, and when you get billions of dancers interacting it can be the most exhilarating experience of all.

This was the first time I’d ever been confronted with status-behavior as anything other than wasteful. Of course I rejected it at first, because no one is allowed to win arguments in real time. But it stuck with me. I now see the game play, and it is intricate. It puts Playing At The Next Level in a whole new perspective. It is the constant refinement and challenge and lack of a final completion-condition that is the heart of PvP. Human status games are the PvP of real life.

Which, by extension of the metaphor, makes Scientific Progress the PvE of real life. Which makes sense. It is us versus the environment in the most literal sense. It is content that was provided to us, rather than what we make ourselves. And it is limited – in theory we could some day learn everything that there is to learn.

III. The Best of All Possible Worlds

I’ve mentioned a few times I have difficulty accepting reality as real. Say you were trying to keep a limitless number of humans happy and occupied for an unbounded amount of time. You provide them PvE content to get them started. But you don’t want the PvE content to be their primary focus, both because they’ll eventually run out of it, and also because once they’ve completely cracked it there’s a good chance they’ll realize they’re in a simulation. You know that PvP is a good substitute for PvE for most people, often a superior one, and that PvP can get recursively more complex and intricate without limit and keep the humans endlessly occupied and happy, as long as their neuro-architecture is right. It’d be really great if they happened to evolve in a way that made status-seeking extremely pleasurable for the majority of the species, even if that did mean that the ones losing badly were constantly miserable regardless of their objective well-being. This would mean far, far more lives could be lived and enjoyed without running out of content than would otherwise be possible.

IV. Implications for CEV

It’s said that the Coherent Extrapolated Volition is “our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together.” This implies a resolution to many conflicts. No more endless bickering about whether the Red Tribe is racist or the Blue Tribe is arrogant pricks. A more unified way of looking at the world that breaks down those conceptual conflicts. But if PvP play really is an integral part of the human experience, a true CEV would notice that, and would preserve these differences instead. To ensure that we always had rival factions sniping at each other over irreconcilable, fundamental disagreements in how reality should be approached and how problems should be solved. To forever keep partisan politics as part of the human condition, so we have this dance to enjoy. Stripping it out would be akin to removing humanity’s love of music, because dancing inefficiently consumes great amounts of energy just so we can end up where we started.

Carl von Clausewitz famously said “War is the continuation of politics by other means.” The corollary, “Politics is the continuation of war by other means,” has already been proposed. It is not unreasonable to speculate that in the grim darkness of the far future, there is only war continued by other means. Which, all things considered, is greatly preferable to actual war. As long as people like Scott are around to try to keep things somewhat civil and prevent an escalation into violence, this may not be terrible.

Questions on Theism

23 Aiyen 08 October 2014 09:02PM

Long time lurker, but I've barely posted anything. I'd like to ask Less Wrong for help.

Reading various articles by the Rationalist Community over the years, here, on Slate Star Codex and a few other websites, I have found that nearly all of it makes sense. Wonderful sense, in fact, the kind of sense you only really find when the author is actually thinking through the implications of what they're saying, and it's been a breath of fresh air. I generally agree, and when I don't it's clear why we're differing, typically due to a dispute in priors.

Except in theism/atheism.

In my experience, when atheists make their case, they assume a universe without miracles, i.e. a universe that looks like one would expect if there was no God. Given this assumption, atheism is obviously the rational and correct stance to take. And generally, Christian apologists make the same assumption! They assert miracles in the Bible, but do not point to any accounts of contemporary supernatural activity. And given such assumptions, the only way one can make a case for Christianity is with logical fallacies, which is exactly what most apologists do. The thing is though, there are plenty of contemporary miracle accounts.

Near death experiences. Answers to prayer that seem to violate the laws of physics. I'm comfortable with dismissing Christian claims that an event was "more than coincidence", because given how many people are praying and looking for God's hand in events, and the fact that an unanswered prayer will generally be forgotten while a seemingly-answered one will be remembered, one would expect to see "more than coincidence" in any universe with believers, whether or not there was a God. But there are a LOT of people out there claiming to have seen events that one would expect to never occur in a naturalistic universe. I even recall reading an atheist's account of his deconversion (I believe it was Luke Muehlhauser; apologies if I'm misremembering) in which he states that as a Christian, he witnessed healings he could not explain. Now, one could say that these accounts are the result of people lying, but I expect people to be rather more honest than that, and Luke is hardly going to make up evidence for the Christian God in an article promoting unbelief! One could say that "miracles" are misunderstood natural events, but there are plenty of accounts that seem pretty unlikely without Divine intervention-I've even read claims by Christians that they had seen people raised from the dead by prayer. And so I'd like to know how atheists respond to the evidence of miracles.

This isn't just idle curiosity. I am currently a Christian (or maybe an agnostic terrified of ending up on the wrong side of Pascal's Wager), and when you actually take religion seriously, it can be a HUGE drain on quality of life. I find myself being frightened of hell, feeling guilty when I do things that don't hurt anyone but are still considered sins, and feeling guilty when I try to plan out my life, wondering if I should just put my plans in God's hands. To make matters worse, I grew up in a dysfunctional, very Christian family, and my emotions seem to be convinced that being a true Christian means acting like my parents (who were terrible role models; emulating them means losing at life).

I'm aware of plenty of arguments for non-belief: Occam's Razor giving atheism as one's starting prior in the absence of strong evidence for God, the existence of many contradictory religions proving that humanity tends to generate false gods, claims in Genesis that are simply false (Man created from mud, woman from a rib, etc. have been conclusively debunked by science), commands given by God that seem horrifyingly immoral, no known reason why Christ's death would be needed for human redemption (many apologists try to explain this, but their reasoning never makes sense), no known reason why, if belief in Jesus is so important, God wouldn't make himself blatantly obvious, hell seeming like an infinite injustice, the Bible claiming that any prayer prayed in faith will be answered contrasted with the real world where this isn't the case, a study I read about in which praying for the sick didn't improve results at all (and the group that was told they were being prayed for actually had worse results!), etc. All of this, plus the fact that it seems that nearly everyone who's put real effort into their epistemology doesn't believe and moreover is very confident in their nonbelief (I am reminded of Eliezer's comment that he would be less worried about a machine that destroys the universe if the Christian God exists than one that has a one in a trillion chance of destroying us) makes me wonder if there really isn't a God, and in realizing this, I can put down burdens that have been hurting me for nearly my entire life. But the argument from miracles keeps me in faith, keeps me frightened. If there is a good argument against miracles, learning it could be life changing.

Thank you very much. I do not have words to describe how much this means to me.

2014 Less Wrong Census/Survey - Call For Critiques/Questions

18 Yvain 11 October 2014 06:39AM

It's that time of year again. Actually, a little earlier than that time of year, but I'm pushing it ahead a little to match when Ozy and I expect to have more free time to process the results.

The first draft of the 2014 Less Wrong Census/Survey is complete (see 2013 results here).

You can see the survey below if you promise not to try to take the survey because it's not done yet and this is just an example!

2014 Less Wrong Census/Survey Draft

I want two things from you.

First, please critique this draft (it's much the same as last year's). Tell me if any questions are unclear, misleading, offensive, confusing, or stupid. Tell me if the survey is so unbearably long that you would never possibly take it. Tell me if anything needs to be rephrased.

Second, I am willing to include any question you want in the Super Extra Bonus Questions section, as long as it is not offensive, super-long-and-involved, or really dumb. Please post any questions you want there. Please be specific - not "Ask something about taxes" but give the exact question you want me to ask as well as all answer choices.

Try not to add more than a few questions per person, unless you're sure yours are really interesting. Please also don't add any questions that aren't very easily sort-able by a computer program like SPSS unless you can commit to sorting the answers yourself.

I will probably post the survey to Main and officially open it for responses sometime early next week.

Solstice 2014 - Kickstarter and Megameetup

18 Raemon 10 October 2014 05:55PM


Summary:

  • We're running another Winter Solstice kickstarter - this is to fund the venue, musicians, food, drink and decorations for a big event in NYC on December 20th, as well as to record more music and print a larger run of the Solstice Book of Traditions. 
  • I'd also like to raise additional money so I can focus full time for the next couple months on helping other communities run their own version of the event, tailored to meet their particular needs while still feeling like part of a cohesive, broader movement - and giving the attendees a genuinely powerful experience. 

The Beginning

Four years ago, twenty NYC rationalists gathered in a room to celebrate the Winter Solstice. We sang songs and told stories about things that seemed very important to us. The precariousness of human life. The thousands of years of labor and curiosity that led us from a dangerous stone age to the modern world. The potential to create something even better, if humanity can get our act together and survive long enough.

One of the most important ideas we honored was the importance of facing truths, even when they are uncomfortable or make us feel silly or are outright terrifying. Over the evening, we gradually extinguished candles, acknowledging harsher and harsher elements of reality.

Until we sat in absolute darkness - aware that humanity is flawed, and alone, in an unforgivingly neutral universe. 

But also aware that we sit beside people who care deeply about truth, and about our future. Aware that across the world, people are working to give humanity a bright tomorrow, and that we have the power to help. Aware that across history, people have looked impossible situations in the face, and through ingenuity and perspiration, made the impossible happen.

That seemed worth celebrating. 


The Story So Far

As it turned out, this resonated with people outside the rationality community. When we ran the event again in 2012, non-religious but non-Less Wrong people attended the event and told me they found it very moving. In 2013, we pushed it much larger - I ran a kickstarter campaign to fund a big event in NYC. 

A hundred and fifty people from various communities attended. From Less Wrong in particular, we had groups from Boston, San Francisco, North Carolina, Ottawa, and Ohio among other places. The following day was one of the largest East Coast Megameetups. 

Meanwhile, in the Bay Area, several people put together an event that gathered around 80 attendees. In Boston, Vancouver, and Leipzig, Germany, people ran smaller events. This is shaping up to take root as a legitimate holiday, celebrating human history and our potential future.

This year, we want to do that all again. I also want to dedicate more time to helping other people run their events. Getting people to start celebrating a new holiday is a tricky feat. I've learned a lot about how to go about that and want to help others run polished events that feel connecting and inspirational.


So, what's happening, and how can you help?

 

  • The Big Solstice itself will be Saturday, December 20th at 7:00 PM. To fund it, we're aiming to raise $7500 on kickstarter. This is enough to fund the aforementioned venue, food, drink, and live musicians, to record new music, and to print a larger run of the Solstice Book of Traditions. It'll also pay some expenses for the Megameetup. Please consider contributing to the kickstarter.
  • If you'd like to host your own Solstice (either a large or a private one) and would like advice, please contact me at raemon777@gmail.com and we'll work something out.
  • There will also be Solstices (of varying sizes) run by Less Wrong / EA folk held in the Bay Area, Seattle, Boston and Leipzig. (There will probably be a larger but non-LW-centered Solstice in Los Angeles and Boston as well).
  • In NYC, there will be a Rationality and EA Megameetup running from Friday, Dec 19th through Sunday evening.
    • Friday night and Saturday morning: Arrival, Settling
    • Saturday at 2PM - 4:30PM: Unconference (20 minute talks, workshops or discussions)
    • Saturday at 7PM: Big Solstice
    • Sunday at Noon: Unconference 2
    • Sunday at 2PM: Strategic New Years Resolution Planning
    • Sunday at 3PM: Discussion of creating private ritual for individual communities
  • If you're interested in coming to the Megameetup, please fill out this form saying how many people you're bringing, whether you're interested in giving a talk, and whether you're bringing a vehicle, so we can plan adequately. (We have lots of crash space, but not infinite bedding, so bringing sleeping bags or blankets would be helpful)

Effective Altruism?

 

Now, at Less Wrong we like to talk about how to spend money effectively, so I should be clear about a few things. I'm raising non-trivial money for this, but this should be coming out of people's Warm Fuzzies Budgets, not their Effective Altruism budgets. This is a big, end of the year community feel-good festival. 

That said, I do think this is an especially important form of Warm Fuzzies. I've had EA-type folk come to me and tell me the Solstice inspired them to work harder, make life changes, or that it gave them an emotional booster charge to keep going even when things were hard. I hope, eventually, to have this measurable in some fashion such that I can point to it and say "yes, this was important, and EA folk should definitely consider it important." 

But I'm not especially betting on that, and there are some failure modes where the Solstice ends up cannibalizing resources that could have gone towards direct impact. So, please consider that this may be especially valuable entertainment, which pushes culture in a direction where EA ideas can go more mainstream and gives hardcore EAs a motivational boost. But I encourage you to support it with dollars that wouldn't have gone towards direct Effective Altruism.

[Link] Animated Video - The Useful Idea of Truth (Part 1/3)

18 Joshua_Blaine 04 October 2014 11:05PM

I have taken this well received post by Eliezer, and remade the first third of it into a short and quickly paced youtube video here: http://youtu.be/L2dNANRIALs

The goals of this post are re-introducing the lessons explored in the original (for anyone not yet familiar with them), as well as asking the question of whether this format is actually suited for the lessons LessWrong tries to teach. What are your thoughts?

 

Wikipedia articles from the future

17 snarles 29 October 2014 12:49PM

Speculation is important for forecasting; it's also fun.  Speculation is usually conveyed in two forms: in the form of an argument, or encapsulated in fiction; each has their advantages, but both tend to be time-consuming.  Presenting speculation in the form of an argument involves researching relevant background and formulating logical arguments.  Presenting speculation in the form of fiction requires world-building and storytelling skills, but it can quickly give the reader an impression of the "big picture" implications of the speculation; this can be more effective at establishing the "emotional plausibility" of the speculation.

I suggest a storytelling medium which can combine attributes of both arguments and fiction, but requires less work than either. That is the "Wikipedia article from the future." Fiction written by inexperienced sci-fi writers tends to degenerate into a speculative encyclopedia anyway--why not just admit that you want to write an encyclopedia in the first place?  Post your "Wikipedia articles from the future" below.

Logical uncertainty reading list

17 alex_zag_al 18 October 2014 07:16PM

This was originally part of a post I wrote on logical uncertainty, but it turned out to be post-sized itself, so I'm splitting it off.

Daniel Garber's article Old Evidence and Logical Omniscience in Bayesian Confirmation Theory. Wonderful framing of the problem--explains the relevance of logical uncertainty to the Bayesian theory of confirmation of hypotheses by evidence.

Articles on using logical uncertainty for Friendly AI theory: qmaurmann's Meditations on Löb’s theorem and probabilistic logic. Squark's Overcoming the Loebian obstacle using evidence logic. And Paul Christiano, Eliezer Yudkowsky, Marcello Herreshoff, and Mihaly Barasz's Definability of Truth in Probabilistic Logic. So8res's walkthrough of that paper, and qmaurmann's notes. eli_sennesh just made a post on this: Logics for Mind-Building Should Have Computational Meaning.

Benja's post on using logical uncertainty for updateless decision theory.

cousin_it's Notes on logical priors from the MIRI workshop. Addresses a logical-uncertainty version of Counterfactual Mugging, but in the course of that has, well, notes on logical priors that are more general.

Reasoning with Limited Resources and Assigning Probabilities to Arithmetical Statements, by Haim Gaifman. Shows that you can give up on giving logically equivalent statements equal probabilities without much sacrifice of the elegance of your theory. Also, gives a beautifully written framing of the problem.

manfred's early post, and later sequence. Amazingly readable. The proposal gives up Gaifman's elegance, but actually goes as far as assigning probabilities to mathematical statements and using them, whereas Gaifman never follows through to solve an example afaik. The post or the sequence may be the quickest path to getting your hands dirty and trying this stuff out, though I don't think the proposal will end up being the right answer.

There's some literature on modeling a function as a stochastic process, which gives you probability distributions over its values. The information in these distributions comes from calculations of a few values of the function. One application is in optimizing a difficult-to-evaluate objective function: see Efficient Global Optimization of Expensive Black-Box Functions, by Donald R. Jones, Matthias Schonlau, and William J. Welch. Another is when you're doing simulations that have free parameters, and you want to make sure you try all the relevant combinations of parameter values: see Design and Analysis of Computer Experiments by Jerome Sacks, William J. Welch, Toby J. Mitchell, and Henry P. Wynn.
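A minimal sketch of that idea - a Gaussian process conditioned on a few expensive evaluations, giving a distribution over the function's values elsewhere. This is generic GP regression for illustration, not the specific machinery of the papers cited above; the kernel and data points are arbitrary.

```python
# Gaussian-process posterior over an expensive black-box function, given a few
# evaluations. Kernel choice and data points are illustrative assumptions.
import numpy as np

def rbf(a, b, length=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

x_obs = np.array([-2.0, 0.0, 1.5])        # points where we paid to evaluate f
y_obs = np.array([0.5, -1.0, 0.3])        # the observed values
x_new = np.linspace(-3, 3, 7)             # points where we want beliefs about f

K = rbf(x_obs, x_obs) + 1e-8 * np.eye(len(x_obs))   # jitter for numerical stability
K_s = rbf(x_new, x_obs)
K_ss = rbf(x_new, x_new)

K_inv = np.linalg.inv(K)
mean = K_s @ K_inv @ y_obs                 # posterior mean at x_new
cov = K_ss - K_s @ K_inv @ K_s.T           # posterior covariance at x_new
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

for x, m, s in zip(x_new, mean, std):
    print(f"f({x:+.1f}) ~ N({m:+.2f}, {s:.2f}^2)")
```

In the black-box-optimization setting, the next evaluation point is then chosen to trade off a promising posterior mean against high posterior uncertainty.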

Maximize Worst Case Bayes Score, by Coscott, addresses the question: "Given a consistent but incomplete theory, how should one choose a random model of that theory?"

Bayesian Networks for Logical Reasoning by Jon Williamson. Looks interesting, but I can't summarize it because I don't understand it.

And, a big one that I'm still working through: Non-Omniscience, Probabilistic Inference, and Metamathematics, by Paul Christiano. Very thorough, goes all the way from trying to define coherent belief to trying to build usable algorithms for assigning probabilities.

Dealing With Logical Omniscience: Expressiveness and Pragmatics, by Joseph Y. Halpern and Riccardo Pucella.

Reasoning About Rational, But Not Logically Omniscient Agents, by Ho Ngoc Duc. Sorry about the paywall.

And then the references from Christiano's report:

Abram Demski. Logical prior probability. In Joscha Bach, Ben Goertzel, and Matthew Ikle, editors, AGI, volume 7716 of Lecture Notes in Computer Science, pages 50-59. Springer, 2012.

Marcus Hutter, John W. Lloyd, Kee Siong Ng, and William T. B. Uther. Probabilities on sentences in an expressive logic. CoRR, abs/1209.2620, 2012.

Bas R. Steunebrink and Jürgen Schmidhuber. A family of Gödel machine implementations. In Jürgen Schmidhuber, Kristinn R. Thorisson, and Moshe Looks, editors, AGI, volume 6830 of Lecture Notes in Computer Science, pages 275-280. Springer, 2011.

If you have any more links, post them!

Or if you can contribute summaries.

Upcoming CFAR events: Lower-cost bay area intro workshop; EU workshops; and others

17 AnnaSalamon 02 October 2014 12:08AM

For anyone who's interested:

CFAR is trying out an experimental, lower-cost, 1.5-day introductory workshop Oct 25-26 in the bay area.  It is meant to provide an easier point of entry into our rationality training.  If you've been thinking about coming to a CFAR workshop but have had trouble setting aside 4 days and $3900, you might consider trying this out.  (Or, if you have a friend or family member in that situation, you might suggest this to them.)  It's a beta test, so no guarantees as to the outcome -- but I suspect it'll be both useful, and a lot of fun.

We are also finally making it to Europe.  We'll be running two workshops in the UK this November, both of which have both space and financial aid still available.

We're also still running our standard workshops: Jan 16-19 in Berkeley, and April 23-26 in Boston, MA.  (We're experimenting, also, with using alumni "TA's" to increase the amount of 1-on-1 informal instruction while simultaneously increasing workshop size, in an effort to scale our impact.)

Finally, we're actually running a bunch of events lately for alumni of our 4-day workshops (a weekly rationality dojo; a bimonthly colloquium; a yearly alumni reunion; and various for-alumni workshops); which is perhaps less exciting if you aren't yet an alumnus, but which I'm very excited about because it suggests that we'll have a larger community of people doing serious practice, and thereby pushing the boundaries of the art of rationality.

If anyone wishes to discuss any of these events, or CFAR's strategy as a whole, I'd be glad to talk; you can book me here.

Cheers!

Is the potential astronomical waste in our universe too small to care about?

16 Wei_Dai 21 October 2014 08:44AM

In the not too distant past, people thought that our universe might be capable of supporting an unlimited amount of computation. Today our best guess at the cosmology of our universe is that it stops being able to support any kind of life or deliberate computation after a finite amount of time, during which only a finite amount of computation can be done (on the order of something like 10^120 operations).

Consider two hypothetical people, Tom, a total utilitarian with a near zero discount rate, and Eve, an egoist with a relatively high discount rate, a few years ago when they thought there was .5 probability the universe could support doing at least 3^^^3 ops and .5 probability the universe could only support 10^120 ops. (These numbers are obviously made up for convenience and illustration.) It would have been mutually beneficial for these two people to make a deal: if it turns out that the universe can only support 10^120 ops, then Tom will give everything he owns to Eve, which happens to be $1 million, but if it turns out the universe can support 3^^^3 ops, then Eve will give $100,000 to Tom. (This may seem like a lopsided deal, but Tom is happy to take it since the potential utility of a universe that can do 3^^^3 ops is so great for him that he really wants any additional resources he can get in order to help increase the probability of a positive Singularity in that universe.)
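A toy check that the trade is mutually beneficial, with made-up marginal utilities in the same spirit as the post's made-up probabilities (the specific utility numbers are my own placeholders, not anything the post specifies):

```python
# Both probabilities and utilities here are illustrative placeholders.
p_big, p_small = 0.5, 0.5             # big = ~3^^^3 ops, small = ~10^120 ops

# Marginal utility per dollar for each person in each kind of universe:
tom_upd = {"big": 1e9, "small": 1.0}  # money is vastly more useful to Tom's goals
                                      # in the big universe (stand-in for 3^^^3 scale)
eve_upd = {"big": 1.0, "small": 1.0}  # Eve just values the money itself

# The deal: Tom pays Eve $1M if the universe is small; Eve pays Tom $100k if big.
tom_gain = p_big * 100_000 * tom_upd["big"] + p_small * (-1_000_000) * tom_upd["small"]
eve_gain = p_big * (-100_000) * eve_upd["big"] + p_small * 1_000_000 * eve_upd["small"]

print(f"Tom's expected utility change: {tom_gain:,.0f}")   # enormous and positive
print(f"Eve's expected utility change: {eve_gain:,.0f}")   # +450,000, also positive
```

Both expected changes are positive, which is what makes the deal go through; the delegates in the Parliament below would exploit the same structure.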

You and I are not total utilitarians or egoists, but instead are people with moral uncertainty. Nick Bostrom and Toby Ord proposed the Parliamentary Model for dealing with moral uncertainty, which works as follows:

Suppose that you have a set of mutually exclusive moral theories, and that you assign each of these some probability.  Now imagine that each of these theories gets to send some number of delegates to The Parliament.  The number of delegates each theory gets to send is proportional to the probability of the theory.  Then the delegates bargain with one another for support on various issues; and the Parliament reaches a decision by the delegates voting.  What you should do is act according to the decisions of this imaginary Parliament.

It occurred to me recently that in such a Parliament, the delegates would make deals similar to the one between Tom and Eve above, where they would trade their votes/support in one kind of universe for votes/support in another kind of universe. If I had a Moral Parliament active back when I thought there was a good chance the universe could support unlimited computation, all the delegates that really care about astronomical waste would have traded away their votes in the kind of universe where we actually seem to live for votes in universes with a lot more potential astronomical waste. So today my Moral Parliament would be effectively controlled by delegates that care little about astronomical waste.

I actually still seem to care about astronomical waste (even if I pretend that I was certain that the universe could only do at most 10^120 operations). (Either my Moral Parliament wasn't active back then, or my delegates weren't smart enough to make the appropriate deals.) Should I nevertheless follow UDT-like reasoning and conclude that I should act as if they had made such deals, and therefore I should stop caring about the relatively small amount of astronomical waste that could occur in our universe? If the answer to this question is "no", what about the future going forward, given that there is still uncertainty about cosmology and the nature of physical computation? Should the delegates to my Moral Parliament be making these kinds of deals from now on?

Fighting Mosquitos

16 ChristianKl 16 October 2014 11:53AM

According to Louie Helm, eradicating a species of mosquitoes could be done for as little as a few million dollars.

I don't have a few million dollars lying around, so I can't spend my own money to do it. On the other hand, I think that on average every German citizen would be quite willing to pay 1€ per year to rid Germany of mosquitoes that bite humans.

That makes it a problem of collective action. The German government should spend 80 million euros to rid Germany of mosquitoes. (That's an order of magnitude higher than the numbers quoted by Louie Helm.)

The same goes basically for every country or state with mosquitos.

How could we get a government to do this without spending too much money ourselves? The straightforward way is writing a petition. We could host a website and simultaneously post a petition to every relevant parliament on earth.

How do we get attention for the petition? Facebook. People don't like mosquitoes and should be willing to sign an internet petition to get rid of them. I believe this would spread virally. The idea seems interesting enough to get journalists to write articles about it. 

Bonus points:

After we have eradicated human biting mosquitoes from our homelands it's quite straightforward to export the technology to Africa. 

Does anyone see any issues with that plan?

Contrarian LW views and their economic implications

16 Larks 08 October 2014 11:48PM

LW readers have unusual views on many subjects. Efficient Market Hypothesis notwithstanding, many of these are probably alien to most people in finance. So it's plausible they might have implications that are not yet fully integrated into current asset prices. And if you rightfully believe something that most people do not believe, you should be able to make money off that.

 

Here's an example for a different group. Feminists believe that women are paid less than men for no good economic reason. If this is the case, feminists should invest in companies that hire many women, and short those which hire few women, to take advantage of the cheaper labour costs. And I can think of examples for groups like Socialists, Neoreactionaries, etc. - cases where their positive beliefs have strong implications for economic predictions. But I struggle to think of such ones for LessWrong, which is why I am asking you. Can you think of any unusual LW-type beliefs that have strong economic implications (say over the next 1-3 years)?

 

Wei Dai has previously commented on a similar phenomenon, but I'm interested in a wider class of phenomena.

 

edit: formatting

Stupid Questions (10/27/2014)

14 drethelin 27 October 2014 09:27PM

I think it's past time for another Stupid Questions thread, so here we go. 

 

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Please respect people trying to fix any ignorance they might have, rather than mocking that ignorance. 

 

What math is essential to the art of rationality?

14 Capla 15 October 2014 02:44AM

I have started to put together a sort of curriculum for learning the subjects that lend themselves to rationality. It includes things like experimental methodology and cognitive psychology (obviously), along with "support disciplines" like computer science and economics. I think (though maybe I'm wrong) that mathematics is one of the most important things to understand.

Eliezer said in The Simple Math of Everything:

It seems to me that there's a substantial advantage in knowing the drop-dead basic fundamental embarrassingly simple mathematics in as many different subjects as you can manage.  Not, necessarily, the high-falutin' complicated damn math that appears in the latest journal articles.  Not unless you plan to become a professional in the field.  But for people who can read calculus, and sometimes just plain algebra, the drop-dead basic mathematics of a field may not take that long to learn.  And it's likely to change your outlook on life more than the math-free popularizations or the highly technical math.

I want to have access to outlook-changing insights. So, what math do I need to know? What are the generally applicable mathematical principles that are most worth learning? The above quote seems to indicate at least calculus, and everyone is a fan of Bayesian statistics (which I know little about). 
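For what it's worth, the core of Bayesian statistics that people tend to find outlook-changing fits in a few lines. A worked example with the standard illustrative numbers (the numbers are the usual textbook ones, not from this post):

```python
# Classic diagnostic-test example: 1% prevalence, 80% sensitivity,
# 9.6% false-positive rate. How likely is disease given a positive test?
prior = 0.01            # P(disease)
sensitivity = 0.80      # P(positive | disease)
false_positive = 0.096  # P(positive | no disease)

p_positive = prior * sensitivity + (1 - prior) * false_positive
posterior = prior * sensitivity / p_positive
print(f"P(disease | positive test) = {posterior:.1%}")   # about 7.8%, not 80%
```

The gap between 80% and 7.8% is exactly the kind of "drop-dead simple" result that changes how you read headlines about tests and studies.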

Secondarily, what are some of the most important of that "drop-dead basic fundamental embarrassingly simple mathematics" from different fields? What fields are mathematically based, other than physics, evolutionary biology, and economics?

What is the most important math for an educated person to be familiar with?

As someone who took an honors calculus class in high school, liked it, and did alright in the class, but who has probably forgotten most of it by now and needs to relearn it, how should I go about learning that math?

Cryonics in Europe?

14 roland 10 October 2014 02:58PM

What are the best options for cryonics in Europe?

AFAIK the best option is still to use one of the US providers (e.g. Alcor) and arrange for transportation. There is a problem with this though, in that until you arrive in the US your body will be cooled with dry ice, which will cause huge ischemic damage.

Questions:

  1. How critical is the ischemic damage? If I interpret this comment by Eliezer correctly, we shouldn't worry about this damage if we consider future technology.
  2. Is there a way to have adequate cooling here in Europe until you arrive at the US for final storage?

There is also KrioRus, a Russian cryonics company; they seem to offer an option for cryo transportation, but I don't know how trustworthy they are.

Happiness Logging: One Year In

14 jkaufman 09 October 2014 07:24PM

I've been logging my happiness for a year now. [1] My phone notifies me at unpredictable intervals, and I respond with some tags. For example, if it pinged me now, I would enter "6 home bed computer blog". I always have a numeric tag for my current happiness, and then additional tags for where I am, what I'm doing, and who I'm with. So: what's working, what's not?

When I first started rating my happiness on a 1-10 scale I didn't feel like I was very good at it. At the time I thought I might get better with practice, but I think I'm actually getting worse at it. Instead of really thinking "how do I feel right now?" it's really hard not to just think "in past situations like this I've put down '6' so I should put down '6' now".

Being honest to myself like this can also make me less happy. Normally if I'm negative about something I try not to dwell on it. I don't think about it, and soon I'm thinking about other things and not so negative. Logging that I'm unhappy makes me own up to being unhappy, which I think doesn't help. Though it's hard to know because any other sort of measurement would seem to have the same problem.

There's also a sampling issue. I don't have my phone ping me during the night, because I don't want it to wake me up. Before having a kid this worked properly: I'd plug in my phone, which turns off pings, promptly fall asleep, wake up in the morning, unplug my phone. Now, though, my sleep is generally interrupted several times a night. Time spent waiting to see if the baby falls back asleep on her own, or soothing her back to sleep if she doesn't, or lying awake at 4am because it's hard to fall back asleep when you've had 7hr and just spent an hour walking around and bouncing the baby; none of these are counted. On the whole, these experiences are much less enjoyable than my average; if the baby started sleeping through the night such that none of these were needed anymore I wouldn't see that as a loss at all. Which means my data is biased upward. I'm curious how happiness sampling studies have handled this; people with insomnia would be in a similar situation.

Another sampling issue is that I don't always notice when I get a ping. For the brief period when I was wearing a smartwatch I was consistently noticing all my pings but now I'm back to where I sometimes miss the vibration. I usually fill out these pings retroactively if it's only been a few minutes and I'm confident that I remember how I felt and what I was doing. I haven't been tagging these pings separately, but now that I think of it I'm going to add an "r" tag for retroactive responses.

Responding to pings when other people are around can also be tricky. For a while there were some people who would try and peek and see what I was writing, and I wasn't sure whether I should let them see. I ended up deciding that while having all the data eventually end up public was fine, filling it out in the moment needed to be private so I wouldn't be swayed by wanting to indicate things to the people around me.

The app I'm using isn't perfect, but it's pretty good. Entering new tags is a little annoying, and every time I back up the pings it forgets my past tags. The manual backup step also led to some missing data—all of September 2014 and some of August—because my phone died. This logging data is the only thing on my phone that isn't automatically backed up to the cloud, so when my phone died a few weeks ago I lost the last month of pings. [2] So now there's a gap in the graph.

While I'm not that confident in my numeric reports, I'm much more confident in the other tags that indicate what I'm doing at various times. If I'm on the computer I very reliably tag 'computer', etc. I haven't figured out what to do with this data yet, but it should be interesting for tracking behavior changes over time. One thing I remember doing is switching from wasting time on my computer to on my phone; let's see what that looked like:

I don't remember why the big drop in computer use at the end of February 2014 happened. I assumed at first it was having a baby, after which I spent a lot of time reading on my phone while she was curled up on me, but that wasn't until a month later. I think this may have been when I realized that I didn't hate the facebook app on my phone after all? I'm not sure. The second drop in both phone- and computer-based timewasting, the temporary one in July 2014, was my being in England. My phone had internet but my computer usually didn't. And there was generally much more interesting stuff going on around me than my phone.

Overall my experience with logging has made me put less trust in "how happy are you right now" surveys of happiness. Aside from the practical issues like logging unexpected night wake-time, I mostly don't feel like the numbers I'm recording are very meaningful. I would rather spend more time in situations I label higher than lower on average, so there is some signal there, but I don't actually have the introspection to accurately report to myself how I'm feeling.
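For what it's worth, here is a minimal sketch of one thing that could be done with the tag data (the log format is inferred from the example ping "6 home bed computer blog" above; the function name and sample pings are made up for illustration): average the numeric rating over the pings carrying each tag.

```python
from collections import defaultdict

# Each ping is a line like "6 home bed computer blog":
# a numeric happiness rating followed by context tags.
def average_happiness_by_tag(lines):
    totals = defaultdict(float)
    counts = defaultdict(int)
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        try:
            rating = float(parts[0])
        except ValueError:
            continue  # skip malformed pings
        for tag in parts[1:]:
            totals[tag] += rating
            counts[tag] += 1
    return {tag: totals[tag] / counts[tag] for tag in totals}

# Example usage with a few invented pings:
pings = ["6 home bed computer blog", "7 office computer work", "5 home phone"]
print(average_happiness_by_tag(pings))
```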

I also posted this on my blog.


[1] First ping was 2013.10.08 06:31:41, a year ago yesterday.

[2] Well, it was more my fault than that. The phone was partly working and I did a factory reset to see if that would fix it (it didn't) and I forgot to back up pings first.

Things to consider when optimizing: Sleep

13 mushroom 28 October 2014 05:26PM

I'd like to have a series of discussion posts, where each post is of the form "Let's brainstorm things you might consider when optimizing X", where X is something like sleep, exercise, commuting, studying, etc. Think of it like a specialized repository.

In the spirit of try more things, the direct benefit is to provide insights like "Oh, I never realized that BLAH is a knob I can fiddle. This gives me an idea of how I might change BLAH given my particular circumstances. I will try this and see what happens!"

The indirect benefit is to practice instrumental rationality using the "toy problem" provided by a general prompt.

Accordingly, participation could be in many forms:

* Pointers to scientific research
* General directions to consider
* Personal experience
* Boring advice
* Intersections with other community ideas, biases
* Cost-benefit, value-of-information analysis
* Related questions
* Other musings, thoughts, speculation, links, theories, etc.

This post is on sleep and circadian rhythms.

[Link] "The Problem With Positive Thinking"

13 CronoDAS 26 October 2014 06:50AM

Psychology researchers discuss their findings in a New York Times op-ed piece.

The take-home advice:

Positive thinking fools our minds into perceiving that we’ve already attained our goal, slackening our readiness to pursue it.

...

What does work better is a hybrid approach that combines positive thinking with “realism.” Here’s how it works. Think of a wish. For a few minutes, imagine the wish coming true, letting your mind wander and drift where it will. Then shift gears. Spend a few more minutes imagining the obstacles that stand in the way of realizing your wish.

This simple process, which my colleagues and I call “mental contrasting,” has produced powerful results in laboratory experiments. When participants have performed mental contrasting with reasonable, potentially attainable wishes, they have come away more energized and achieved better results compared with participants who either positively fantasized or dwelt on the obstacles.

When participants have performed mental contrasting with wishes that are not reasonable or attainable, they have disengaged more from these wishes. Mental contrasting spurs us on when it makes sense to pursue a wish, and lets us abandon wishes more readily when it doesn’t, so that we can go after other, more reasonable ambitions.

question: the 40 hour work week vs Silicon Valley?

12 Florian_Dietz 24 October 2014 12:09PM

Conventional wisdom, and many studies, hold that 40 hours of work per week are the optimum before exhaustion starts dragging your productivity down too much to be worth it. I read elsewhere that the optimum is even lower for creative work, namely 35 hours per week, though the sources I found don't all seem to agree.

In contrast, many tech companies in Silicon Valley demand (or 'encourage', which is the same thing in practice) much longer working hours. 70 or 80 hours per week are sometimes treated as normal.

How can this be?

Are these companies simply wrong, and actually hurting themselves by overextending their human resources? Or does the 40-hour week have exceptions?

How high is the variance in how much time people can work? If only outliers are hired by such companies, that would explain the discrepancy. Another possibility is that this 40 hour limit simply does not apply if you are really into your work and 'in the flow'. However, as far as I understand it, the problem is a question of concentration, not motivation, so that doesn't make sense.

There are many articles on the internet arguing for both sides, but I find it hard to find ones that actually address these questions instead of just parroting the same generalized responses every time: Proponents of the 40 hour week cite studies that do not consider special cases, only averages (at least as far as I could find). Proponents of the 80 hour week claim that low work weeks are only for wage slaves without motivation, which reeks of bias and completely ignores that one's own subjective estimate of one's performance is not necessarily representative of one's actual performance.

Do you know of any studies that address these issues?

What supplements do you take, if any?

12 NancyLebovitz 23 October 2014 12:36PM

Since it turns out that it isn't feasible to include check-as-many-as-apply questions in the big survey, I'm asking about supplements here. I've got a bunch of questions, and I don't mind at all if you just answer some of them.

What supplements do you take? At what dosages? Are there other considerations, like with/without food or time of day?

Are there supplements you've stopped using?

How did you decide to take the supplements you're using? How do you decide whether to continue taking them?

Do you have preferred suppliers? How did you choose them?

One Year of Goodsearching

11 katydee 21 October 2014 01:09AM

Followup to: Use Search Engines Early and Often

Last year, I posted about using search engines and particularly recommended GoodSearch, a site that donates one cent to a charity of your choice whenever you make a (Bing-powered) search via their site.

At the time, some seemed skeptical of this recommendation, and my post was actually downvoted-- people thought that I was plugging GoodSearch too hard without enough evidence for its quality. I now want to return to the topic with a more detailed report on my experience using GoodSearch for a year and how that has worked out for me.

What is GoodSearch?

GoodSearch is a site that donates one cent to a charity of your choice whenever you make a search using their (Bing-powered) service. You can set this search to operate in your browser just like any other.

GoodSearch for Charity

During a year of using GoodSearch, I raised $103.00 for MIRI through making searches. This number is not particularly huge in itself, but it is meaningful because this was basically "free money"-- money gained in exchange for doing things that I was already doing. In exchange for spending ~10 minutes reconfiguring my default searches and occasionally logging in to GoodSearch, I made 103 dollars for MIRI-- approximately $600/hour. As my current earning potential is less than $600/hour, I consider adopting GoodSearch a highly efficient method of donating to charity, at least for me.
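To make the arithmetic explicit (a quick sketch using only the figures given above; the ten minutes of setup is the only time cost counted):

```python
donation = 103.00       # dollars raised for MIRI in one year
per_search = 0.01       # GoodSearch donates one cent per search
setup_hours = 10 / 60   # ~10 minutes of one-time configuration

searches = donation / per_search
print(searches)                 # about 10,300 searches, roughly 28 per day
print(donation / setup_hours)   # ~618 dollars raised per hour of setup time
```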

It is possible that you make many fewer searches than I do, and thus that setting up GoodSearch will not be very effective for you at raising money. Indeed, I think this is at least somewhat likely. However, there are two mitigating factors here:

First, you don't have to make all that many searches for GoodSearch to be a good idea. If you make a tenth of the searches I do in a year, you would still be earning around $60/hour for charity by configuring GoodSearch for ten minutes.

Second, I anticipate that, having created a GoodSearch account and configured my default settings to use GoodSearch, I have accomplished the bulk of this task, and that next year I will spend significantly less time setting up GoodSearch-- perhaps half that, if not less. This means that my projected returns on using GoodSearch next year are $1200/hour! If this holds true for you as well, even if setting up GoodSearch is marginal now, it could well be worth it later.

It is also of course possible that you will make many more searches than I do, and thus that setting up GoodSearch will be even more effective for you than it is for me. I think this is somewhat unlikely, as I consider myself rather good at using search engines and quick to use them to resolve problems, but I would love to be proven wrong.

GoodSearch for Personal Effectiveness

Perhaps more importantly, though, I found that using GoodSearch was a very effective way of getting me to search more often. I had previously identified not using search engines as often as I could as a weakness that was causing me to handle some matters inefficiently. In general, there are many situations where the value of information that can be obtained by using search engines is high, but one may not be inclined to search immediately.

For me, using GoodSearch solved this problem; while a single cent to MIRI for each search doesn't seem like much, it was enough to give me a little ping of happiness every time I searched for anything, which in turn was enough to reinforce my searching habit and take things to the next level. GoodSearch essentially created a success spiral that led to me using both search engines and the Internet itself much more effectively.

Disadvantages of GoodSearch

GoodSearch has one notable disadvantage-- it is powered by Bing rather than by Google search. When I first tried GoodSearch, I expected search quality to be much worse. In practice, though, I found that my fears were overblown. GoodSearch results were completely fine in almost all cases, and in the few situations where it proved insufficient, I could easily retry a search in Google-- though often Google too lacked the information I was looking for.

If you are a Google search "power user" (if you don't know if you are, you probably aren't), GoodSearch may not work well for you, as you will be accustomed to using methods that may no longer apply.

Summary/tl;dr

After a year of using GoodSearch, I found it to be both an effective way to earn money for charity and an effective way to motivate myself to use search engines more often. I suggest that other users try using GoodSearch and seeing if it has similarly positive effects; the costs of trying this are very low and the potential upside is high.

Superintelligence 5: Forms of Superintelligence

12 KatjaGrace 14 October 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 3 (p52-61)


Summary

  1. A speed superintelligence could do what a human does, but faster. This would make the outside world seem very slow to it. It might cope with this partially by being very tiny, or virtual. (p53)
  2. A collective superintelligence is composed of smaller intellects, interacting in some way. It is especially good at tasks that can be broken into parts and completed in parallel. It can be improved by adding more smaller intellects, or by organizing them better. (p54)
  3. A quality superintelligence can carry out intellectual tasks that humans just can't in practice, without necessarily being better or faster at the things humans can do. This can be understood by analogy with the difference between other animals and humans, or the difference between humans with and without certain cognitive capabilities. (p56-7)
  4. These different kinds of superintelligence are especially good at different kinds of tasks. We might say they have different 'direct reach'. Ultimately they could all lead to one another, so can indirectly carry out the same tasks. We might say their 'indirect reach' is the same. (p58-9)
  5. We don't know how smart it is possible for a biological or a synthetic intelligence to be. Nonetheless we can be confident that synthetic entities can be much more intelligent than biological entities:
    1. Digital intelligences would have better hardware: they would be made of components ten million times faster than neurons; the components could communicate about two million times faster than neurons can; they could use many more components while our brains are constrained to our skulls; it looks like better memory should be feasible; and they could be built to be more reliable, long-lasting, flexible, and well suited to their environment.
    2. Digital intelligences would have better software: they could be cheaply and non-destructively 'edited'; they could be duplicated arbitrarily; they could have well aligned goals as a result of this duplication; they could share memories (at least for some forms of AI); and they could have powerful dedicated software (like our vision system) for domains where we have to rely on slow general reasoning.

Notes

  1. This chapter is about different kinds of superintelligent entities that could exist. I like to think about the closely related question, 'what kinds of better can intelligence be?' You can be a better baker if you can bake a cake faster, or bake more cakes, or bake better cakes. Similarly, a system can become more intelligent if it can do the same intelligent things faster, or if it does things that are qualitatively more intelligent. (Collective intelligence seems somewhat different, in that it appears to be a means to be faster or able to do better things, though it may have benefits in dimensions I'm not thinking of.) I think the chapter is getting at different ways intelligence can be better rather than 'forms' in general, which might vary on many other dimensions (e.g. emulation vs AI, goal directed vs. reflexive, nice vs. nasty).
  2. Some of the hardware and software advantages mentioned would be pretty transformative on their own. If you haven't before, consider taking a moment to think about what the world would be like if people could be cheaply and perfectly replicated, with their skills intact. Or if people could live arbitrarily long by replacing worn components. 
  3. The main differences between increasing intelligence of a system via speed and via collectiveness seem to be: (1) the 'collective' route requires that you can break up the task into parallelizable subtasks, (2) it generally has larger costs from communication between those subparts, and (3) it can't produce a single unit as fast as a comparable 'speed-based' system. This suggests that anything a collective intelligence can do, a comparable speed intelligence can do at least as well. One counterexample to this I can think of is that often groups include people with a diversity of knowledge and approaches, and so the group can do a lot more productive thinking than a single person could. It seems wrong to count this as a virtue of collective intelligence in general however, since you could also have a single fast system with varied approaches at different times.
  4. For each task, we can think of curves for how performance increases as we increase intelligence in these different ways. For instance, take the task of finding a fact on the internet quickly. It seems to me that a person who ran at 10x speed would get the figure 10x faster. Ten times as many people working in parallel would do it only a bit faster than one, depending on the variance of their individual performance, and whether they found some clever way to complement each other. It's not obvious how to multiply qualitative intelligence by a particular factor, especially as there are different ways to improve the quality of a system. It also seems non-obvious to me how search speed would scale with a particular measure such as IQ. (A toy simulation of the speed-versus-parallelism comparison appears just after these notes.)
  5. How much more intelligent do human systems get as we add more humans? I can't find much of an answer, but people have investigated the effect of things like team size, city size, and scientific collaboration on various measures of productivity.
  6. The things we might think of as collective intelligences - e.g. companies, governments, academic fields - seem notable to me for being slow-moving, relative to their components. If someone were to steal some chewing gum from Target, Target can respond in the sense that an employee can try to stop them. And this is no slower than an individual human acting to stop their chewing gum from being taken. However it also doesn't involve any extra problem-solving from the organization - to the extent that the organization's intelligence goes into the issue, it has to have already done the thinking ahead of time. Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.
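Following up on point 4 above, here is a toy simulation (my own sketch, not from the chapter): model the time for one person to find the fact as a random draw. A 10x-speed searcher divides the draw by ten, while ten parallel searchers take the minimum of ten independent draws, which helps much less unless individual times vary a lot.

```python
import random
import statistics

def search_time():
    # Time for one person to find the fact, in minutes (an arbitrary toy distribution).
    return random.lognormvariate(1.0, 0.5)

trials = 10_000
solo     = [search_time() for _ in range(trials)]
sped_up  = [t / 10 for t in solo]                                          # one person at 10x speed
parallel = [min(search_time() for _ in range(10)) for _ in range(trials)]  # ten people in parallel

print(statistics.mean(solo), statistics.mean(sped_up), statistics.mean(parallel))
# Typically: the speed-up cuts the mean time by 10x, parallelism by much less.
```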

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Produce improved measures of (substrate-independent) general intelligence. Build on the ideas of Legg, Yudkowsky, Goertzel, Hernandez-Orallo & Dowe, etc. Differentiate intelligence quality from speed.
  2. List some feasible but non-realized cognitive talents for humans, and explore what could be achieved if they were given to some humans.
  3. List and examine some types of problems better solved by a speed superintelligence than by a collective superintelligence, and vice versa. Also, what are the returns on “more brains applied to the problem” (collective intelligence) for various problems? If there were merely a huge number of human-level agents added to the economy, how much would it speed up economic growth, technological progress, or other relevant metrics? If there were a large number of researchers added to the field of AI, how would it change progress?
  4. How does intelligence quality improve performance on economically relevant tasks?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about 'intelligence explosion kinetics', a topic at the center of much contemporary debate over the arrival of machine intelligence. To prepare, read Chapter 4, The kinetics of an intelligence explosion (p62-77). The discussion will go live at 6pm Pacific time next Monday, 20 October. Sign up to be notified here.

Is this paper formally modeling human (ir)rational decision making worth understanding?

11 rule_and_line 23 October 2014 10:11PM

I've found that I learn new topics best by struggling to understand a jargoney paper.  This passed through my inbox today and on the surface it appears to hit a lot of high notes.

Since I'm not an expert, I have no idea if this has any depth to it.  Hivemind thoughts?

Modeling Human Decision Making using Extended Behavior Networks, Klaus Dorer

(Note: I'm also pushing myself to post to LW instead of lurking.  If this kind of post is unwelcome, I'm happy to hear that feedback.)

Blackmail, continued: communal blackmail, uncoordinated responses

11 Stuart_Armstrong 22 October 2014 05:53PM

The heuristic that one should always resist blackmail seems a good one (no matter how tricky blackmail is to define). And one should be public about this, too; then, one is very unlikely to be blackmailed. Even if one speaks like an emperor.

But there's a subtlety: what if the blackmail is being used against a whole group, not just against one person? The US justice system is often seen to function like this: prosecutors pile on ridiculous numbers of charges, threatening uncounted millennia in jail, in order to get the accused to settle for a lesser charge and avoid the expense of a trial.

But for this to work, they need to occasionally find someone who rejects the offer, put them on trial, and slap them with a ridiculous sentence. Therefore by standing up to them (or proclaiming in advance that you will reject such offers), you are not actually making yourself immune to their threats. You're setting yourself up to be the sacrificial one, made an example of.

Of course, if everyone were a UDT agent, the correct decision would be for everyone to reject the threat. That would ensure that the threats are never made in the first place. But - and apologies if this shocks you - not everyone in the world is a perfect UDT agent. So the threats will get made, and those resisting them will get slammed to the maximum.

Of course, if everyone could read everyone's mind and was perfectly rational, then they would realise that making examples of UDT agents wouldn't affect the behaviour of non-UDT agents. In that case, UDT agents should resist the threats, and the perfectly rational prosecutor wouldn't bother threatening UDT agents. However - and sorry to shock your views of reality three times in one post - not everyone is perfectly rational. And not everyone can read everyone's minds.

So even a perfect UDT agent must, it seems, sometimes succumb to blackmail.

Four things every community should do

11 Gunnar_Zarncke 20 October 2014 05:24PM

Yesterday I attended a church service in Romania, where I was visiting my sister, and the sermon was about the four things a (Christian) community has to follow to persevere and grow.

I first considered just posting the quote from the Acts of the Apostles (reproduced below) in the Rationality Quotes Thread, but I fear that without explanation the inferential gap of the quote is too large.

The LessWrong Meetups, the EA community and other rationalist communities probably can learn from the experience of long-established orders (I once asked for lessons from freemasonry).

So I drew the following connections:

According to the sermon and the verse below, the four pillars of a Christian community are:

 

  1. Some canon of scripture, which for LW might be compared to the sequences. I'm not clear what the counterpart for EA is.
  2. Taking part in a closely knit community. Coming together regularly (weekly I guess is optimal).
  3. Eat together and have rites/customs together (this is also emphasized in the LW Meetup flyer).
  4. Praying together. I think praying could be generalized to talking and thinking about the scripture by oneself and together. Prayer also has a component of daily reflection of achievements, problems, wishes.

 

Other analogies that I drew from the quote:

 

  • Verse 44 describes behaviour also found in communes.
  • Verse 45 sounds a lot like EA teachings if you generalize it.
  • Verse 47 the last sentence could be interpreted to indicate exponential growth as a result of these teachings.
  • The verses also seem to imply some reachout by positive example.

 

And what I just right now notice is that embedding the rules in the scripture is essentially self-reference. As the scripture is canon this structure perpetuates itself. Clearly a meme that ensures its reproduction.

Does this sound convincing and plausible, or did I fall prey to some bias in (over)interpreting the sermon?

I hope this is upvoted for the lessons we might draw from this - despite the quote clearly being theistic in origin.


[Link]"Neural Turing Machines"

10 Prankster 31 October 2014 08:54AM

The paper.

Discusses the technical aspects of one of Google's AI projects. According to a PCWorld article, the system "apes human memory and programming skills" (the article seems pretty solid and also contains a link to the paper).

The abstract:

We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
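For intuition, here is a minimal sketch (not the paper's actual implementation) of the content-based read and write the abstract alludes to: attention weights are computed by comparing a key against every memory row, and the memory is read and updated softly so everything stays differentiable. Location-based addressing and the controller network are omitted.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_weights(memory, key, sharpness=5.0):
    # Cosine similarity between the key and each memory row, sharpened and normalized.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    return softmax(sharpness * sims)

def read(memory, weights):
    # Soft read: a weighted average of the memory rows.
    return weights @ memory

def write(memory, weights, erase, add):
    # Soft write: each row is partially erased and added to, in proportion to its weight.
    memory = memory * (1 - np.outer(weights, erase))
    return memory + np.outer(weights, add)

# Toy usage: 8 memory slots, each a vector of width 4.
M = np.random.randn(8, 4)
key = np.random.randn(4)
w = content_weights(M, key)
M = write(M, w, erase=np.full(4, 0.5), add=np.ones(4))
print(read(M, w))
```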

 

(First post here, feedback on the appropriateness of the post appreciated)

LW Supplement use survey

10 FiftyTwo 28 October 2014 09:28PM

I've put together a very basic survey using Google Forms, inspired by NancyLebovitz's recent discussion post on supplement use.

The survey includes options for "other" and "do not use supplements." Results are anonymous, and you can view all the results once you have filled it in, or use this link.

 

Link to the Survey

What is optimization power, formally?

10 sbenthall 18 October 2014 06:37PM

I'm interested in thinking formally about AI risk. I believe that a proper mathematization of the problem is important to making intellectual progress in that area.

I have been trying to understand the rather critical notion of optimization power. I was hoping that I could find a clear definition in Bostrom's Superintelligence. But having looked in the index at all the references to optimization power that it mentions, as far as I can tell he defines it nowhere. The closest he gets is defining it in terms of rate of change and recalcitrance (pp.62-77). This is an empty definition--just tautologically defining it in terms of other equally vague terms.

Looking around, this post by Yudkowsky, "Measuring Optimization Power", doesn't directly formalize optimization power. He does discuss how one would predict or identify if a system were the result of an optimization process in a Bayesian way:

The quantity we're measuring tells us how improbable this event is, in the absence of optimization, relative to some prior measure that describes the unoptimized probabilities.  To look at it another way, the quantity is how surprised you would be by the event, conditional on the hypothesis that there were no optimization processes around.  This plugs directly into Bayesian updating: it says that highly optimized events are strong evidence for optimization processes that produce them.

This is not, however, a definition that can be used to help identify the pace of AI development, for example. Rather, it is just an expression of how one would infer anything in a Bayesian way, applied to the vague 'optimization process' phenomenon.
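To make the Bayesian reading concrete, here is a minimal sketch of the bits-of-optimization idea (my own illustration, not a definition from the post): draw outcomes from the 'no optimizer' prior and ask how improbable it is to do at least as well as the observed outcome; the negative log of that probability is the optimization power in bits.

```python
import math
import random

def optimization_power_bits(observed_utility, prior_sampler, n=100_000):
    # Estimate P(outcome at least this good | no optimization) by Monte Carlo,
    # then convert that improbability into bits.
    hits = sum(prior_sampler() >= observed_utility for _ in range(n))
    p = max(hits, 1) / n  # avoid log(0) when no sample does as well
    return -math.log2(p)

# Toy example: the unoptimized prior over utility is uniform on [0, 1].
# An outcome of 0.999 is then roughly -log2(0.001), i.e. about 10 bits of optimization.
print(optimization_power_bits(0.999, random.random))
```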

Alex Altair has a promising attempt at formalization here but it looks inconclusive. He points out the difficulty of identifying optimization power with just the shift in the probability mass of utility according to some utility function. I may be misunderstanding, but my gloss on this is that defining optimization power purely in terms of differences in probability of utility doesn't say anything substantive about how a process has power. Which is important if it is going to be related to some other concept like recalcitrance in a useful way.

Has there been any further progress in this area?

It's notable that this discussion makes zero references to computational complexity, formally or otherwise. That's notable because the informal discussion about 'optimization power' is about speed and capacity to compute--whether it be brains, chips, or whatever. There is a very well-developed formal theory of computational complexity that's at the heart of contemporary statistical learning theory. I would think that the tools for specifying optimization power would be in there somewhere.

Those of you interested in the historical literature on this sort of thing may be interested in cyberneticists Rosenblueth, Wiener, and Bigelow's 1943 paper "Behavior, Purpose and Teleology", one of the first papers to discuss machine 'purpose', which they associate with optimization but in the particular sense of a process that is driven by a negative feedback loop as it approaches its goal. That does not exactly square with an 'explosive' teleology. This is one indicator that explosively purposeful machines might be quite rare or bizarre. In general, the 20th century cybernetics movement has a lot in common with the contemporary AI research community. Which is interesting, because its literature is rarely directly referenced. I wonder why.

Bayesian conundrum

10 Jan_Rzymkowski 13 October 2014 12:39AM

For some time I've been pondering on a certain scenario, which I'll describe shortly. I hope you may help me find a satisfactory answer or at very least be as perplexed by this probabilistic question as me. Feel free to assign any reasonable a priori probabilities as you like. Here's the problem:

It's a cold, cold winter. Radiators are hardly working, but it's not why you're sitting so anxiously in your chair. The real reason is that tomorrow is your assigned upload (and damn, it's just a one in a million chance you're not gonna get it) and you just can't wait to leave your corporality behind. "Oh, I'm so sick of having a body, especially now. I'm freezing!" you think to yourself, "I wish I were already uploaded and could just pop myself off to a tropical island."

And now it strikes you. It's a weird solution, but it feels so appealing. You make a solemn oath (you'd say there's a one in a million chance you'd break it) that soon after upload you will simulate this exact moment a thousand times simultaneously, and when the clock strikes 11 AM, you're gonna be transposed to a Hawaiian beach, with a fancy drink in your hand.

It's 10:59 on a clock. What's the probability that you'd be in a tropical paradise in one minute?

And to make things more paradoxical: what would that probability be if you hadn't made such an oath just seconds ago?

Superintelligence 6: Intelligence explosion kinetics

9 KatjaGrace 21 October 2014 01:00AM

This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.


Welcome. This week we discuss the sixth section in the reading guide: Intelligence explosion kinetics. This corresponds to Chapter 4 in the book, of a similar name. This section is about how fast a human-level artificial intelligence might become superintelligent.

This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 4 (p62-77)


Summary

  1. Question: If and when a human-level general machine intelligence is developed, how long will it be from then until a machine becomes radically superintelligent? (p62)
  2. The following figure from p63 illustrates some important features in Bostrom's model of the growth of machine intelligence. He envisages machine intelligence passing human-level, then at some point reaching the level where most inputs to further intelligence growth come from the AI itself ('crossover'), then passing the level where a single AI system is as capable as all of human civilization, then reaching 'strong superintelligence'. The shape of the curve is probably intended as an example rather than a prediction.
  3. A transition from human-level machine intelligence to superintelligence might be categorized into one of three scenarios: 'slow takeoff' takes decades or centuries, 'moderate takeoff' takes months or years and 'fast takeoff' takes minutes to days. Which scenario occurs has implications for the kinds of responses that might be feasible.
  4. We can model improvement in a system's intelligence with this equation:

    Rate of change in intelligence = Optimization power/Recalcitrance

    where 'optimization power' is effort being applied to the problem, and 'recalcitrance' is how hard it is to make the system smarter by applying effort.
  5. Bostrom's comments on recalcitrance of different methods of increasing kinds of intelligence:
    1. Cognitive enhancement via public health and diet: steeply diminishing returns (i.e. increasing recalcitrance)
    2. Pharmacological enhancers: diminishing returns, but perhaps there are still some easy wins because it hasn't had a lot of attention.
    3. Genetic cognitive enhancement: U-shaped recalcitrance - improvement will become easier as methods improve, but then returns will decline. Overall rates of growth are limited by maturation taking time.
    4. Networks and organizations: for organizations as a whole recalcitrance is high. A vast amount of effort is spent on this, and the world only becomes around a couple of percent more productive per year. The internet may have merely moderate recalcitrance, but this will likely increase as low-hanging fruits are depleted.
    5. Whole brain emulation: recalcitrance is hard to evaluate, but emulation of an insect will make the path much clearer. After human-level emulations arrive, recalcitrance will probably fall, e.g. because software manipulation techniques will replace physical-capital intensive scanning and image interpretation efforts as the primary ways to improve the intelligence of the system. Also there will be new opportunities for organizing the new creatures. Eventually diminishing returns will set in for these things. Restrictive regulations might increase recalcitrance.
    6. AI algorithms: recalcitrance is hard to judge. It could be very low if a single last key insight is discovered when much else is ready. Overall recalcitrance may drop abruptly if a low-recalcitrance system moves out ahead of higher recalcitrance systems as the most effective method for solving certain problems. We might overestimate the recalcitrance of sub-human systems in general if we see them all as just 'stupid'.
    7. AI 'content': recalcitrance might be very low because of the content already produced by human civilization, e.g. a smart AI might read the whole internet fast, and so become much better.
    8. Hardware (for AI or uploads): potentially low recalcitrance. A project might be scaled up by orders of magnitude by just purchasing more hardware. In the longer run, hardware tends to improve according to Moore's law, and the installed capacity might grow quickly if prices rise due to a demand spike from AI.
  6. Optimization power will probably increase after AI reaches human-level, because its newfound capabilities will attract interest and investment.
  7. Optimization power would increase more rapidly if AI reaches the 'crossover' point, when much of the optimization power is coming from the AI itself. Because smarter machines can improve their intelligence more than less smart machines, after the crossover a 'recursive self improvement' feedback loop would kick in.
  8. Thus optimization power is likely to increase during the takeoff, and this alone could produce a fast or medium takeoff. Further, recalcitrance is likely to decline. Bostrom concludes that a fast or medium takeoff looks likely, though a slow takeoff cannot be excluded.

Notes

1. The argument for a relatively fast takeoff is one of the most controversial arguments in the book, so it deserves some thought. Here is my somewhat formalized summary of the argument as it is presented in this chapter. I personally don't think it holds, so tell me if that's because I'm failing to do it justice. The pink bits are not explicitly in the chapter, but are assumptions the argument seems to use.

  1. Growth in intelligence = optimization power / recalcitrance [true by definition]
  2. Recalcitrance of AI research will probably drop or be steady when AI reaches human-level (p68-73)
  3. Optimization power spent on AI research will increase after AI reaches human level (p73-77)
  4. Optimization/Recalcitrance will stay similarly high for a while prior to crossover
  5. A 'high' O/R ratio prior to crossover will produce explosive growth OR crossover is close
  6. Within minutes to years, human-level intelligence will reach crossover [from 1-5]
  7. Optimization power will climb ever faster after crossover, in line with the AI's own growing capacity (p74)
  8. Recalcitrance will not grow much between crossover and superintelligence
  9. Within minutes to years, crossover-level intelligence will reach superintelligence [from 7 and 8]
  10. Within minutes to years, human-level AI will likely transition to superintelligence [from 6 and 9]

Do you find this compelling? Should I have filled out the assumptions differently?

***

2. Other takes on the fast takeoff 

It seems to me that 5 above is the most controversial point. The famous Foom Debate was a long argument between Eliezer Yudkowsky and Robin Hanson over the plausibility of fast takeoff, among other things. Their arguments were mostly about both arms of 5, as well as the likelihood of an AI taking over the world (to be discussed in a future week). The Foom Debate included a live verbal component at Jane Street Capital: blog summary, video, transcript. Hanson more recently reviewed Superintelligence, again criticizing the plausibility of a single project quickly matching the capacity of the world.

Kevin Kelly criticizes point 5 from a different angle: he thinks that speeding up human thought can't speed up progress all that much, because progress will quickly bottleneck on slower processes.

Others have compiled lists of criticisms and debates here and here.

3. A closer look at 'crossover'

Crossover is 'a point beyond which the system's further improvement is mainly driven by the system's own actions rather than by work performed upon it by others'. Another way to put this, avoiding certain ambiguities, is 'a point at which the inputs to a project are mostly its own outputs', such that improvements to its outputs feed back into its inputs. 

The nature and location of such a point seems an interesting and important question. If you think crossover is likely to be very nearby for AI, then you need only worry about the recursive self-improvement part of the story, which kicks in after crossover. If you think it will be very hard for an AI project to produce most of its own inputs, you may want to pay more attention to the arguments about fast progress before that point.

To have a concrete picture of crossover, consider Google. Suppose Google improves their search product such that one can find a thing on the internet a radical 10% faster. This makes Google's own work more effective, because people at Google look for things on the internet sometimes. How much more effective does this make Google overall? Maybe they spend a couple of minutes a day doing Google searches, i.e. 0.5% of their work hours, for an overall saving of .05% of work time. This suggests their next improvements made at Google will be made 1.0005 times faster than the last. It will take a while for this positive feedback to take off. If Google coordinated your eating and organized your thoughts and drove your car for you and so on, and then Google improved efficiency using all of those services by 10% in one go, then this might make their employees close to 10% more productive, which might produce more noticeable feedback. Then Google would have reached the crossover. This is perhaps easier to imagine for Google than other projects, yet I think still fairly hard to imagine.

Hanson talks more about this issue when he asks why the explosion argument doesn't apply to other recursive tools. He points to Douglas Englebart's ambitious proposal to use computer technologies to produce a rapidly self-improving tool set.

Below is a simple model of a project which contributes all of its own inputs, and one which begins mostly being improved by the world. They are both normalized to begin one tenth as large as the world and to grow at the same pace as each other (this is why the one with help grows slower, perhaps counterintuitively). As you can see, the project which is responsible for its own improvement takes far less time to reach its 'singularity', and is more abrupt. It starts out at crossover. The project which is helped by the world doesn't reach crossover until it passes 1. 
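Here is a minimal sketch of one way to set up such a model (my own assumptions, not necessarily the ones behind the original figure): the project's growth rate is proportional to its size times the inputs applied to it; the autonomous project's inputs are just its own size, while the helped project also receives the fixed-size world's efforts, and both are scaled to start at a tenth of the world's size with equal initial growth rates.

```python
# Toy comparison: a project improved only by itself vs one also helped by the world.
# Assumed growth model: dx/dt = k * x * inputs, with the world fixed at size 1.
def time_to_blowup(self_only, threshold=100.0, dt=0.001, max_t=50.0):
    x, world = 0.1, 1.0
    # Choose k so both projects start out growing at the same pace (as in the text).
    k = 1.0 if self_only else (0.1 * 0.1) / (0.1 * (0.1 + world))
    t = 0.0
    while t < max_t:
        inputs = x if self_only else x + world   # helped project reaches crossover once x > 1
        x += dt * k * x * inputs
        t += dt
        if x > threshold:
            return t
    return None

print("self-improving project:", time_to_blowup(True))   # roughly t = 10
print("world-helped project:  ", time_to_blowup(False))  # roughly t = 26
```

Under these assumptions the self-improving project reaches the threshold in well under half the time, and much more abruptly, matching the qualitative picture described above.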

 

 

4. How much difference does attention and funding make to research?

Interest and investments in AI at around human-level are (naturally) hypothesized to accelerate AI development in this chapter. It would be good to have more empirical evidence on the quantitative size of such an effect. I'll start with one example, because examples are a bit costly to investigate. I selected renewable energy before I knew the results, because they come up early in the Performance Curves Database, and I thought their funding likely to have been unstable. Indeed, OECD funding since the 70s looks like this apparently:

(from here)

The steep increase in funding in the early 80s was due to President Carter's energy policies, which were related to the 1979 oil crisis.

This is what various indicators of progress in renewable energies look like (click on them to see their sources):

 

 

 

There are quite a few more at the Performance Curves Database. I see surprisingly little relationship between the funding curves and these metrics of progress. Some of them are shockingly straight. What is going on? (I haven't looked into these more than you see here).

5. Other writings on recursive self-improvement

Eliezer Yudkowsky wrote about the idea originally, e.g. here. David Chalmers investigated the topic in some detail, and Marcus Hutter did some more. More pointers here.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Model the intelligence explosion more precisely. Take inspiration from successful economic models, and evidence from a wide range of empirical areas such as evolutionary biology, technological history, algorithmic progress, and observed technological trends. Eliezer Yudkowsky has written at length about this project.
  2. Estimate empirically a specific interaction in the intelligence explosion model. For instance, how much and how quickly does investment increase in technologies that look promising? How much difference does that make to the rate of progress in the technology? How much does scaling up researchers change output in computer science? (Relevant to how much adding extra artificial AI researchers speeds up progress) How much do contemporary organizations contribute to their own inputs? (i.e. how hard would it be for a project to contribute more to its own inputs than the rest of the world put together, such that a substantial positive feedback might ensue?) Yudkowsky 2013 again has a few pointers (e.g. starting at p15).
  3. If human thought was sped up substantially, what would be the main limits to arbitrarily fast technological progress?
If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter.  The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about 'decisive strategic advantage': the possibility of a single AI project getting huge amounts of power in an AI transition. To prepare, read Chapter 5, Decisive Strategic Advantage (p78-90). The discussion will go live at 6pm Pacific time next Monday, Oct 27. Sign up to be notified here.

Improving the World

9 Viliam_Bur 10 October 2014 12:24PM

What are we doing to make this world a better (epistemically or instrumentally) place?

Some answers to this question are already written in Bragging Threads and other places, but I think they deserve special emphasis. I think that many smart people are focused on improving themselves, which is a good thing in the long run, but sometimes the world needs some help right now. (Also, there is the failure mode of learning a lot about something, and then actually not applying that knowledge in real life.) Becoming stronger so you can create more good in the future is about the good you will create in the future; but what good are you creating right now?

 

Rules:

Top-level comments are the things you are doing right now (not merely planning to do once) to improve the world... or a part of the world... or your neighborhood... or simply any small part of the world other than only yourself.

Meta debates go under the "META" comment.

Manual for Civilization

8 RyanCarey 31 October 2014 09:36AM
I was wondering how seriously we've considered storing useful information to improve the chance of rebounding from a global catastrophe. I'm sure this has been discussed previously, but not in sufficient depth that I could find it on a short search of the site.

If we value future civilisation, it may be worth going to significant lengths to reduce existential risks. Some interventions will target specific risky tech, like AI and synthetic biology. However, just as many of today's risks could not have been identified a century ago, we should expect some emerging risks of the coming decades to also catch us by surprise. As argued by Karim Jebari, even if risks are not identifiable, we can take general-purpose methods to reduce them, by analogy to the principles of robustness and safety factors in engineering.

One such idea is to create a store of the kind of items one would want to recover from catastrophe. This idea varies based on which items are chosen and where they are stored. Nick Beckstead has investigated bunkers, and he basically rejected bunker-improvement because the strength of a bunker would not improve our resilience to known risks like AI, nuclear weapons or biowarfare. However, his analysis was fairly limited in scope. He focused largely on where to put people, food and walls, in order to manage known risks. It would be useful for further analysis to consider where you can put other items, like books, batteries or 3D printers, in an analysis of a range of scenarios that could arise from known or unknown risks. Though we can't currently identify many plausible risks that would leave us without 99% of civilisation, that's still a plausible situation that it's good to equip ourselves to recover from.

What information would we store? The Knowledge: How to Rebuild Civilisation From Scratch would be a good candidate based on its title alone, and a quick skim over io9's review. One could bury Wikipedia, the Internet Archive, or a bunch of other items suggested by The Long Now Foundation. A computer with a battery perhaps? Perhaps all of the above, to ward against the possibility that we miscalculate.

Where would we store it? Again, the principle of resilience would seem to dictate that we should store these in a variety of sites. They could be underground and overground, marked and unmarked, at busy and deserted sites of varying climate, and with various levels of security.

In general, this seems to be neglected, cheap, and unusually valuable, and so I would be interested to hear whether LessWrong has any further ideas about how this could be done well.

Further relevant reading:

  * GCRI paper, Adaptation to and Recovery From Global Catastrophe
  * Svalbard Global Seed Vault, a biodiversity store in the far North of Norway, Antarctica, started by Gates and others.

Link: Elon Musk wants gov't oversight for AI

8 polymathwannabe 28 October 2014 02:15AM

"I'm increasingly inclined to thing there should be some regulatory oversight, maybe at the national and international level just to make sure that we don't do something very foolish."

http://www.cnet.com/news/elon-musk-we-are-summoning-the-demon-with-artificial-intelligence/#ftag=CAD590a51e

Podcasts?

8 Capla 25 October 2014 11:42PM

I discovered podcasts last year, and I love them! Why not be hearing about new ideas while I'm walking to where I'm going? (Some of you might shout "insight porn!", and I think that I largely agree. However, 1) I don't have any particular problem with insight porn and 2) I have frequently been exposed to an idea or been recommended a book through a podcast, on which I later followed up, leading to more substantive intellectual growth.)

I wonder if anyone has favorites that they might want to share with me.

I'll start:

Radiolab is, hands down, the best of all the podcasts. This seems universally recognized: I’ve yet to meet anyone who disagrees. Even the people who make other podcasts think that Radiolab is better than their own. This one regularly invokes a profound sense of wonder at the universe and gratitude for being able to appreciate it. If you missed it somehow, you're probably missing out.

The Freakonomics podcast, in my opinion, comes close to Radiolab. All the things that you thought you knew, but didn’t, and all the things you never knew you wanted to know, but do, in typical Freakonomics style. Listening to their podcast is one of the two things that makes me happy.

There’s one other podcast that I consider to be in the same league (and this one you've probably never heard of) : The Memory Palace. 5-10 minute stories form history, it is really well done. It’s all the more impressive because while Radiolab and Freakonomics are both made by professional production teams in radio studies, The Memory Palace is just some guy who makes a podcast.

Those are my three top picks (and they are the only podcasts that I listen to at “normal” speed instead of x1.5 or x2.0, since their audio production is so good).

I discovered Rationally Speaking: Exploring the Borderlands Between Reason and Nonsense recently and I’m loving it. It is my kind of skeptics podcast, investigating topics that are on the fringe but not straight out bunk (I don't need to listen to yet another podcast about how astrology doesn't work). The interplay between the hosts, Massimo (who has a PhD in Philosophy, but also one in Biology, which excuses it) and Julia (who I only just realized is a founder of the CFAR), is great.

I also sometimes enjoy the Cracked podcast. They are comedians, not philosophers or social scientists, and sometimes their lack of expertise shows (especially when they are discussing topics about which I know more than they do), but comedians often have worthwhile insights and I have been intrigued by ideas they introduced me to or gotten books at the library on their recommendation.

To what is everyone else listening?

Edit: On the suggestion of several members on LessWrong I've begun listening to Hardcore History and its companion podcast Common Sense. They're both great. I have a good knowledge of history from my school days (I liked the subject, and I seem to have a strong propensity to retain extraneous information, particularly information in narrative form), and Hardcore History episodes are a great refresher course, reviewing what I'm already familiar with, but from a slightly different perspective, yielding new insights and a greater connectivity of history. I think it has almost certainly supplanted the Cracked podcast as number 5 on my list.
