Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open thread, Sept. 1-7, 2014

polymathwannabe 01 September 2014 12:18PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

[link] Large Social Networks can be Targeted for Viral Marketing with Small Seed Sets

1 Gunnar_Zarncke 01 September 2014 10:03AM

Large Social Networks can be Targeted for Viral Marketing with Small Seed Sets

It shows how easily a population can be influenced if control over a small subset exists.

A key problem for viral marketers is to determine an initial "seed" set [<1% of total size] in a network such that if given a property then the entire network adopts the behavior. Here we introduce a method for quickly finding seed sets that scales to very large networks. Our approach finds a set of nodes that guarantees spreading to the entire network under the tipping model. After experimentally evaluating 31 real-world networks, we found that our approach often finds such sets that are several orders of magnitude smaller than the population size. Our approach also scales well - on a Friendster social network consisting of 5.6 million nodes and 28 million edges we found a seed set in under 3.6 hours. We also find that highly clustered local neighborhoods and dense network-wide community structure together suppress the ability of a trend to spread under the tipping model.

This is relevant for LW because

a) Rational agents should hedge against this.

b) A UFAI could exploit this.

c) It gives hints for proofing systems against this 'exploit'.
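For intuition about the mechanism, here is a minimal sketch of the fixed-threshold "tipping" dynamic the abstract describes (not the paper's seed-selection algorithm): each node adopts once a given fraction of its neighbours has adopted, and we simulate spread from a small seed set. The network, threshold, and seed choice below are illustrative assumptions.

```python
# Minimal sketch of the "tipping" (fixed-threshold) model: a node adopts once
# at least `threshold` of its neighbours have adopted. Illustrative only; the
# paper's method for *choosing* the seed set is more sophisticated than this.
import networkx as nx

def spread_from_seeds(graph, seeds, threshold=0.4):
    """Return the set of adopters once spreading from `seeds` has stabilized."""
    adopters = set(seeds)
    changed = True
    while changed:
        changed = False
        for node in graph.nodes:
            if node in adopters:
                continue
            neighbours = list(graph.neighbors(node))
            if not neighbours:
                continue
            if sum(n in adopters for n in neighbours) / len(neighbours) >= threshold:
                adopters.add(node)
                changed = True
    return adopters

if __name__ == "__main__":
    g = nx.watts_strogatz_graph(n=1000, k=6, p=0.1, seed=0)  # toy "social network"
    seeds = list(g.nodes)[:20]                               # 2% of nodes as the seed set
    adopters = spread_from_seeds(g, seeds)
    print(f"{len(adopters)} of {g.number_of_nodes()} nodes adopted")
```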

Today's Extremist "Radical" Professors vs the Old "Red Intellectuals"

-8 HalMorris 01 September 2014 03:13AM

The following is how it looks to me from a distance.  Since many readers are fairly recent college graduates, I'd be interested in other views.  I've been thinking of posing a question about "Anti-rationalism on Campus".  I suspect there is a small cadre who may say more extreme-sounding things than have been said in the past, but they are incoherent, and reports of universities as left-wing robot factories are highly exaggerated.

>Today's "left wing" intellectuals are blatherers. Postmodernism is anti-Enlightenment and views Marxism as an unfortunate result of the Enlightenment the same as capitalism. Noam Chomsky calls himself an anarchist. They tend to be anti-everything when it comes to actually doing something.

>There is no international Communist movement, and there's been virtually none since Brezhnev, though the USSR ran around trying to buy a lot of countries, and certainly made a lot of trouble. If you want a clear picture of the era of "Red Intellectuals", read Witness by Whittaker Chambers, and then I suggest Reds: McCarthyism in Twentieth-Century America by Ted Morgan (despite the subtitle, McCarthyism is less than half of what the book covers). Chambers was the star witness for Nixon's "pumpkin papers" trial. Both cover a lot of just how deep the international Communist movement got into America, and Chambers writes beautifully and helps you to see why that was. He also speaks for the many who became deeply disillusioned by the Hitler-Stalin pact. I used to think that was odd because in my view it was a very natural reaction to Chamberlain's Munich, but the Communists really did put up a very good show of defining and opposing the Fascists (I say "a good show" for a reason but it's too complicated to say more), and for as long as that was true, a lot of people put a halo on them for that, then many of them became naively heartbroken.

Tips for writing philosophical texts

2 Jan_Rzymkowski 31 August 2014 10:38PM

For about four years I have been struggling to write a series of articles presenting a few of my ideas. While this "philosophy" (I'd rather avoid being too pompous about it) is still developing, there is a bunch of stuff of which I have a clear image in my mind. It is a framework for model building, with some possible applications for AI development, paradox resolution, and semantics. Nothing of serious impact, but I do believe it would prove useful.

I tried making notes or plans for articles several times, but every time I was discouraged by these problems:

  • the presented concept is too obvious
  • the presented concept is superfluous
  • the presented concept needs more basic ideas to be introduced beforehand

So the core problem is that to show applications of the theory (or generally more interesting results), more basic concepts must be introduced first. Yet presenting the basics seems boring and uninsightful without the application side. This seems to characterise many complex ideas.

Can you provide me with any practical tips on how to tackle this problem?

Why appearance matters or “to behave as if”

-1 AnnaLeptikon 31 August 2014 07:00PM
"*checking the name of the writer* Ooookay, this article about appearance is written by a woman. As was expected. It's probably not worth reading it..."

If you thought something like this, you have just confirmed how prejudices dominate our minds. And even if you didn't think something like that, you can't argue away their importance.


prejudices and stereotypes

Prejudice is prejudgment, or forming an opinion before becoming aware of the relevant facts of a case. (wikipedia)

The cognitive function of stereotypes is to help make sense of the world. They are a form of categorization that helps to simplify and systematize information. Thus, information is more easily identified, recalled, predicted, and reacted to. (wikipedia)

Prejudices and stereotypes might be useful and also harmful in some situations, but they definitely exist, with all their advantages and disadvantages. They are based on the fastest available information: general assumptions about latent variables (such as intelligence and character) are made based on external factors such as behavior and appearance.

Pygmalion effect/self-fulfilling prophecy

The Pygmalion effect, or Rosenthal effect, is the phenomenon whereby the greater the expectation placed upon people, the better they perform.[1] (Or the observer thinks it would be so!) A corollary of the Pygmalion effect is the golem effect, in which low expectations lead to a decrease in performance. (wikipedia)

So what others (and we ourselves) expect from us influences how they and we behave, and therefore influences our future and what we become!

Confirmation Bias

Confirmation bias, also called myside bias, is the tendency to search for or interpret information in a way that confirms one's beliefs or hypotheses. (wikipedia)

Because it is easier to confirm people in their presumptions than to convince them otherwise, it's a good idea to look like the person you want others to think you are.

Minority influence/innovation

Majority influence refers to the majority trying to produce conformity on the minority, while minority influence is converting the majority to adopt the thinking of the minority group.[1] Unlike other forms of influence, minority influence usually involves a personal shift in private opinion. Minority influence is also a central component of identity politics.(wikipedia)

Minorities have a bigger impact when they are consistent, are part of the ingroup, and differ only on this one point (idiosyncrasy credit). As an example, your chances of convincing others to legalize cannabis are higher if you wear suits instead of dreadlocks and hippie clothes. Your influence is therefore likely to be bigger if you behave and look like a well-adjusted or even successful person.

Self-evaluation

Since what others think of you will modify your self-evaluation, your appearance will influence your self-evaluation too - and also through direct feedback when you look in the mirror.


further thoughts/questions:
  • probably only people who have already thought about this will be attracted by the topic and read this article ^^
  • working in the opposite direction: being dressed too well might make you look stupid (imagine a "Barbie" talking about AI)
  • To what extent is it useful to "behave as if"?
  • What do you think about these thoughts in general?

My personal background: I tried lots of different styles (bold, dreadlocks, gothic, sporty, well-dressed ... ) and experienced big effects on how people behaved towards me.(pictures)

Superintelligence reading group

9 KatjaGrace 31 August 2014 02:59PM

In just over two weeks I will be running an online reading group on Nick Bostrom's Superintelligence, on behalf of MIRI. It will be here on LessWrong. This is an advance warning, so you can get a copy and get ready for some stimulating discussion. MIRI's post, appended below, gives the details.


Nick Bostrom’s eagerly awaited Superintelligence comes out in the US this week. To help you get the most out of it, MIRI is running an online reading group where you can join with others to ask questions, discuss ideas, and probe the arguments more deeply.

The reading group will “meet” on a weekly post on the LessWrong discussion forum. For each ‘meeting’, we will read about half a chapter of Superintelligence, then come together virtually to discuss. I’ll summarize the chapter, and offer a few relevant notes, thoughts, and ideas for further investigation. (My notes will also be used as the source material for the final reading guide for the book.)

Discussion will take place in the comments. I’ll offer some questions, and invite you to bring your own, as well as thoughts, criticisms and suggestions for interesting related material. Your contributions to the reading group might also (with permission) be used in our final reading guide for the book.

We welcome both newcomers and veterans on the topic. Content will aim to be intelligible to a wide audience, and topics will range from novice to expert level. All levels of time commitment are welcome.

We will follow this preliminary reading guide, produced by MIRI, reading one section per week.

If you have already read the book, don’t worry! To the extent you remember what it says, your superior expertise will only be a bonus. To the extent you don’t remember what it says, now is a good time for a review! If you don’t have time to read the book, but still want to participate, you are also welcome to join in. I will provide summaries, and many things will have page numbers, in case you want to skip to the relevant parts.

If this sounds good to you, first grab a copy of Superintelligence. You may also want to sign up here to be emailed when the discussion begins each week. The first virtual meeting (forum post) will go live at 6pm Pacific on Monday, September 15th. Following meetings will start at 6pm every Monday, so if you’d like to coordinate for quick fire discussion with others, put that into your calendar. If you prefer flexibility, come by any time! And remember that if there are any people you would especially enjoy discussing Superintelligence with, link them to this post!

Topics for the first week will include impressive displays of artificial intelligence, why computers play board games so well, and what a reasonable person should infer from the agricultural and industrial revolutions.

Robin Hanson's "Overcoming Bias" posts as an e-book.

12 ciphergoth 31 August 2014 01:26PM

At Luke Muehlhauser's request, I wrote a script to scrape all of Robin Hanson's posts to Overcoming Bias into an e-book; here's a first beta release. Please comment here with any problems—posts in the wrong order, broken links, bad formatting, missing posts. Thanks!

 


 

Solstice 2014 / Rational Ritual Retreat - A Call to Arms

9 Raemon 30 August 2014 05:51PM


Summary:

 •  I'm beginning work on the 2014 Winter Solstice. There are a lot of jobs to be done, and the more people who can dedicate serious time to it, the better the end result will be and the more locations it can take place. A few people have volunteered serious time, and I wanted to issue a general call, to anyone who's wanted to be part of this but wasn't sure how. Send me an e-mail at raemon777@gmail.com if you'd like to help with any of the tasks listed below (or others I haven't thought of).

 •  More generally, I think people working on rational ritual, in any form, should be sharing notes and collaborating more. There's a fair number of us, but we're scattered across the country and haven't really felt like part of the same team. And it seems a bit silly for people working on ritual to be scattered and disunited. So I am hosting the first Rational Ritual Retreat at the end of September. The exact date and location have yet to be determined. You can apply at humanistculture.com, noting your availability, and I will determine



The Rational Ritual Retreat

For the past three years, I've been running a winter solstice holiday, celebrating science and human achievement. Several people have come up to me and told me it was one of the most unique, profound experiences they've participated in, inspiring them to work harder to make sure humanity has a bright future. 

I've also had a number of people concerned that I'm messing with dangerous aspects of human psychology, fearing what will happen to a rationality community that gets involved with ritual.

Both of these thoughts are incredibly important. I've written a lot on the value and danger of ritual. [1]

Ritual is central to the human experience. We've used it for thousands of years to bind groups together. It helps us internalize complex ideas. A winning version of rationality needs *some* way of taking complex ideas and getting System 1 to care about them, and I think ritual is at least one tool we should consider.

In the past couple weeks, a few thoughts occurred to me at once:

1) Figuring out a rational approach to ritual that has a meaningful, useful effect on the world will require a lot of coordination among many skilled people.

2) If this project *were* to go badly somehow, I think the most likely reason would be someone copying parts of what I'm working on without understanding all the considerations that went into it, and creating a toxic (or hollow) variant that spirals out of control.

3) Many other people have approached the concept of rational ritual. But we've generally done so independently, often duplicating a lot of the same work and rarely moving on to more interesting and valuable experimentation. When we do experiment, we rarely share notes.

This all prompted a fourth realization:

4) If ritual designers are isolated and poorly coordinated... if we're duplicating a lot of the same early work and not sharing concerns about potential dangers, then one obvious (in retrospect) solution is to have a ritual about ritual creation.

So, the Rational Ritual Retreat. We'll hike out into a dark sky reserve, when there's no light pollution and the Milky Way looms large and beautiful above us. We'll share our stories, our ideas for a culture grounded in rationality yet tapped into our primal human desires. Over the course of an evening we'll create a ceremony or two together, through group consensus and collaboration. We'll experiment with new ideas, aware that some may work well, and some may not - that's how progress is made.

This is my experiment, attempting to answer the question Eliezer raised in "Bayesians vs Barbarians." It just seems really exceptionally silly to me that people motivated by rationality AND ritual should be so uncoordinated. 

Whether you're interested in directly creating ritual, or helping to facilitate its creation in one way or another (helping with art, marketing, logistics or funding of future projects), you are invited to attend. The location is currently undecided - there are reasons to consider the West Coast, East Coast or (if there's enough interest in both locations) both. 

Send in a brief application so I can make decisions about where and when to host it. I'll make the final decisions this upcoming Friday.

 


The Winter Solstice

The Retreat is part of a long-term vision of many people coming together to produce a culture (undoubtedly with numerous subcultures focusing on different aesthetics). Tentatively, I'd expect a successful rational-ritual culture to look sort of Open-Source-ish. (Or, more appropriately, I'd expect it to look like Burning Man. To be clear, Burning Man and variations already exist; my goal is not to duplicate that effort. It's to create something that a) is easier to integrate into people's lives, and b) specifically focuses on rationality and human progress.)

The Winter Solstice project is (at least for now) an important piece of that, partly because of the particular ideas it celebrates, but also because it's a demonstration of how you can create *any* cultural holiday from scratch that celebrates serious ideas in a non-ironic fashion.

My minimum goal this year is to finish the Hymnal, put more material online to help people create their own private events, and run another largish event in NYC. My stretch goals are to have high-quality public events in Boston and San Francisco. (Potentially other places if a lot of local people are interested and are willing to do the legwork.) 

My hope, to make those stretch goals possible, is to find collaborators willing to put in a fair amount of work. I'm specifically looking for help with the following:

  • Creative collaboration. Want to perform, create music, visual art, or host an event in your city?
  • Logistics, especially in different cities (finding venues, arranging catering, etc.)
  • Marketing: reaching out to bloggers, or creating images or videos for the social media campaign.
  • Technical aspects of production for the Hymnal (editing, figuring out best places

Each of these is something I'm able to do, but I have limited time, and the more time I can focus on creating

If you're interested in collaborating, volunteering, or running a local event, either reply here or send me an e-mail at raemon777@gmail.com 

 

 

[LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI

17 Sarokrae 30 August 2014 02:04PM

http://www.theguardian.com/technology/2014/aug/30/saviours-universe-four-unlikely-men-save-world

The article is titled "The scientific A-Team saving the world from killer viruses, rogue AI and the paperclip apocalypse", and features interviews with Martin Rees, Huw Price, Jaan Tallinn and Partha Dasgupta. The author takes a rather positive tone about CSER and MIRI's endeavours, and mentions x-risks other than AI (bioengineered pandemic, global warming with human interference, distributed manufacturing).

I find it interesting that the inferential distance for the layman to the concept of paperclipping AI is much reduced by talking about paperclipping America, rather than the entire universe: though the author admits still struggling with the concept. Unusually for a journalist who starts off unfamiliar with these concepts, he writes in a tone that suggests that he takes the ideas seriously, without the sort of "this is very far-fetched and thus I will not lower myself to seriously considering it" countersignalling usually seen with x-risk coverage. There is currently the usual degree of incredulity in the comments section though.

For those unfamiliar with The Guardian, it is a British left-leaning newspaper with a heavy focus on social justice and left-wing political issues. 

Meetup Report Thread: September 2014

7 Viliam_Bur 30 August 2014 12:32PM

If you had an interesting Less Wrong meetup recently, but don't have the time to write up a big report to post to Discussion, feel free to write a comment here.  Even if it's just a couple lines about what you did and how people felt about it, it might encourage some people to attend meetups or start meetups in their area.

If you have the time, you can also describe what types of exercises you did, what worked and what didn't.  This could help inspire meetups to try new things and improve themselves in various ways.

If you're inspired by what's posted below and want to organize a meetup, check out this page for some resources to get started!  You can also check FrankAdamek's weekly post on meetups for the week.

Previous Meetup Report Thread: February 2014

 

Guidelines:  Please post the meetup reports as top-level comments, and debate the specific meetup below its comment.  Anything else goes under the "Meta" top-level comment.  The title of this thread should be interpreted as "up to and including September 2014", which means feel free to post reports of meetups that happened in August, July, June, etc.

LessWrong Hamburg Meetup Notes - Diet

2 Gunnar_Zarncke 30 August 2014 09:40AM

Review of our LessWrong Hamburg Meetup - Diet

After I was approached a few times about another meetup, I scheduled it on short notice, and six of us met yesterday evening at my place.

Summary

It was a mostly unstructured talk where we discussed diet from different angles and a few other tangential topics. I also reported from my participation in the LW Berlin Meetup a few weeks ago (which led to a side-track about polyphasic sleep).

We discussed the benefits and risks of misc. dietary recommendations and seemed to agree on most points, most of which coincide with those discussed on LW before:

Links about polyphasic sleep:

Other LW Hamburg Meetup reviews

Funding cannibalism motivates concern for overheads

16 Thrasymachus 30 August 2014 12:42AM

Summary: 'Overhead expenses' (CEO salary, percentage spent on fundraising) are often deemed a poor measure of charity effectiveness by Effective Altruists, who therefore disprefer means of charity evaluation that rely on them. However, 'funding cannibalism' suggests that these metrics (and the norms that engender them) have value: if fundraising is broadly a zero-sum game between charities, then there's a commons problem where all charities could spend less money on fundraising and all do more good, but each is locally incentivized to spend more. Donor norms against increasing spending on zero-sum 'overheads' might be a good way of combating this. This valuable collective action of donors may explain the apparent underutilization of fundraising by charities, and perhaps should make us cautious in undermining it.

The EA critique of charity evaluation

Pre-GiveWell, the common means of evaluating charities (Guidestar, Charity Navigator) used a mixture of governance checklists and 'overhead indicators'. Charities would gain points both for having features associated with good governance (being transparent in the right ways, balancing budgets, the right sorts of corporate structure), and for spending their money on programs and avoiding 'overhead expenses' like administration and (especially) fundraising. For shorthand, call this 'common sense' evaluation.

The standard EA critique is that common sense evaluation doesn't capture what is really important: outcomes. It is easy to imagine charities that look really good to common sense evaluation yet have negligible (or negative) outcomes.  In the case of overheads, it becomes unclear whether these are even proxy measures of efficacy. Any fundraising that still 'turns a profit' looks like a good deal, whether it comprises five percent of a charity's spending or fifty.

A summary of the EA critique of common sense evaluation is that its myopic focus on these metrics gives pathological incentives, as these metrics frequently lie anti-parallel to maximizing efficacy. To score well on these evaluations, charities may be encouraged to raise less money, hire less able staff, and cut corners in their own management, even if doing these things would be false economies.

 

Funding cannibalism and commons tragedies

In the wake of the ALS 'Ice Bucket Challenge', Will MacAskill suggested there is a considerable amount of 'funding cannibalism' in the non-profit sector. Instead of the Ice Bucket Challenge 'raising' money for ALS, it has taken money that would have been donated to other causes instead - cannibalizing those causes. Rather than each charity raising funds independently of one another, they compete for a fairly fixed pie of aggregate charitable giving.

The 'cannibalism' thesis is controversial, but looks plausible to me, especially when looking at 'macro' indicators: the proportion of household spending that goes to charity looks pretty fixed whilst fundraising has increased dramatically, for example.

If true, cannibalism is important. As MacAskill points out, the tens of millions of dollars raised for ALS are no longer an untrammelled good, alloyed as they are with the opportunity cost of whatever other causes they have cannibalized (q.v.). There's also a more general consideration: if there is a fixed pot of charitable giving insensitive to aggregate fundraising, then fundraising becomes a commons problem. If all charities could spend less on their fundraising, none would lose out, so all could spend more of their funds on their programs. However, for any one alone to spend less on fundraising allows the others to cannibalize it.
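As a toy illustration of the commons structure (my numbers, not MacAskill's): assume a fixed donation pot that is split between charities in proportion to their fundraising spend, which comes out of their own budgets. Each charity gains by escalating unilaterally, yet every program budget shrinks when all of them escalate.

```python
# Toy model of fundraising as a commons problem: a fixed pot of donations is
# split in proportion to each charity's fundraising spend, and that spend comes
# out of its own budget. Numbers are illustrative, not empirical.
def program_budgets(fundraising_spend, pot=100.0):
    total = sum(fundraising_spend)
    shares = [pot * f / total for f in fundraising_spend]
    return [round(share - f, 2) for share, f in zip(shares, fundraising_spend)]

restraint = [5.0, 5.0, 5.0, 5.0]      # everyone keeps fundraising low
arms_race = [20.0, 20.0, 20.0, 20.0]  # everyone escalates
defection = [20.0, 5.0, 5.0, 5.0]     # one charity escalates unilaterally

print(program_budgets(restraint))   # [20.0, 20.0, 20.0, 20.0]
print(program_budgets(arms_race))   # [5.0, 5.0, 5.0, 5.0]      -> all worse off
print(program_budgets(defection))   # [37.14, 9.29, 9.29, 9.29] -> defector gains
```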

 

Civilizing Charitable Cannibals, and Metric Meta-Myopia

Coordination among charities to avoid this commons tragedy is far-fetched. Yet coordination of donors on shared norms about 'overhead ratio' can help. By penalizing a charity for spending too much on zero-sum games with other charities, like fundraising, donors can stop the race-to-the-bottom fundraising free-for-all and the burning of the charitable commons that it implies. The apparently high marginal return to fundraising might suggest this is already in effect (and effective!).

The contrarian take would be that it is the EA critique of charity evaluation which is myopic, not the charity evaluation itself - by looking at the apparent benefit to a single charity of more overhead, the EA critique ignores the broader picture of the non-profit ecosystem, and its attack undermines a key environmental protection of an important commons - further, one which the right tail of most effective charities benefits from just as much as the crowd of 'great unwashed' other causes. (Fundraising ability and efficacy look like they should be pretty orthogonal. Besides, if they correlate well enough that you'd expect the most efficacious charities to win the zero-sum fundraising game, couldn't you dispense with GiveWell and give to the best fundraisers?)

The contrarian view probably goes too far. Although there's a case for communally caring about fundraising overheads, as cannibalism leads us to guess it is zero sum, parallel reasoning is hard to apply to administration overhead: charity X doesn't lose out if charity Y spends more on management, but charity Y is still penalized by common sense evaluation even if its overall efficacy increases. I'd guess that features like executive pay lie somewhere in the middle: non-profit executives could be poached by for-profit industries, so it is not as simple as donors prodding charities to coordinate to lower executive pay; but donors can prod charities not to throw away whatever 'non-profit premium' they do have in competing with one another for top talent (c.f.). If so, we should castigate people less for caring about overhead, even if we still want to encourage them to care about efficacy too.

The invisible hand of charitable pan-handling

If true, it is unclear whether the story that should be told is 'common sense was right all along and the EA movement overconfidently criticised it' or 'a stopped clock is right twice a day, and the generally wrong-headed common sense approach had an unintended feature amongst the bugs'. I'd lean towards the latter, simply because the advocates of the common sense approach have not (to my knowledge) articulated these considerations themselves.

However, many of us believe the implicit machinery of the market can turn without many of the actors within it having any explicit understanding of it. Perhaps the same applies here. If so, we should be less confident in claiming the status quo is pathological and we can do better: there may be a rationale eluding both us and its defenders.

[question] Recommendations for fasting

3 Gunnar_Zarncke 30 August 2014 12:36AM

I am considering fasting for two weeks in October, but I'm unclear on whether fasting is beneficial in general, or what kind of fasting might be beneficial and healthy. This is thus a kind of request for a rational discussion of the topic.

I looked for relevant LW posts but couldn't find clear evidence. I think this is an underrepresented and possibly underutilized lifestyle intervention.

continue reading »

The Great Filter is early, or AI is hard

15 Stuart_Armstrong 29 August 2014 04:17PM

Attempt at the briefest content-full Less Wrong post:

Once AI is developed, it could "easily" colonise the universe. So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed. If AI is easy, we could conceivably have built it already, or we could be on the cusp of building it. So the Great Filter must predate us, unless AI is hard.

Weekly LW Meetups

1 FrankAdamek 29 August 2014 03:43PM

Calibrating your probability estimates of world events: Russia vs Ukraine, 6 months later.

16 shminux 28 August 2014 11:37PM

Some of the comments on the link by James_Miller exactly six months ago provided very specific estimates of how the events might turn out:

James_Miller:

  • The odds of Russian intervening militarily = 40%.
  • The odds of the Russians losing the conventional battle (perhaps because of NATO intervention) conditional on them entering = 30%.
  • The odds of the Russians resorting to nuclear weapons conditional on them losing the conventional battle = 20%.

Me:

"Russians intervening militarily" could be anything from posturing to weapon shipments to a surgical strike to a Czechoslovakia-style tank-roll or Afghanistan-style invasion. My guess is that the odds of the latter are below 5%.

A bet between James_Miller and solipsist:

I will bet you $20 U.S. (mine) vs $100 (yours) that Russian tanks will be involved in combat in the Ukraine within 60 days. So in 60 days I will pay you $20 if I lose the bet, but you pay me $100 if I win.

While it is hard to do any meaningful calibration based on a single event, there must be lessons to learn from it. Given that Russian armored columns are said to have captured key Ukrainian towns today, the first part of James_Miller's prediction has come true, even if it took 3 times longer than he estimated.

Note that even the most pessimistic person in that conversation (James) was probably too optimistic. My estimate of 5% appears way too low in retrospect, and I would probably bump it to 50% for a similar event in the future.

Now, given that the first prediction came true, how would one reevaluate the odds of the two further escalations he listed? I still feel that there is no way there will be a "conventional battle" between Russia and NATO, but having just been proven wrong makes me doubt my assumptions. If anything, maybe I should give more weight to what James_Miller (or at least Dan Carlin) has to say on the issue. And if I had any skin in the game, I would probably be even more cautious.
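One way to make this kind of retrospective systematic is to score each stated probability against the outcome with a proper scoring rule once it resolves. Below is a minimal sketch using the Brier and log scores on the two headline estimates quoted above (40% for a military intervention, 5% for a full tank-roll), treating both events as having resolved true, which follows the reading of events given here; this is an illustration, not a full calibration analysis.

```python
# Score the quoted probability estimates against an assumed outcome of "true"
# (Russia did intervene; the tank-roll estimate is scored as wrong, per the
# admission above). Lower Brier / less negative log score is better.
import math

def brier(p, outcome):      # outcome: 1 if the event happened, else 0
    return (p - outcome) ** 2

def log_score(p, outcome):  # natural-log scoring rule
    return math.log(p if outcome else 1.0 - p)

estimates = {"James_Miller (intervention, 40%)": 0.40,
             "shminux (tank-roll, 5%)": 0.05}

for name, p in estimates.items():
    print(f"{name}: Brier = {brier(p, 1):.3f}, log score = {log_score(p, 1):.2f}")
# James_Miller (intervention, 40%): Brier = 0.360, log score = -0.92
# shminux (tank-roll, 5%): Brier = 0.903, log score = -3.00
```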


Hal Finney has just died.

29 cousin_it 28 August 2014 07:39PM

Rationalist house

2 Elo 27 August 2014 10:52PM

At the Australia online hangout, one of the topics we discussed (before I fell asleep on camera in front of a bunch of people) was writing a rationality TV show as an outreach task.  Of course, there being more ways for this to go wrong than right, I figured it's worth mentioning the ideas and getting some comments.

The strategy is to have a set of regular characters whose rationality behaviour seems nuts - effectively, sometimes because it is, when taken out of context.  Then to have one "blank" person who tries to join "rationality house" and work things out.  My aim was to have each episode strawman a rationality behaviour and then steelman it, where by the end of the episode it saves the day, makes someone happy, achieves a goal - or some other <generic win-state>.

Here is a list of character notes from the hangout, plus potential topics to talk about.

  • No showers. Bacterial showers
  • Stopwatches everywhere
  • temperature controls everywhere, light controls.
  • radical honesty person.
  • Soylent only eating person
  • born-again atheist
  • bayesian person
  • Polyphasic sleep cycles.
I have not written much in my life and certainly never anything for TV, but it sounds like a fun project.  I figured I would pick a pilot idea, roll with it, and see if I can make a script.  I could probably also get Sydney folk to act for a first-round web-cast version.

I was wondering if anyone has any other rationality topics that can be easily strawmanned and then steelmanned, worth adding to the list.  Also whether anyone has experience worth sharing about writing for TV, and whether anyone is interested in joining the project to write or be a sounding board...


[LINK] Could a Quantum Computer Have Subjective Experience?

15 shminux 26 August 2014 06:55PM

Yet another exceptionally interesting blog post by Scott Aaronson, describing his talk at the Quantum Foundations of a Classical Universe workshop, videos of which should be posted soon. Despite the disclaimer "My talk is for entertainment purposes only; it should not be taken seriously by anyone", it raises several serious and semi-serious points about the nature of conscious experience and related paradoxes, which are generally overlooked by philosophers, including Eliezer, because they lack the relevant CS/QC expertise. For example:

  • Is an FHE-encrypted sim with a lost key conscious?
  • If you "untorture" a reversible simulation, did it happen? What does the untorture feel like?
  • Is the Vaidman brain conscious? (You have to read the blog post to learn what it is; I'm not going to spoil it.)

Scott also suggests a model of consciousness which sort-of resolves the issues of cloning, identity and such, by introducing what he calls a "digital abstraction layer" (again, read the blog post to understand what he means by that). Our brains might be lacking such a layer and so be "fundamentally unclonable". 

Another interesting observation is that you never actually kill the cat in the Schroedinger's cat experiment, for a reasonable definition of "kill".

There are several more mind-blowing insights in this "entertainment purposes" post/talk, related to the existence of p-zombies, consciousness of Boltzmann brains, the observed large-scale structure of the Universe and the "reality" of Tegmark IV.

I certainly got the humbling experience that Scott is the level above mine, and I would like to know if other people did, too.

Finally, the standard bright dilettante caveat applies: if you think up a quick objection to what an expert in the area argues, and you yourself are not such an expert, the odds are extremely heavy that this objection is either silly or has been considered and addressed by the expert already. 

 

Reverse engineering of belief structures

4 Stefan_Schubert 26 August 2014 06:00PM

(Cross-posted from my blog.)

Since some belief-forming processes are more reliable than others, learning by what processes different beliefs were formed is very useful, for several reasons. Firstly, if we learn that someone's belief that p (where p is a proposition such as "the cat is on the mat") was formed by a reliable process, such as visual observation under ideal circumstances, we have reason to believe that p is probably true. Conversely, if we learn that the belief that p was formed by an unreliable process, such as motivated reasoning, we have no particular reason to believe that p is true (though it might be - by luck, as it were). Thus we can use knowledge about the process that gave rise to the belief that p to evaluate the chance that p is true.

Secondly, we can use knowledge about belief-forming processes in our search for knowledge. If we learn that some alleged expert's beliefs are more often than not caused by unreliable processes, we are better off looking for other sources of knowledge. Or, if we learn that the beliefs we acquire under certain circumstances - say under emotional stress - tend to be caused by unreliable processes such as wishful thinking, we should cease to acquire beliefs under those circumstances.

Thirdly, we can use knowledge about others' belief-forming processes to try to improve them. For instance, if it turns out that a famous scientist has used outdated methods to arrive at their experimental results, we can announce this publicly. Such "shaming" can be a very effective means of scaring people into using more reliable methods, and will typically not only have an effect on the shamed person, but also on others who learn about the case. (Obviously, shaming also has its disadvantages, but my impression is that it has played a very important historical role in the spreading of reliable scientific methods.)

 

A useful way of inferring by what process a set of beliefs was formed is to look at its structure. This is a very general method, but in this post I will focus on how we can infer that a certain set of beliefs was most probably formed by (politically) motivated cognition. Another use is covered here, and more will follow in future posts.

Let me give two examples. Firstly, suppose that we give American voters the following four questions:

  1. Do expert scientists mostly agree that genetically modified foods are safe?
  2. Do expert scientists mostly agree that radioactive wastes from nuclear power can be safely disposed of in deep underground storage facilities?
  3. Do expert scientists mostly agree that global temperatures are rising due to human activities?
  4. Do expert scientists mostly agree that the "intelligent design" theory is false?

The answer to all of these questions is "yes".* Now suppose that a disproportionate number of Republicans answer "yes" to the first two questions and "no" to the third and fourth, and that a disproportionate number of Democrats answer "no" to the first two questions and "yes" to the third and fourth. In the light of what we know about motivated cognition, these are very suspicious patterns or structures of beliefs, since they are precisely the patterns we would expect given the hypothesis that people acquire whatever beliefs on empirical questions suit their political preferences. Since no other plausible hypothesis seems able to explain these patterns as well, this confirms that hypothesis. (Obviously, if we were to give the voters more questions and their answers retained their one-sided structure, that would confirm the hypothesis even more strongly.)

Secondly, consider a policy question - say minimum wages - on which a number of empirical claims have bearing. For instance, these empirical claims might be that minimum wages significantly decrease employers' demand for new workers, that they cause inflation, that they significantly increase the supply of workers (since they provide stronger incentives to work) and that they significantly reduce workers' tendency to use public services (since they now earn more). Suppose that there are five such claims which tell in favour of minimum wages and five that tell against them, and that you think that each of them has a roughly 50 % chance of being true. Also, suppose that they are probabilistically independent of each other, so that learning that one of them is true does not affect the probabilities of the other claims.

Now suppose that in a debate, all proponents of minimum wages defend all of the claims that tell in favour of minimum wages and reject all of the claims that tell against them, and vice versa for the opponents of minimum wages. This is a very surprising pattern. It might of course be that one side is right across the board, but given your prior probability distribution (that the claims are independent and each has a 50% probability of being true), a more reasonable interpretation of the striking degree of coherence within both sides is, according to your lights, that they are both biased; that they are both using motivated cognition. (See also this post for more on this line of reasoning.)
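To make the arithmetic explicit (a minimal sketch; the 0.9 likelihood under the bias hypothesis is an illustrative assumption, not something argued for above): under the stated prior of ten independent claims, each with a 50% chance of being true, the probability that an impartial reviewer ends up endorsing exactly the five pro claims and rejecting the five con claims is 0.5^10, so a perfectly partisan answer pattern yields a large likelihood ratio in favour of bias.

```python
# Likelihood ratio for a "perfectly partisan" pattern on ten independent claims,
# each with prior probability 0.5 of being true. The 0.9 likelihood under the
# bias hypothesis is an illustrative assumption, not a measured value.
p_pattern_given_impartial = 0.5 ** 10   # ~0.00098
p_pattern_given_biased = 0.9            # biased reasoners almost always line up

likelihood_ratio = p_pattern_given_biased / p_pattern_given_impartial
print(f"P(pattern | impartial) = {p_pattern_given_impartial:.5f}")
print(f"Likelihood ratio in favour of bias = {likelihood_ratio:.0f} : 1")  # ~922 : 1

# With, say, a 20% prior that a given debater is biased:
prior_biased = 0.2
posterior_biased = (prior_biased * p_pattern_given_biased) / (
    prior_biased * p_pattern_given_biased
    + (1 - prior_biased) * p_pattern_given_impartial
)
print(f"P(biased | pattern) = {posterior_biased:.3f}")  # ~0.996
```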

The difference between the first and the second case is that in the former, your hypothesis that the test-takers are biased is based on the fact that they are provably wrong on certain questions, whereas in the second case, you cannot point to any issue where either side is provably wrong. However, the patterns of their claims are so improbable given the hypothesis that they have reviewed the evidence impartially, and so likely given the hypothesis of bias, that they nevertheless strongly confirm the latter. What they are saying is simply "too good to be true".


These kinds of arguments, in which you infer a belief-forming process from a structure of beliefs (i.e. you reverse engineer the beliefs), have of course always been used. (A salient example is Marxist interpretations of "bourgeois" belief structures, which, Marx argued, supported their material interests to a suspiciously high degree.) Recent years have, however, seen a number of developments that should make them less speculative and more reliable and useful.

Firstly, psychological research such as Tversky and Kahneman's has given us a much better picture of the mechanisms by which we acquire beliefs. Experiments have shown that we fall prey to an astonishing list of biases, and have identified the circumstances that are most likely to trigger them. 

Secondly, a much greater portion of our behaviour is now being recorded, especially on the Internet (where we spend an increasing share of our time). This obviously makes it much easier to spot suspicious patterns of beliefs.

Thirdly, our algorithms for analyzing behaviour are quickly improving. FiveLabs recently launched a tool that analyzes your big five personality traits on the basis of your Facebook posts. Granted, this tool does not seem completely accurate, and inferring bias promises to be a harder task (since the correlations are more complicated than that between usage of exclamation marks and extraversion, or that between using words such as "nightmare" and "sick of" and neuroticism). Nevertheless, better algorithms and more computing power will take us in the right direction.

 

In my view, there is thus a large untapped potential to infer bias from the structure of people's beliefs, which in turn would be inferred from their online behaviour. In coming posts, I intend to flesh out my ideas on this in more detail. Any comments are welcome and might be incorporated in future posts.

 

* The second and the third questions are taken from a paper by Dan Kahan et al., which refers to the US National Academy of Sciences (NAS) assessment of expert scientists' views on these questions. Their study shows that many conservatives don't believe that experts agree on climate change, whereas a fair number of liberals think experts don't agree that nuclear storage is safe, confirming the hypothesis that people let their political preferences influence their empirical beliefs. The assessments of expert consensus on the first and fourth questions are taken from Wikipedia.

Asking people what they think about the expert consensus on these issues, rather than about the issues themselves, is a good idea, since it's much easier to come to an agreement on what the true answer is for the former sort of question. (Of course, you can deny that professors from prestigious universities count as expert scientists, but that would be a quite extreme position that few people hold.) 

Changes to my workflow

25 paulfchristiano 26 August 2014 05:29PM

About 18 months ago I made a post here on my workflow. I've received a handful of requests for follow-up, so I thought I would make another post detailing changes since then. I expect this post to be less useful than the last one.

For the most part, the overall outline has remained pretty stable and feels very similar to 18 months ago. Things not mentioned below have mostly stayed the same. I believe that the total effect of continued changes has been continued but much smaller improvements, though it is hard to tell (as opposed to the last changes, which were more clearly improvements).

Based on comparing time logging records I seem to now do substantially more work on average, but there are many other changes during this period that could explain the change (including changes in time logging). Changes other than work output are much harder to measure; I feel like they are positive but I wouldn't be surprised if this were an illusion.

Splitting days:

I now regularly divide my day into two halves, and treat the two halves as separate units. I plan each separately and reflect on each separately. I divide them by an hour long period of reflecting on the morning, relaxing for 5-10 minutes, napping for 25-30 minutes, processing my emails, and planning the evening. I find that this generally makes me more productive and happier about the day. Splitting my days is often difficult due to engagements in the middle of the day, and I don't have a good solution to that.

WasteNoTime:

I have longstanding objections to explicitly rationing internet use (since it seems either indicative of a broader problem that should be resolved directly, or else to serve a useful function that would be unwise to remove). That said, I now use the extension WasteNoTime to limit my consumption of blogs, webcomics, facebook, news sites, browser games, etc., to 10 minutes each half-day. This has cut the amount of time I spend browsing the internet from an average of 30-40 minutes to an average of 10-15 minutes. It doesn't seem to have been replaced by lower-quality leisure, but by a combination of work and higher-quality leisure.

Similarly, I turned off the newsfeed in facebook, which I found to improve the quality of my internet time in general (the primary issue was that I would sometimes be distracted by the newsfeed while sending messages over facebook, which wasn't my favorite way to use up wastenotime minutes).

I also tried StayFocusd, but ended up adopting WasteNoTime because of the ability to set limits per half-day (via "At work" and "not at work" timers) rather than per-day. I find that the main upside is cutting off the tail of derping (e.g. getting sucked into a blog comment thread, or looking into a particularly engrossing issue), and for this purpose per half-day timers are much more effective.

Email discipline:

I set gmail to archive all emails on arrival and assign them the special label "In." This lets me search for emails and compose emails, using the normal gmail interface, without being notified of new arrivals. I process the items with the label "In" (typically turning emails into todo items to be processed by the same system that deals with other todo items) at the beginning of each half day. Each night I scan my email quickly for items that require urgent attention. 

Todo lists / reminders:

I continue to use todo lists for each half day and for a range of special conditions. I now check these lists at the beginning of each half day rather than before going to bed.

I also maintain a third list of "reminders." These are things that I want to be reminded of periodically, organized by day; each morning I look at the day's reminders and think about them briefly. Each of them is copied and filed under a future day. If I feel like I remember a thing well I file it far in the future; if I feel like I don't remember it well I file it in the near future.

Over the last month most of these reminders have migrated to be in the form "If X, then Y," e.g. "If I agree to do something for someone, then pause, say `actually I should think about it for a few minutes to make sure I have time,' and set a 5 minute timer that night to think about it more clearly." These are designed to fix problems that I notice when reflecting on the day. This is a recommendation from CFAR folks, which seems to be working well, though is the newest part of the system and least tested.

Isolating "todos":

I now attempt to isolate things that probably need doing, but don't seem maximally important; I aim to do them only on every 5th day, and only during one half-day. If I can't finish them in this time, I will typically delay them 5 days. When they spill over to other days, I try to at least keep them to one half-day or the other. I don't know if this helps, but it feels better to have isolated unproductive-feeling blocks of time rather than scattering it throughout the week.

I don't do this very rigidly. I expect the overall level of discipline I have about it is comparable to or lower than a normal office worker who has a clearer division between their personal time and work time.

Toggl:

I now use Toggl for detailed time tracking. Katja Grace and I experimented with about half a dozen other systems (Harvest, Yast, Klok, Freckle, Lumina, I expect others I'm forgetting) before settling on Toggl. It has a depressing number of flaws, but ends up winning for me by making it very fast to start and switch timers which is probably the most important criterion for me. It also offers reviews that work out well with what I want to look at.

I find the main value adds from detailed time tracking are:

1. Knowing how long I've spent on projects, especially long-term projects. My intuitive estimates are often off by more than a factor of 2, even for things taking 80 hours; this can lead me to significantly underestimate the costs of taking on some kinds of projects, and it can also lead me to think an activity is unproductive instead of productive by overestimating how long I've actually spent on it.

2. Accurate breakdowns of time in a day, which guide efforts at improving my day-to-day routine. They probably also make me feel more motivated about working, and improve focus during work.

Reflection / improvement:

Reflection is now a smaller fraction of my time, down from 10% to 3-5%, based on diminishing returns to finding stuff to improve. Another 3-5% is now redirected into longer-term projects to improve particular aspects of my life (I maintain a list of possible improvements, roughly sorted by goodness). Examples: buying new furniture, improvements to my diet (Holden's powersmoothie is great), improvements to my sleep (low doses of melatonin seem good). At the moment the list of possible improvements is long enough that adding to the list is less valuable than doing things on the list.

I have equivocated a lot about how much of my time should go into this sort of thing. My best guess is the number should be higher.

-Pomodoros:

I don't use pomodoros at all any more. I still have periods of uninterrupted work, often of comparable length, for individual tasks. This change wasn't extremely carefully considered, it mostly just happened. I find explicit time logging (such that I must consciously change the timer before changing tasks) seems to work as a substitute in many cases. I also maintain the habit of writing down candidate distractions and then attending to them later (if at all).

For larger tasks I find that I often prefer longer blocks of unrestricted working time. I continue to use Alinof timer to manage these blocks of uninterrupted work.

-Catch:

Catch disappeared, and I haven't found a replacement that I find comparably useful. (It's also not that high on the list of priorities.) I now just send emails to myself, but I do it much less often.

-Beeminder:

I no longer use beeminder. This again wasn't super-considered, though it was based on a very rough impression of overhead being larger than the short-term gains. I think beeminder was helpful for setting up a number of habits which have persisted (especially with respect to daily routine and regular focused work), and my long-term averages continue to satisfy my old beeminder goals.

Project outlines:

I now organize notes about each project I am working on in a more standardized way, with "Queue of todos," "Current workspace," and "Data" as the three subsections. I'm not thrilled by this system, but it seems to be an improvement over the previous informal arrangement. In particular, having a workspace into which I can easily write thoughts without thinking about where they fit, and only later sorting them into the data section once it's clearer how they fit in, decreases the activation energy of using the system. I now use Toggl rather than maintaining time logs by hand.

Randomized trials:

As described in my last post I tried various randomized trials (esp. of effects of exercise, stimulant use, and sleep on mood, cognitive performance, and productive time). I have found extracting meaningful data from these trials to be extremely difficult, due to straightforward issues with signal vs. noise. There are a number of tests which I still do expect to yield meaningful data, but I've increased my estimates for the expensiveness of useful tests substantially, and they've tended to fall down the priority list. For some things I've just decided to do them without the data, since my best guess is positive in expectation and the data is too expensive to acquire.

 

The immediate real-world uses of Friendly AI research

4 ancientcampus 26 August 2014 02:47AM

Much of the glamor and attention paid toward Friendly AI is focused on the misty-future event of a super-intelligent general AI, and how we can prevent it from repurposing our atoms to better run Quake 2. Until very recently, that was the full breadth of the field in my mind. I recently realized that dumber, narrow AI is a real thing today, helpfully choosing advertisements for me and running my 401K. As such, making automated programs safe to let loose on the real world is not just a problem to solve as a favor for the people of tomorrow, but something with immediate real-world advantages that has indeed already been going on for quite some time. Veterans in the field surely already understand this, so this post is directed at people like me, with a passing and disinterested understanding of the point of Friendly AI research, and outlines an argument that the field may be useful right now, even if you believe that an evil AI overlord is not on the list of things to worry about in the next 40 years.

 

Let's look at the stock market. High-Frequency Trading is the practice of using computer programs to make fast trades constantly throughout the day, and accounts for more than half of all equity trades in the US. So, the economy today is already in the hands of a bunch of very narrow AIs buying and selling to each other. And as you may or may not already know, this has already caused problems. In the “2010 Flash Crash”, the Dow Jones suddenly and mysteriously hit a massive plummet only to mostly recover within a few minutes. The reasons for this were of course complicated, but it boiled down to a couple red flags triggering in numerous programs, setting off a cascade of wacky trades.
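Below is a purely stylized sketch of the cascade mechanism just described (not a reconstruction of the actual 2010 event, and all numbers are made up): each automated trader dumps its position when the price falls below its own trigger, and every sale pushes the price down far enough to trip the next trigger.

```python
# Stylized trading cascade: each program sells when the price drops below its
# trigger, and each sale pushes the price down further, tripping more triggers.
# Purely illustrative -- not a model of real market microstructure.
def run_cascade(price, triggers, impact_per_sale=1.5):
    sold = set()
    history = [price]
    changed = True
    while changed:
        changed = False
        for i, trigger in enumerate(triggers):
            if i not in sold and price < trigger:
                sold.add(i)               # this program's red flag fires
                price -= impact_per_sale  # its selling moves the price down
                history.append(price)
                changed = True
    return history

triggers = [99.0 - i for i in range(30)]  # triggers at 99, 98, 97, ... down to 70
print(run_cascade(price=98.5, triggers=triggers))
# A single modest dip below the first trigger unwinds the entire chain of sellers.
```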

 

The long-term damage was not catastrophic to society at large (though I'm sure a couple fortunes were made and lost that day), but it illustrates the need for safety measures as we hand over more and more responsibility and power to processes that require little human input. It might be a blue moon before anyone makes true general AI, but adaptive city traffic-light systems are entirely plausible in upcoming years.

 

To me, Friendly AI isn't solely about making a human-like intelligence that doesn't hurt us – we need techniques for testing automated programs, predicting how they will act when let loose on the world, and how they'll act when faced with unpredictable situations. Indeed, when framed like that, it looks less like a field for “the singularitarian cultists at LW”, and more like a narrow-but-important specialty in which quite a bit of money might be made.

 

After all, I want my self-driving car.

 

(To the actual researchers in FAI – I'm sorry if I'm stretching the field's definition to include more than it does or should. If so, please correct me.)

Persistent Idealism

9 jkaufman 26 August 2014 01:38AM

When I talk to people about earning to give, it's common to hear worries about "backsliding". Yes, you say you're going to go make a lot of money and donate it, but once you're surrounded by rich coworkers spending heavily on cars, clothes, and nights out, will you follow through? Working at a greedy company in a selfishness-promoting culture, you could easily become corrupted and lose your initial values and motivation.

First off, this is a totally reasonable concern. People do change, and we are pulled towards thinking like the people around us. I see two main ways of working against this:

  1. Be public with your giving. Make visible commitments and then list your donations. This means that you can't slowly slip away from giving; either you publish updates saying you're not going to do what you said you would, or you just stop updating and your pages become stale. By making a public promise you've given friends permission to notice that you've stopped and ask "what changed?"
  2. Don't just surround yourself with coworkers. Keep in touch with friends and family. Spend some time with other people in the effective altruism movement. You could throw yourself entirely into your work, maximizing income while sending occasional substantial checks to GiveWell's top picks, but without some ongoing engagement with the community and the research this doesn't seem likely to last.

One implication of the "won't you drift away" objection, however, is often that if instead of going into earning to give you become an activist then you'll remain true to your values. I'm not so sure about this: many people who are really into activism and radical change in their 20s have become much less ambitious and idealistic by their 30s. You can call it "burning out" or "selling out" but decreasing idealism with age is very common. This doesn't mean people earning to give don't have to worry about losing their motivation—in fact it points the opposite way—but this isn't a danger unique to the "go work at something lucrative" approach. Trying honestly to do the most good possible is far from the default in our society, and wherever you are there's going to be pressure to do the easy thing, the normal thing, and stop putting so much effort into altruism.

Open thread, 25-31 August 2014

3 jaime2000 25 August 2014 11:14AM

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Bayesianism for humans: prosaic priors

16 BT_Uytya 24 August 2014 11:14PM

There are two insights from Bayesianism which occurred to me and which I hadn't seen anywhere else before. 
I like lists in the two posts linked above, so for the sake of completeness, I'm going to add my two cents to the public domain. This post is about the second penny.

Prosaic Priors

The second insight can be formulated as «the dull explanations are more likely to be correct because they tend to have high prior probability.»

Why is that? 

1) Almost by definition! Some property X is 'banal' if X applies to a lot of people in a disappointingly mundane way, without any redeeming features which would make it rarer (and, hence, more interesting).

In other words, X is banal iff the base rate of X is high. Or, you could say, the prior probability of X is high.

1.5) Because of Occam's Razor and burdensome details. One way to make something boring more exciting is to add interesting details: some special features which will make sure that this explanation is about you as opposed to 'about almost anybody'.

This could work the other way around: sometimes an explanation feels unsatisfying precisely because it has been shaved of any unnecessary and (ultimately) burdensome details.

2) Often, the alternative to a mundane explanation is something unique and custom-made to fit the case you are interested in. And anybody familiar with overfitting and the conjunction fallacy (and the fact that people tend to love coherent stories with blinding passion1) should be very suspicious of such things. So there could be a strong bias against stale explanations, which should be countered (a toy calculation follows this list).
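As a toy illustration (all numbers invented, not from the post): suppose the dull explanation has a base rate of 30% while a tailor-made alternative has a base rate of 2%, and the tailor-made one fits your observation better (likelihood 0.9 vs 0.5). The dull explanation still wins on posterior odds:

```python
# Toy Bayes update: a high base rate beats a better fit (invented numbers).
p_dull, p_exotic = 0.30, 0.02     # priors: how common each explanation is
lik_dull, lik_exotic = 0.5, 0.9   # how well each explains the observation

posterior_odds = (p_dull * lik_dull) / (p_exotic * lik_exotic)
print(round(posterior_odds, 1))   # ~8.3 : 1 in favour of the dull explanation
```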

* * *

I fully grokked this while in the process of CBT-induced soul-searching; using the heuristic in that context still feels the most natural to me, but I believe its area of application is wider.

Examples

1) I'm fairly confident that I'm an introvert. Still, sometimes I can behave like an extrovert. I was interested in the causes of this "extroversion activation", as I called it2. I suspected that I really had two modes of functioning (with "introversion" being the default one), and some events — for example, mutual interest (when I am interested in a person I was talking to, and xe is interested in me) or feeling high-status — made me switch between them.

Or, you know, it could just be a reduction in social anxiety, which makes people more communicative. Heightened anxiety wasn't a new element to be postulated; I already knew I had it, yet I was tempted to make up new mental entities, and the prosaic explanation involving anxiety managed to elude me for a while.

2) I find it hard to do anything I consider worthwhile while on spring break, despite having lots of free time. I tend to make grandiose plans — I should meet new people! I should be more involved in sports! I should start using Anki! I should learn Lojban! I should practice meditation! I should read these textbooks, including doing most of the exercises! — and then fail to do almost anything. Yet I manage to do some impressive stuff during the academic term, despite having less time and more commitments.

This paradoxical situation calls for explanation.

The first hypothesis that came to my mind was about activation energy. It takes effort to go from "procrastinating" to "doing something"; speaking more generally, you can say that it takes effort to go from "lazy day" to "productive day". During the academic term, I am forced to make most of my days productive: I have to attend classes, do homework, etc. And, already having done something good, I can do something else as well. During spring break, I am deprived of that natural structure, and hence I am on my own when it comes to getting started on something worthwhile.

The alternative explanation: I was tired. Because, you know, vacation comes right after midterms, and I tend to go all out while preparing for midterms. I am exhausted, my energy and willpower are scarce, so it's no wonder I have trouble making use of my free time.

(I don't really believe in the latter explanation (I think that my situation is caused by several factors, including the two outlined above), so it is also an example of a descriptive "probable enough" hypothesis.)

3) This example comes from Slate Star Codex. Nerds tend to find aversive many group bonding activities that ordinary people supposedly enjoy, such as patriotism, prayer, team sports, and pep rallies. Supposedly, they should feel (with the tear-jerking passion of a thousand exploding suns) a great unity with their fellow citizens, church-goers, teammates or pupils respectively, but instead they feel nothing.

Might it be that nerds are unable to enjoy these activities because something is broken inside their brains? One could be tempted to construct an elaborate argument involving the autism spectrum and a mild case of schizoid personality disorder. In other words, this calls for postulating a rare form of autism which affects only some types of social behaviour (perception of group activities), leaving other types unchanged.

Or, you know, maybe nerds just don't like the group they are supposed to root for. Maybe nerds don't feel unity and a relationship to The Great Whole because they don't feel that they truly belong there.

As Scott put it, "It’s not that we lack the ability to lose ourselves in an in-group, it’s that all the groups people expected us to lose ourselves in weren’t ones we could imagine as our in-group by any stretch of the imagination"3.

4) This example comes from this short comic titled "Sherlock Holmes in real life".

* * *

...and after this the word "prosaic" quickly turned into an awesome compliment. Like, "so, this hypothesis explains my behaviour well; but is it boring enough?", or "your claim is refreshingly dull; I like it!".


1. If you had read Thinking: Fast and Slow, you probably know what I mean. If you hadn't, you can look at narrative fallacy in order to get a general idea.
2. Which was, as I now realize, an excellent way to deceive myself via using word with a lot of hidden assumptions. Taboo your words, folks!
3. As a side note, my friend proposed an alternative explanation: the thing is, nerds are often defined as "the sort of people who dislike pep rallies". So, naturally, we have "usual people" who like pep rallies and "nerds" who avoid them. And then "nerds dislike pep rallies" is a tautology rather than something to be explained.

Announcing The Effective Altruism Forum

25 RyanCarey 24 August 2014 08:07AM

The Effective Altruism Forum will be launched at effective-altruism.com on September 10, British time.

Now seems like a good time to discuss why we might need an Effective Altruism Forum, and how it might compare to LessWrong.

About the Effective Altruism Forum

The motivation for the Effective Altruism Forum is to improve the quality of effective altruist discussion and coordination. A big part of this is to give many of the useful features of LessWrong to effective altruists, including:

 

  • Archived, searchable content (this will begin with archived content from effective-altruism.com)
  • Meetups
  • Nested comments
  • A karma system
  • A dynamically updated list of external effective altruist blogs
  • Introductory materials (this will begin with these articles)

 

The Effective Altruism Forum has been designed by Mihai Badic. Over the last month, it has been developed by Trike Apps, who have built the new site using the LessWrong codebase. I'm glad to report that it is now basically ready, looks nice, and is easy to use.

I expect that at the new forum, as on the effective altruist Facebook and Reddit pages, people will want to discuss which intellectual procedures to use to pick effective actions. I also expect some proposals of effective altruist projects, and offers of resources. So users of the new forum will share LessWrong's interest in instrumental and epistemic rationality. On the other hand, I expect that few of its users will want to discuss the technical aspects of artificial intelligence, anthropics or decision theory, and to the extent that they do so, they will want to do it at LessWrong. As a result, I expect the new forum to cause:

 

  • A bunch of materials on effective altruism and instrumental rationality to be collated for new effective altruists
  • Discussion of old LessWrong materials to resurface
  • A slight increase to the number of users of LessWrong, possibly offset by some users spending more of their time posting at the new forum.

 

At least initially, the new forum won't have a wiki or a Main/Discussion split and won't have any institutional affiliations.

Next Steps:

It's really important to make sure that the Effective Altruism Forum is established with a beneficial culture. If people want to help that process by writing some seed materials, to be posted around the time of the site's launch, then they can contact me at ry [dot] duff [at] gmail.com. Alternatively, they can wait a short while until they automatically receive posting privileges.

It's also important that the Effective Altruism Forum helps the shared goals of rationalists and effective altruists, and has net positive effects on LessWrong in particular. Any suggestions for improving the odds of success for the effective altruism forum are most welcome.

[Link] Feynman lectures on physics

9 Mark_Friedenbach 23 August 2014 08:14PM

The Feynman lectures on physics are now available to read online for free. This is a classic resource not just for learning physics but also for the process of science and the mindset of a scientific rationalist.

Bayesianism for humans: "probable enough"

26 BT_Uytya 23 August 2014 05:57PM

There are two insights from Bayesianism which occurred to me and which I hadn't seen anywhere else before. 
I like lists in the two posts linked above, so for the sake of completeness, I'm going to add my two cents to the public domain. The post about the second penny will be up tomorrow, or a bit later.


"Probable enough"

When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs.


The Bayesian way of thinking introduced me to the idea of "a hypothesis which probably isn't true, but is probable enough to rise to the level of conscious attention" — in other words, to the situation where P(H) is notable but less than 50%.

Looking back, I think that the notion of taking seriously something which you don't think is true was alien to me. Hence, everything was either probably true or probably false; things in the former category were held with overconfident certainty, and things in the latter category were barely worth thinking about.

This model was correct, but only in a formal sense.

Suppose you are living in Gotham, the city famous for its crime rate and its masked (and well-funded) vigilante, Batman. Recently you read The Better Angels of Our Nature: Why Violence Has Declined by Steven Pinker, and according to some theories described there, Batman isn't good for Gotham at all.

Now you know, for example, the theory of Donald Black that "crime is, from the point of view of the perpetrator, the pursuit of justice". You know about the idea that in order for the crime rate to drop, people should perceive their legal system as legitimate. You suspect that criminals beaten by the Bat don't perceive the act as a fair and regular punishment for something bad, or as an attempt to defend them from injustice; instead the act is perceived as a round of bad luck. So the criminals are busy plotting their revenge, not internalizing civil norms.

You believe that if you send your copy of the book (with key passages highlighted) to a person connected to Batman, Batman will change his ways and Gotham will become much nicer in terms of homicide rate.

So you are trying to find out Batman's secret identity, and there are 17 possible suspects. Derek Powers looks like a good candidate: he is wealthy, and has a long history of secretly delegating tasks involving illegal violence to his henchmen; however, his motivation is far from obvious. You estimate P(Derek Powers employs Batman) as 20%. You have very little information about the other candidates, like Ferris Boyle, Bruce Wayne, Roland Daggett, Lucius Fox or Matches Malone, so you assign an equal 5% to everyone else.

In this case you should pick Derek Powers as your best guess when forced to name only one candidate (for example, if you are forced to send the book to someone today), but you should also be aware that your guess is 80% likely to be wrong. When making expected utility calculations, you should take Derek Powers more seriously than Lucius Fox, but only by 15 percentage points.

In other words, you should take the maximum a posteriori probability (MAP) hypothesis into account while not deluding yourself into thinking that you now understand everything, or nothing at all. The Derek Powers hypothesis probably isn't true; but it is useful.
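A minimal sketch of this in code, using the post's made-up numbers (17 suspects: Derek Powers at 20%, 5% each for the rest; the generic suspect labels are mine):

```python
# Toy posterior over Batman-identity suspects, numbers taken from the post.
suspects = {"Derek Powers": 0.20}
suspects.update({"Suspect %d" % i: 0.05 for i in range(1, 17)})  # the 16 other candidates

assert abs(sum(suspects.values()) - 1.0) < 1e-9   # it's a proper distribution

map_guess = max(suspects, key=suspects.get)        # maximum a posteriori choice
print(map_guess)                                   # Derek Powers
print(1 - suspects[map_guess])                     # 0.8 -- the best guess is still probably wrong
```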

Sometimes I find it easier to reframe the question from "which hypothesis is true?" to "which hypothesis is probable enough?". Now it's totally okay that your pet theory isn't probable, merely probable enough, so doubt becomes easier. Also, you are aware that your pet theory is likely to be wrong (and this is nothing to be sad about), so the alternatives come to mind more naturally.

These "probable enough" hypothesis can serve as a very concise summaries of state of your knowledge when you simultaneously outline the general sort of evidence you've observed, and stress that you aren't really sure. I like to think about it like a rough, qualitative and more System1-friendly variant of Likelihood ratio sharing.

Planning Fallacy

The original explanation of the planning fallacy (proposed by Kahneman and Tversky) is that people focus on the most optimistic scenario when asked about the typical one (instead of trying to take an Outside View). If you keep the distinction between "probable" and "probable enough" in mind, you can see this claim in a new light.

Because the most optimistic scenario is the most probable and the most typical one, in a certain sense.

The illustration, with numbers pulled out of thin air, goes like this: so, you want to visit a museum.

The first thing you need to do is to get dressed and take your keys and stuff. Usually (with 80% probability) you do this very quickly, but there is a weak possibility of your museum ticket having been devoured by an entropy monster living on your computer table.

The second thing is to catch the bus. Usually (p = 80%) the bus is on schedule, but sometimes it is too early or too late. After this, the bus could (20%) or could not (80%) get stuck in a traffic jam.

Finally, you need to find the museum building. You've been there once before, so you sorta remember the route, yet you could still get lost with 20% probability.

And there you have it: P(everything is fine) ≈ 40%, while the probability of any other specific scenario is 10% or less. "Everything is fine" is probable enough, yet likely to be false. Supposedly, humans pick the MAP hypothesis and then forget about every other scenario in order to save computation.
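A quick sketch of the arithmetic behind those numbers, using the four independent steps above, each going smoothly with probability 0.8:

```python
# Museum-trip example: four independent steps, each fine with p = 0.8.
p_fine = 0.8
n_steps = 4   # get ready, bus on schedule, no traffic jam, find the museum

p_all_fine = p_fine ** n_steps
print(round(p_all_fine, 2))              # 0.41 -- the single most likely scenario

# Any specific scenario with exactly one mishap is much less likely...
p_one_specific_mishap = (1 - p_fine) * p_fine ** (n_steps - 1)
print(round(p_one_specific_mishap, 2))   # 0.1

# ...yet "something goes wrong somewhere" is, in aggregate, the majority outcome.
print(round(1 - p_all_fine, 2))          # 0.59
```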

Also, "everything is fine" is a good description of your plan. If your friend asks you, "so how are you planning to get to the museum?", and you answer "well, I catch the bus, get stuck in a traffic jam for 30 agonizing minutes, and then just walk from here", your friend is going  to get a completely wrong idea about dangers of your journey. So, in a certain sense, "everything is fine" is a typical scenario. 

Maybe it isn't the human inability to pick the most likely scenario that should be blamed. Maybe it is the false assumption that "most likely == likely to be correct" which contributes to this ubiquitous error.

In this case you would be better off picking "something will go wrong, and I will be late" instead of "everything will be fine".

So, sometimes you are interested in the best specimen from your hypothesis space, sometimes you are interested in the most likely thingy (no matter how vague it is), and sometimes there are no shortcuts, and you have to do an actual expected utility calculation.

Study: In giving charity, let not your right hand...

3 homunq 22 August 2014 10:23PM

So, here's the study¹:

It's Remembrance Day in Canada. As any good Canadian knows, you're supposed to wear a poppy to show you support the veterans (it has something to do with Flanders Fields). As people enter a concourse at the university, a person there does one of three things: gives them a poppy to wear on their clothes; gives them an envelope to carry and tells them (truthfully) that there's a poppy inside; or gives them nothing. Then, after they've crossed the concourse, another person asks them if they want to put donations in a box to support Canadian war veterans.

Who do you think gives the most?

...

If you guessed that it's the people who got the poppy inside the envelope, you're right. 78% of them gave, for an overall average donation of $0.86. That compares to 58% of the people wearing the poppy, for an average donation of $0.34; and 56% of those with no poppy, for an average of $0.15.

Why did the envelope holders give the most? Unlike the no-poppy group, they had been reminded of the expectation of supporting veterans; but unlike the poppy-wearers, they hadn't been given an easy, cost-free means of demonstrating their support.

I think this research has obvious applications, both to fundraising and to self-hacking. It also validates the Bible quote (Matthew 6:3) which serves as the title of this article.

¹ The Nature of Slacktivism: How the Social Observability of an Initial Act of Token Support Affects Subsequent Prosocial Action; K Kristofferson, K White, J Peloza - Journal of Consumer Research, 2014

 

 

 

[LINK] Physicist Carlo Rovelli on Modern Physics Research

5 shminux 22 August 2014 09:46PM

A blog post in Scientific American, well worth reading. Rovelli is a researcher in Loop Quantum Gravity.

Some quotes:

Horgan: Do multiverse theories and quantum gravity theories deserve to be taken seriously if they cannot be falsified?

Rovelli: No.

Horgan: What’s your opinion of the recent philosophy-bashing by Stephen Hawking, Lawrence Krauss and Neil deGrasse Tyson?

Rovelli: Seriously: I think they are stupid in this.   I have admiration for them in other things, but here they have gone really wrong.  Look: Einstein, Heisenberg, Newton, Bohr…. and many many others of the greatest scientists of all times, much greater than the names you mention, of course, read philosophy, learned from philosophy, and could have never done the great science they did without the input they got from philosophy, as they claimed repeatedly.  You see: the scientists that talk philosophy down are simply superficial: they have a philosophy (usually some ill-digested mixture of Popper and Kuhn) and think that this is the “true” philosophy, and do not realize that this has limitations.

Horgan: Can science attain absolute truth?

 

Rovelli: I have no idea what “absolute truth” means. I think that science is the attitude of those who find funny the people saying they know something is absolute truth.  Science is the awareness that our knowledge is constantly uncertain.  What I know is that there are plenty of things that science does not understand yet. And science is the best tool found so far for reaching reasonably reliable knowledge.

Horgan: Do you believe in God?

Rovelli: No.  But perhaps I should qualify the answer, because like this it is bit too rude and simplistic. I do not understand what “to believe in God” means. The people that “believe in God” seem like Martians to me.  I do not understand them.  I suppose this means that I “do not believe in God”. If the question is whether I think that there is a person who has created Heavens and Earth, and responds to our prayers, then definitely my answer is no, with much certainty.

Horgan: Are science and religion compatible?

Rovelli: Of course yes: you can be great in solving Maxwell’s equations and pray to God in the evening.  But there is an unavoidable clash between science and certain religions, especially some forms of Christianity and Islam, those that pretend to be repositories of “absolute Truths.”

 

Weekly LW Meetups

2 FrankAdamek 22 August 2014 03:38PM

Conservation of Expected Jury Probability

9 jkaufman 22 August 2014 03:25PM

The New York Times has a calculator to explain how getting on a jury works. They have a slider at the top indicating how likely each of the two lawyers think you are to side with them, and as you answer questions it moves around. For example, if you select that your occupation is "blue collar" then it says "more likely to side with plaintiff" while "white collar" gives "more likely to side with defendant". As you give it more information the pointer labeled "you" slides back and forth, representing the lawyers' ongoing revision of their estimates of you. Let's see what this looks like.

[Screenshots from the calculator: the initial estimate, after selecting "Over 30", and after selecting "Under 30".]

For several other questions, however, the options aren't matched. If your household income is under $50k then it will give you "more likely to side with plaintiff" while if it's over $50k then it will say "no effect on either lawyer". This is not how conservation of expected evidence works: if learning something pushes you in one direction, then learning its opposite has to push you in the other.

Let's try this with some numbers. Say people's leanings are:

income | probability of siding with plaintiff | probability of siding with defendant
>$50k  | 50%                                  | 50%
<$50k  | 70%                                  | 30%
Before asking about your income, the lawyers' best guess is that you're equally likely to be earning >$50k as <$50k, because $50k is roughly the median [1]. This means they'd guess you're 60% likely to side with the plaintiff: half the people in your position earn >$50k and will be approximately evenly split, while the other half earn <$50k and would favor the plaintiff 70-30; averaging the two cases gives 60%.

So the lawyers' best guess for you is 60%, and then they ask the question. If you say ">$50k" they update their estimate for you down to 50%; if you say "<$50k" they update it up to 70%. "No effect on either lawyer" can't be an option here unless the question gives no information.
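A minimal sketch of the same arithmetic (numbers from the table above, variable names mine):

```python
# Conservation of expected evidence with the jury example's numbers.
p_over = 0.5                   # prior probability that you earn >$50k
p_plaintiff_given_over = 0.5
p_plaintiff_given_under = 0.7

prior = p_over * p_plaintiff_given_over + (1 - p_over) * p_plaintiff_given_under
print(prior)   # 0.6 -- the lawyers' estimate before asking about income

posterior_if_over = p_plaintiff_given_over    # 0.5: the answer ">$50k" pushes the estimate down
posterior_if_under = p_plaintiff_given_under  # 0.7: the answer "<$50k" pushes it up

# The probability-weighted average of the possible posteriors equals the prior,
# so one answer cannot move the estimate while the other has "no effect".
expected_posterior = p_over * posterior_if_over + (1 - p_over) * posterior_if_under
assert expected_posterior == prior
```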


[1] Almost; the median income in the US in 2012 was $51k. (pdf)

Memory is Everything

-3 Qwake 22 August 2014 04:48AM

I have found (there is some [evidence](http://mentalfloss.com/article/52586/why-do-our-best-ideas-come-us-shower) to suggest this) that showers are a great place to think. While I am taking a shower I find that I can think about things from a whole new perspective, and it's very refreshing. Well, today, while I was taking a shower, an interesting thought popped into my head: memory is everything. Your memory contains you; it contains your thoughts; it contains your own unique perception of reality. Imagine going to bed tonight and waking up with absolutely no memory of your past. Would you still consider that person yourself? There is no question that our memories/experiences influence our behavior in every possible way. If you had been born in a different environment with different stimuli, you would have responded to your environment differently and become a different person. How different? I don't want to get involved in the nature/nurture debate, but I think there is no question that humans are influenced by their environment. How are humans influenced by their environment? Through learning from past experiences, which are contained in memory. I'm getting off topic and I have no idea what my point is... So I propose a thought experiment!

 

Omega the supercomputer gives you 3 options. Option 1 is for you to pay Omega $1,000,000,000, and Omega will grant you unlimited utility potential for 1 week, during which Omega will basically cater to your every wish. You will have absolutely no memory of the experience after the week is up. Option 2 is for Omega to pay you $1,000,000,000, but you must be willing to suffer unlimited negative utility potential for a week (you will not be harmed physically or mentally; you will simply experience excruciating pain). You will also have absolutely no memory of this experience after the week (your subconscious will also not be affected). Finally, Option 3 is simply to refuse Options 1 and 2 and maintain the status quo.

 

At first glance, it may seem that Option 2 is simply not choosable. It seems insane to subject yourself to torture when you have the option of nirvana. But it requires more thought than that. If you compare Option 1 to Option 2 after the week is up, there is no difference between the options except that Option 2 leaves you $2,000,000,000 better off than Option 1. In both options you have absolutely no memory of the week. The question that I'm trying to put forward in this thought experiment is this: if you have no memory of an experience, does that experience still matter? Is it worth experiencing something for the experience alone, or is it the memory of an experience that matters? Those are some questions that I have been thinking about lately. Any feedback or criticism is appreciated.

One last thing: if you are interested in the concept and importance of memory, two excellent movies on the subject are [Memento](http://www.imdb.com/title/tt0209144/) and [Eternal Sunshine of the Spotless Mind](http://www.imdb.com/title/tt0338013/0). I know both of these movies aren't scientific, but I found them very intriguing and thought-provoking.

Fighting Biases and Bad Habits like Boggarts

29 palladias 21 August 2014 05:07PM

TL;DR: Building humor into your habits for spotting and correcting errors makes the fix more enjoyable, easier to talk about and receive social support, and limits the danger of a contempt spiral. 

 

One of the most reliably bad decisions I've made on a regular basis is the choice to stay awake (well, "awake") and on the internet past the point where I can get work done, or even have much fun.  I went through a spell where I even fell asleep on the couch more nights than not, unable to muster the will or judgement to get up and go downstairs to bed.

I could remember (even sometimes in the moment) that this was a bad pattern, but, the more tired I was, the more tempting it was to think that I should just buckle down and apply more willpower to be more awake and get more out of my computer time.  Going to bed was a solution, but it was hard for it not to feel (to my sleepy brain and my normal one) like a bit of a cop out.

Only two things helped me really keep this failure mode in check.  One was setting a hard bedtime (and beeminding it) as part of my sacrifice for Advent.   But the other key tool (which has lasted me long past Advent) is the gif below.

[gif: a kid falling asleep while trying to eat an ice cream cone]

The poor kid struggling to eat his ice cream cone, even in the face of his exhaustion, is hilarious.  And not too far off the portrait of me around 2am scrolling through my Feedly.

Thinking about how stupid or ineffective or insufficiently strong-willed I'm being makes it hard for me to do anything that feels like a retreat from my current course of action.  I want to master the situation and prove I'm stronger.  But catching on to the fact that my current situation (of my own making or not) is ridiculous, makes it easier to laugh, shrug, and move on.

I think the difference is that it's easy for me to feel contemptuous of myself when frustrated, and easy to feel fond when amused.

I've tried to strike the new emotional tone when I'm working on catching and correcting other errors.  (e.g "Stupid, you should have known to leave more time to make the appointment!  Planning fallacy!"  becomes "Heh, I guess you thought that adding two "trivially short" errands was a closed set, and must remain 'trivially short.'  That's a pretty silly error.")

In the first case, noticing and correcting an error feels punitive, since it's quickly followed by a hefty dose of flagellation, but the second comes with a quick laugh and an easier shift to a growth mindset framing. Funny stories about errors are also easier to tell, increasing the chance my friends can help catch me out next time, or that I'll be better at spotting the error just by keeping it fresh in my memory. Not to mention, in order to get the joke, I tend to look for a more specific cause of the error than stupid/lazy/etc.

As far as I can tell, it also helps that amusement is a pretty different feeling than the ones that tend to be active when I'm falling into error (frustration, anger, feeling trapped, impatience, etc).  So, for a couple of seconds at least, I'm out of the rut and now need to actively return to it to stay stuck. 

In the heat of the moment of anger/akrasia/etc is a bad time to figure out what's funny, but, if you're reflecting on your errors after the fact, in a moment of consolation, it's easier to go back armed with a helpful reframing, ready to cast Riddikulus!

 

Crossposted from my personal blog, Unequally Yoked.

Another type of intelligence explosion

15 Stuart_Armstrong 21 August 2014 02:49PM

I've argued that we might have to worry about dangerous non-general intelligences. In a series of back and forth with Wei Dai, we agreed that some level of general intelligence (such as that humans seem to possess) seemed to be a great advantage, though possibly one with diminishing returns. Therefore a dangerous AI could be one with great narrow intelligence in one area, and a little bit of general intelligence in others.

The traditional view of an intelligence explosion is that of an AI that knows how to do X, suddenly getting (much) better at doing X, to a level beyond human capacity. Call this the gain of aptitude intelligence explosion. We can prepare for that, maybe, by tracking the AI's ability level and seeing if it shoots up.

But the example above hints at another kind of potentially dangerous intelligence explosion. That of a very intelligent but narrow AI that suddenly gains intelligence across other domains. Call this the gain of function intelligence explosion. If we're not looking specifically for it, it may not trigger any warnings - the AI might still be dumber than the average human in other domains. But this might be enough, when combined with its narrow superintelligence, to make it deadly. We can't ignore the toaster that starts babbling.

An example of deadly non-general AI

11 Stuart_Armstrong 21 August 2014 02:15PM

In a previous post, I mused that we might be focusing too much on general intelligences, and that the route to powerful and dangerous intelligences might go through much more specialised intelligences instead. Since it's easier to reason with an example, here is a potentially deadly narrow AI (partially due to Toby Ord). Feel free to comment and improve on it, or suggest you own example.

It's the standard "pathological goal AI", but only a narrow intelligence. Imagine a medicine-designing super-AI with the goal of reducing human mortality in 50 years - i.e. massively reducing the human population in the next 49 years. It's a narrow intelligence, so it has access only to a huge amount of human biological and epidemiological research. It must get its drugs past FDA approval; this requirement is encoded as certain physical reactions (no deaths, some health improvements) in people taking the drugs over the course of a few years.

Then it seems trivial for it to design a drug that has no negative impact for the first few years and then causes sterility or death. Since it wants to spread this to as many humans as possible, it would probably design something that interacts with common human pathogens - colds, flus - in order to spread the impact, rather than affecting only those who took the drug.

Now, this narrow intelligence is less threatening than if it had general intelligence - where it could also plan for possible human countermeasures and such - but it seems sufficiently dangerous on its own that we can't afford to worry only about general intelligences. Some of the "AI superpowers" that Nick mentions in his book (intelligence amplification, strategizing, social manipulation, hacking, technology research, economic productivity) could be enough to cause devastation on their own, even if the AI never developed other abilities.

We still could be destroyed by a machine that we outmatch in almost every area.

Why we should err in both directions

7 owencb 21 August 2014 11:10AM

Crossposted from the Global Priorities Project

This is an introduction to the principle that when we are making decisions under uncertainty, we should choose so that we may err in either direction. We justify the principle, explore the relation with Umeshisms, and look at applications in priority-setting.

Some trade-offs

How much should you spend on your bike lock? A cheaper lock saves you money at the cost of security.

How long should you spend weighing up which charity to donate to before choosing one? Longer means less time for doing other useful things, but you’re more likely to make a good choice.

How early should you aim to arrive at the station for your train? Earlier means less chance of missing it, but more time hanging around at the station.

Should you be willing to undertake risky projects, or stick only to safe ones? The safer your threshold, the more confident you can be that you won’t waste resources, but some of the best opportunities may have a degree of risk, and you might be able to achieve a lot more with a weaker constraint.

The principle

We face trade-offs and make judgements all the time, and inevitably we sometimes make bad calls. In some cases we should have known better; sometimes we are just unlucky. As well as trying to make fewer mistakes, we should try to minimise the damage from the mistakes that we do make.

Here’s a rule which can be useful in helping you do this:

When making decisions that lie along a spectrum, you should choose so that you think you have some chance of being off from the best choice in each direction.

We could call this principle erring in both directions. It might seem counterintuitive -- isn’t it worse to not even know what direction you’re wrong in? -- but it’s based on some fairly straightforward economics. I give a non-technical sketch of a proof at the end, but the essence is: if you’re not going to be perfect, you want to be close to perfect, and this is best achieved by putting your actual choice near the middle of your error bar.

So the principle suggests that you should aim to arrive at the station with a bit of time wasted, but not so much that you won’t miss the train even if something goes wrong.

Refinements

Just saying that you should have some chance of erring in either direction isn’t enough to tell you what you should actually choose. It can be a useful warning sign in the cases where you’re going substantially wrong, though, and as these are the most important cases to fix it has some use in this form.

A more careful analysis would tell you that at the best point on the spectrum, a small change in your decision produces about as much expected benefit as expected cost. In ideal circumstances we can use this to work out exactly where on the spectrum we should be (in some cases more than one point may fit this, so you need to compare them directly). In practice it is often hard to estimate the marginal benefits and costs well enough for this to be a useful approach. So although it is theoretically optimal, you will only sometimes want to try to apply this version.

Say in our train example that you find missing the train to be as bad as 100 minutes of waiting at the station. Then you want to leave enough time that an extra minute of safety margin gives you a 1% reduction in the absolute chance of missing the train.

For instance, say your options in the train case look like this:

Safety margin (min)         | 1  | 2  | 3  | 4 | 5 | 6 | 7 | 8   | 9   | 10  | 11  | 12  | 13  | 14  | 15
Chance of missing train (%) | 50 | 30 | 15 | 8 | 5 | 3 | 2 | 1.5 | 1.1 | 0.8 | 0.6 | 0.4 | 0.3 | 0.2 | 0.1

Then the optimal safety margin to leave is somewhere between 6 and 7 minutes: this is where the marginal minute leads to a 1% reduction in the chance of missing the train.
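As a check, here is a minimal sketch that picks the margin minimizing expected cost, treating a missed train as costing 100 minutes (the numbers are the table's; the variable names are mine):

```python
# Expected cost of each safety margin: minutes spent waiting plus
# 100 minutes times the chance of missing the train.
p_miss = {1: .50, 2: .30, 3: .15, 4: .08, 5: .05, 6: .03, 7: .02, 8: .015,
          9: .011, 10: .008, 11: .006, 12: .004, 13: .003, 14: .002, 15: .001}
MISS_COST = 100  # minutes

expected_cost = {m: m + MISS_COST * p for m, p in p_miss.items()}
best = min(expected_cost, key=expected_cost.get)
print(best, expected_cost[best])            # 6 minutes, expected cost 9.0
print(expected_cost[6], expected_cost[7])   # 9.0 9.0 -- the optimum sits between 6 and 7
```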

Predictions and track records

So far, we've phrased the idea in terms of the predicted outcomes of actions. Another, more well-known perspective on the idea looks at events that have already happened. For example: "If you've never missed a flight, you're spending too much time in airports."

These formulations, dubbed 'Umeshisms', only work for decisions that you make multiple times, so that you can gather a track record.

An advantage of applying the principle to track records is that it’s more obvious when you’re going wrong. Introspection can be hard.

You can even apply the principle to track records of decisions which don't look like they are choosing from a spectrum. For example, it is given as advice in the game of bridge: if you don't sometimes double the stakes on hands which eventually go against you, you're not doubling enough. Although doubling or not is a binary choice, erring in both directions still works because 'how often to double' is a trait that roughly falls on a spectrum.

Failures

There are some circumstances where the principle may not apply.

First, if you think the correct point is at one extreme of the available spectrum. For instance nobody says ‘if you’re not worried about going to jail, you’re not committing enough armed robberies’, because we think the best number of armed robberies to commit is probably zero.

Second, if the available points in the spectrum are discrete and few in number. Take the example of the bike locks. Perhaps there are only three options available: the Cheap-o lock (£5), the Regular lock (£20), and the Super lock (£50). You might reasonably decide on the Regular lock, thinking that maybe the Super lock is better, but that the Cheap-o one certainly isn’t. When you buy the Regular lock, you’re pretty sure you’re not buying a lock that’s too tough. But since only two of the locks are good candidates, there is no decision you could make which tries to err in both directions.

Third, in the case of evaluating track records, it may be that your record isn’t long enough to expect to have seen errors in both directions, even if they should both come up eventually. If you haven’t flown that many times, you could well be spending the right amount of time -- or even too little -- in airports, even if you’ve never missed a flight.

Finally, a warning about a case where the principle is not supposed to apply. It shouldn’t be applied directly to try to equalise the probability of being wrong in either direction, without taking any account of magnitude of loss. So for example if someone says you should err on the side of caution by getting an early train to your job interview, it might look as though that were in conflict with the idea of erring in both directions. But normally what’s meant is that you should have a higher probability of failing in one direction (wasting time by taking an earlier train than needed), because the consequences of failing in the other direction (missing the interview) are much higher.

Conclusions and applications to prioritisation

Seeking to err in both directions can provide a useful tool in helping to form better judgements in uncertain situations. Many people may already have internalised key points, but it can be useful to have a label to facilitate discussion. Additionally, having a clear principle can help you to apply it in cases where you might not have noticed it was relevant.

How might this principle apply to priority-setting? It suggests that:

  • You should spend enough time and resources on the prioritisation itself that you think some of the time may have been wasted (for example you should spend a while at the end without changing your mind much), but not so much that you are totally confident you have the right answer.
  • If you are unsure what discount rate to use, you should choose one so that you think that it could be either too high or too low.
  • If you don’t know how strongly to weigh fragile cost-effectiveness estimates against more robust evidence, you should choose a level so that you might be over- or under-weighing them.
  • When you are providing a best-guess estimate, you should choose a figure which could plausibly be wrong either way.

And one on track records:

  • Suppose you’ve made lots of grants. Then if you’ve never backed a project which has failed, you’re probably too risk-averse in your grantmaking.

Questions for readers

Do you know any other useful applications of this idea? Do you know anywhere where it seems to break? Can anyone work out easier-to-apply versions, and the circumstances in which they are valid?

Appendix: a sketch proof of the principle

Assume the true graph of value (on the vertical axis) against the decision you make (on the horizontal axis, representing the spectrum) is smooth, looking something like this: [figure: a smooth curve rising to a single peak at d and falling away on either side]

The highest value is achieved at d, so this is where you’d like to be. But assume you don’t know quite where d is. Say your best guess is that d=g. But you think it’s quite possible that d>g, and quite unlikely that d<g. Should you choose g?

Suppose we compare g to g’, which is just a little bit bigger than g. If d>g, then switching from g to g’ would be moving up the slope on the left of the diagram, which is an improvement. If d=g then it would be better to stick with g, but it doesn’t make so much difference because the curve is fairly flat at the top. And if g were bigger than d, we’d be moving down the slope on the right of the diagram, which is worse for g’ -- but this scenario was deemed unlikely.

Aggregating the three possibilities, we found that two of them were better for sticking with g, but in one of these (d=g) it didn’t matter very much, and the other (d<g) just wasn’t very likely. In contrast, the third case (d>g) was reasonably likely, and noticeably better for g’ than g. So overall we should prefer g’ to g.

In fact we’d want to continue moving until the marginal upside from going slightly higher was equal to the marginal downside; this would have to involve a non-trivial chance that we are going too high. So our choice should have a chance of failure in either direction. This completes the (sketch) proof.
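The same marginal condition in one line, assuming (as the sketch does) that the value curve V is smooth and single-peaked at d; the notation is mine, not the post's:

```latex
% First-order condition at the optimal choice x^*:
\frac{d}{dx}\,\mathbb{E}\big[V(x)\big]\Big|_{x = x^*} \;=\; \mathbb{E}\big[V'(x^*)\big] \;=\; 0
% Since V'(x) > 0 whenever x < d and V'(x) < 0 whenever x > d, the expectation can only
% vanish if P(d > x^*) > 0 and P(d < x^*) > 0, i.e. the optimum leaves some chance of
% erring in either direction.
```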

Note: There was an assumption of smoothness in this argument. I suspect it may be possible to get slightly stronger conclusions or work from slightly weaker assumptions, but I’m not certain what the most general form of this argument is. It is often easier to build a careful argument in specific cases.

Acknowledgements: thanks to Ryan Carey, Max Dalton, and Toby Ord for useful comments and suggestions.

Productivity thoughts from Matt Fallshaw

11 John_Maxwell_IV 21 August 2014 05:05AM

At the 2014 Effective Altruism Summit in Berkeley a few weeks ago, I had the pleasure of talking to Matt Fallshaw about the things he does to be more effective.  Matt is a founder of Trike Apps (the consultancy that built Less Wrong), a founder of Bellroy, and a polyphasic sleeper.  Notes on our conversation follow.

Matt recommends having a system for acquiring habits.  He recommends separating collection from processing; that is, if you have an idea for a new habit you want to acquire, you should record the idea at the time you have it and then think about actually implementing it at some future time.  Matt recommends doing this through a weekly review.  He recommends vetting your collection to see what habits seem actually worth acquiring, then for those habits you actually want to acquire, coming up with a compassionate, reasonable plan for how you're going to acquire the habit.

(Previously on LW: How habits work and how you may control them; Common failure modes in habit formation.)

The most difficult kind of habit for me to acquire is that of random-access situation-response habits, e.g. "if I'm having a hard time focusing, read my notebook entry that lists techniques for improving focus".  So I asked Matt if he had any habit formation advice for this particular situation.  Matt recommended trying to actually execute the habit I wanted as many times as possible, even in an artificial context.  Steve Pavlina describes the technique here.  Matt recommends making your habit execution as emotionally salient as possible.  His example: Let's say you're trying to become less of a prick.  Someone starts a conversation with you and you notice yourself experiencing the kind of emotions you experience before you start acting like a prick.  So you spend several minutes explaining to them the episode of disagreeableness you felt coming on and how you're trying to become less of a prick before proceeding with the conversation.  If all else fails, Matt recommends setting a recurring alarm on your phone that reminds you of the habit you're trying to acquire, although he acknowledges that this can be expensive.

Part of your plan should include a check to make sure you actually stick with your new habit.  But you don't want a check that's overly intrusive.  Matt recommends keeping an Anki deck with a card for each of your habits.  Then during your weekly review session, you can review the cards Anki recommends for you.  For each card, you can rate the degree to which you've been sticking with the habit it refers to and do something to revitalize the habit if you haven't been executing it.  Matt recommends writing the cards in a form of a concrete question, e.g. for a speed reading habit, a question could be "Did you speed read the last 5 things you read?"  If you haven't been executing a particular habit, check to see if it has a clear, identifiable trigger.

Ideally your weekly review will come at a time you feel particularly "agenty" (see also: Reflective Control).  So you may wish to schedule it at a time during the week when you tend to feel especially effective and energetic.  Consuming caffeine before your weekly review is another idea.

When running into seemingly intractable problems related to your personal effectiveness, habits, etc., Matt recommends taking a step back to brainstorm and trying to think of creative solutions.  He says that oftentimes people will write off a task as "impossible" if they aren't able to come up with a solution in 30 seconds.  He recommends setting a 5-minute timer.

In terms of habits worth acquiring, Matt is a fan of speed reading, Getting Things Done, and the Theory of Constraints (especially useful for larger projects).

Matt has found that through aggressive habit acquisition, he's been able to experience a sort of compound return on the habits he's acquired: by acquiring habits that give him additional time and mental energy, he's been able to reinvest some of that additional time and mental energy in to the acquisition of even more useful habits.  Matt doesn't think he's especially smart or high-willpower relative to the average person in the Less Wrong community, and credits this compounding for the reputation he's acquired for being a badass.

Anthropics doesn't explain why the Cold War stayed Cold

5 KnaveOfAllTrades 20 August 2014 07:23PM

(Epistemic status: There are some lines of argument that I haven’t even started here, which potentially defeat the thesis advocated here. I don’t go into them because this is already too long or I can’t explain them adequately without derailing the main thesis. Similarly some continuations of chains of argument and counterargument begun here are terminated in the interest of focussing on the lower-order counterarguments. Overall this piece probably overstates my confidence in its thesis. It is quite possible this post will be torn to pieces in the comments—possibly by my own aforementioned elided considerations. That’s good too.)

I

George VI, King of the United Kingdom, had five siblings. That is, the father of current Queen Elizabeth II had as many siblings as on a typical human hand. (This paragraph is true, and is not a trick; in particular, the second sentence of this paragraph really is trying to disambiguate and help convey the fact in question and relate it to prior knowledge, rather than introduce an opening for some sleight of hand so I can laugh at you later, or whatever fear such a suspiciously simple proposition might engender.)

Let it be known.

II

Exactly one of the following stories is true:

Story One

Recently I hopped on Facebook and saw the following post:

“I notice that I am confused about why a nuclear war never occurred. Like, I think (knowing only the very little I know now) that if you had asked me, at the start of the Cold War or something, the probability that it would eventually lead to a nuclear war, I would've said it was moderately likely. So what's up with that?”


The post had 14 likes. In the comments, the most-Liked explanation was:

“anthropically you are considerably more likely to live in a world where there never was a fullscale nuclear war”

That comment had 17 Likes. The second-most-liked comment that offered an explanation had 4 Likes.

Story Two

continue reading »

Thought experiments on simplicity in logical probability

3 Manfred 20 August 2014 05:25PM

A common feature of many proposed logical priors is a preference for simple sentences over complex ones. This is sort of like an extension of Occam's razor into math. Simple things are more likely to be true. So, as it is said, "why not?"

 

Well, the analogy has some wrinkles - unlike hypothetical rules for the world, logical sentences do not form a mutually exclusive set. Instead, for every sentence A there is a sentence not-A with pretty much the same complexity, and probability 1-P(A). So you can't make the probability smaller for all complex sentences, because their negations are also complex sentences! If you don't have any information that discriminates between them, A and not-A will both get probability 1/2 no matter how complex they get.

But if our agent knows something that breaks the symmetry between A and not-A, like that A belongs to a mutually exclusive and exhaustive set of sentences with differing complexities, then it can assign higher probabilities to simpler sentences in this set without breaking the rules of probability. Except, perhaps, the rule about not making up information.

The question: is the simpler answer really more likely to be true than the more complicated answer, or is this just a delusion? If it is more likely, is that for some ontologically basic reason, or for a contingent and explainable reason?

 

There are two complications to draw your attention to. The first is in what we mean by complexity. Although it would be nice to use the Kolmogorov complexity of any sentence, which is the length of the shortest program that prints the sentence, such a thing is uncomputable by the kind of agent we want to build in the real world. The only thing our real-world agent is assured of seeing is the length of the sentence as-is. We can also find something in between Kolmogorov complexity and length by doing a brief search for short programs that print the sentence - this is the meaning usually intended in this article, and I'll call it "apparent complexity."
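For intuition only, here is one crude, computable stand-in for the idea: compressed length. This is not the search-based definition above, just an easy-to-run proxy; the function name is mine:

```python
# Crude proxy for "apparent complexity": the length of a sentence after compression.
# A real bounded agent would instead run a brief search for short programs.
import zlib

def rough_apparent_complexity(sentence: str) -> int:
    return len(zlib.compress(sentence.encode("utf-8")))

print(rough_apparent_complexity("0" * 100))           # a very regular string compresses well
print(rough_apparent_complexity("3141592653589793"))  # short, but less regular to a generic compressor
```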

The second complication is in what exactly a simplicity prior is supposed to look like. In the case of Solomonoff induction the shape is exponential - more complicated hypotheses are exponentially less likely. But why not a power law? Why not even a Poisson distribution? Does the difficulty of answering this question mean that thinking that simpler sentences are more likely is a delusion after all?

 

Thought experiments:

1: Suppose our agent knew from a trusted source that some extremely complicated sum could only be equal to A, or to B, or to C, which are three expressions of differing complexity. What are the probabilities?

 

Commentary: This is the most sparse form of the question. Not very helpful regarding the "why," but handy to stake out the "what." Do the probabilities follow a nice exponential curve? A power law? Or, since there are just the three known options, do they get equal consideration?

This is all based off intuition, of course. What does intuition say when various knobs of this situation are tweaked - if the sum is of unknown complexity, or of complexity about that of C? If there are a hundred options, or countably many? Intuitively speaking, does it seem like favoring simpler sentences is an ontologically basic part of your logical prior?
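To make the "what" concrete, here is a toy comparison of the three shapes mentioned above (exponential, power law, equal consideration) over answers A, B, C with invented complexities; none of these numbers come from the post:

```python
# Illustrative priors over three mutually exclusive answers with made-up complexities.
complexities = {"A": 3, "B": 5, "C": 9}

def normalize(weights):
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

exponential = normalize({k: 2.0 ** -c for k, c in complexities.items()})
power_law   = normalize({k: c ** -2.0 for k, c in complexities.items()})
uniform     = normalize({k: 1.0 for k in complexities})

for name, prior in [("exponential", exponential), ("power law", power_law), ("uniform", uniform)]:
    print(name, {k: round(p, 3) for k, p in prior.items()})
```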

 

2: Consider subsequences of the digits of pi. If I give you a pair (n, m), you can tell me the m digits following the nth digit of pi. So if I start a sentence like "the subsequence of digits of pi (10^100, 10^2) = ", do you expect to see simpler strings of digits on the right side? Is this a testable prediction about the properties of pi?

 

Commentary: We know that there is always a short-ish program to produce the sequences, which is just to compute the relevant digits of pi. This sets a hard upper bound on the possible Kolmogorov complexity of sequences of pi (that grows logarithmically as you increase m and n), and past a certain m this will genuinely start restricting complicated sequences, and thus favoring "all zeros" - or does it?
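One way to state that logarithmic bound explicitly (a standard Kolmogorov-complexity argument; c is a constant covering the fixed digit-extraction routine, and the notation is mine):

```latex
% A program that prints the m digits of \pi following position n only needs to encode
% n, m, and a fixed pi-digit routine, so
K\big(s_{n,m}\big) \;\le\; \log_2 n + \log_2 m + c
```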

After all, this is weak tea compared to an exponential simplicity prior, for which the all-zero sequence would be hojillions of times more likely than a messy one. On the other hand, an exponential curve allows sequences with higher Kolmogorov complexity than the computation of the digits of pi.

Does the low-level view outlined in the first paragraph above demonstrate that the exponential prior is bunk? Or can you derive one from the other with appropriate simplifications (keeping in mind Kolmogorov complexity vs. apparent complexity)? Does pi really contain more long simple strings than expected, and if not, what's going on with our prior?

 

3: Suppose I am writing an expression that I want to equal some number you know - that is, the sentence "my expression = your number" should be true. If I tell you the complexity of my expression, what can you infer about the likelihood of the above sentence?

 

Commentary: If we had access to the Kolmogorov complexity of your number, then we could completely rule out answers that were too K-simple to work. With only an approximation, it seems like we can still say that simple answers are less likely, up to a point. Then as my expression gets more and more complicated, there are more and more available wrong answers (and, stepping outside the system a bit, it becomes less and less likely that I know what I'm doing), and so the probability goes down.

In the limit that my expression is much more complex than your number, does an elegant exponential distribution emerge from underlying considerations?
