
Open Thread, May 25 - May 31, 2015

1 Gondolinian 25 May 2015 12:00AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

How do we learn from errors?

1 ChristianKl 24 May 2015 09:56PM

Mark Friedenbach's post Leaving LessWrong for a more rational life makes a few criticisms of the way LW approaches rationality: it's not focused enough on empiricism. While he grants that lip service is paid to empiricism, Mark argues that LW isn't empirical enough.

Part of empiricism is learning from errors. How do you deal with learning from your own errors? What was the last substantial error you made that made you learn and think differently about the issue in question?

Do you have a framework for thinking about the issue of learning through errors? Do you have additional questions regarding the issue of learning through errors that are worth exploring?

A resolution to the Doomsday Argument.

-1 Eitan_Zohar 24 May 2015 05:58PM

A self-modifying AI is built to serve humanity. The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare. To solve the problem, they direct the AI to create billions of simulated humanities in the hope that this will serve as a Schelling point for them, and make their own universe almost certainly simulated.

Plausible?

Prior probabilities and statistical significance

-1 [deleted] 24 May 2015 10:00AM

How does using priors affect the concept of statistical significance? The scientific convention is to use a 5% threshold for significance, no matter whether the hypothesis has been given a low or a high prior probability.

If we momentarily disregard the fact that there might be general methodological issues with using statistical significance, how does the use of priors specifically affect the appropriateness of using statistical significance?
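One way to make the question concrete: treat a significance test as a binary screen with some statistical power, and apply Bayes' theorem. A minimal Python sketch (the 80% power figure and the example priors are arbitrary assumptions, not conventions):

```python
def posterior_given_significant(prior, alpha=0.05, power=0.8):
    """P(hypothesis true | result significant), treating the test as a
    binary screen with false-positive rate alpha and true-positive rate power."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior:.2f} -> "
          f"P(true | p < 0.05) = {posterior_given_significant(prior):.2f}")
```

Under these assumptions a significant result corresponds to roughly 94%, 64%, or 14% posterior probability depending on the prior, so a fixed 5% threshold certifies very different things for plausible and implausible hypotheses.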

[Link] Persistence of Long-Term Memory in Vitrified and Revived C. elegans worms

17 Rangi 24 May 2015 03:43AM

http://online.liebertpub.com/doi/pdf/10.1089/rej.2014.1636

This is a paper published in 2014 by Natasha Vita-More and Daniel Barranco, both associated with the Alcor Research Center (ARC).

The abstract:

Can memory be retained after cryopreservation? Our research has attempted to answer this long-standing question by using the nematode worm Caenorhabditis elegans (C. elegans), a well-known model organism for biological research that has generated revolutionary findings but has not been tested for memory retention after cryopreservation. Our study’s goal was to test C. elegans’ memory recall after vitrification and reviving. Using a method of sensory imprinting in the young C. elegans we establish that learning acquired through olfactory cues shapes the animal’s behavior and the learning is retained at the adult stage after vitrification. Our research method included olfactory imprinting with the chemical benzaldehyde (C₆H₅CHO) for phase-sense olfactory imprinting at the L1 stage, the fast cooling SafeSpeed method for vitrification at the L2 stage, reviving, and a chemotaxis assay for testing memory retention of learning at the adult stage. Our results in testing memory retention after cryopreservation show that the mechanisms that regulate the odorant imprinting (a form of long-term memory) in C. elegans have not been modified by the process of vitrification or by slow freezing.

[Link] Mainstream media writing about rationality-informed approaches

3 Gleb_Tsipursky 24 May 2015 01:18AM

Wanted to share two articles published in mainstream media, namely Ohio newspapers, about how rationality-informed strategies help people improve their lives.

This one is about improving one's thinking, feeling, and behavior patterns overall, and especially one's highest-order goals, presented as "meaning and purpose."

This one is about using rationality to deal with mental illness, and specifically highlights the strategy of "in what world do I want to live?"

I know about these two articles because I was personally involved in their publication as part of my broader project of spreading rationality widely. What other articles are there that others know about?

[Link] Throwback Thursday: Are asteroids dangerous?

1 Gunnar_Zarncke 23 May 2015 08:00AM

Throwback Thursday: Are asteroids dangerous? by StartsWithABang:

When it comes to risk assessment, there's one type that humans are notoriously bad at: the very low-frequency but high-consequence risks and rewards. It's why so many of us are so eager to play the lottery, and simultaneously why we're catastrophically afraid of ebola and plane crashes, when we're far more likely to die from something mundane, like getting hit by a truck. One of the examples where science and this type of fear-based fallacy intersect is the science of asteroid strikes. With all we know about asteroids today, here's the actual risk to humanity, and it's much lower than anyone cares to admit. -- summary from slashdot.

Weekly LW Meetups

2 FrankAdamek 22 May 2015 03:18PM

This summary was posted to LW Main on May 15th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


Communicating via writing vs. in person

4 adamzerner 22 May 2015 04:58AM

There's a lot that I really like about communicating via writing. Communicating in person is sometimes frustrating for me, and communicating via writing addresses a lot of those frustrations:

1) I often want to make a point that depends on the other person knowing X. In person, if I always paused and did the following, it'd add a lot of friction to conversations: "Wait, do you know X? If yes, good, I'll continue. If no, let me think about how to explain it briefly. Or do you want me to explain it in more depth? Or do you want to try to proceed without knowing X and see how it goes?". But if I don't do so, then it risks miscommunication (because the other person may not have the dependency X).

In writing, I could just link to an article. If the other person doesn't have the dependency, they have options. They could try to proceed without knowing X and see how it goes. If it doesn't work out, they could come back and read the link. Or they could read the link right away. And in reading the link, they have their choice of how deeply they want to read. Ie. they could just skim if they want to.

Alternatively, if you don't have something to link to, you could add a footnote. I think that a UI like Medium's side comments is much preferable to putting the footnotes at the bottom of the page. I hope to see this adopted across the internet some time in the next 5 years or so.

2) I think that in general, being precise about what you're saying is actually quite difficult/time consuming*. For example, I don't really mean what I just said. I'm actually not sure how often that it's difficult/time consuming to be precise with what you're saying. And I'm not sure how often it's useful to be precise about what you're saying (or really, more precise...whatever that means...). I guess what I really mean is that it happens often enough where it's a problem. Or maybe just that for me, it happens enough where I find it to be a problem.

Anyway, I find that putting quotes around what I say is a nice way to mitigate this problem.

Ex. It's "in my nature" to be strategic.

The quotes show that the words inside them aren't precisely what I mean, but that they're close enough to what I mean that they should communicate the gist of it. I sense that this communication often happens through empathetic inference.

*I also find that I feel internal and external pressure to be consistent with what I say, even if I know I'm oversimplifying. This is a problem and has negatively affected me. I recently realized what a big problem it is, and will try very hard to address it (or really, I plan on trying very hard but I'm not sure blah blah blah blah blah...).

Note 1: I find internal conversation/thinking as well as interpersonal conversation to be "chaotic". (What follows is rant-y and not precisely what I believe. But being precise would take too long, and I sense that the rant-y tone helps to communicate without detracting from the conversation by being uncivil.) It seems that a lot of other people (much less so on LW) have more "organized" thinking patterns. I can't help but think that that's BS. Well, maybe they do, but I sense that they shouldn't. Reality is complicated. People seem to oversimplify things a lot, and to think in terms of black-white. When you do that, I could see how one's thoughts could be "organized". But when you really try to deal with the complexities of reality... I don't understand how you could simultaneously just go through life with organized thoughts.

Note 2: I sense that this post somewhat successfully communicates my internal thought process and how chaotic it could be. I'm curious how this compares to other people. I should note that I was diagnosed with a mild-moderate case of ADHD when I was younger. But that was largely based off of iffy reporting from my teachers. They didn't realize how much conscious thought motivated my actions. Ie. I often chose to do things that seemed impulsive because I judged them to be worth it. But given that my mind is always racing so fast, and that I have a good amount of trouble deciding to pay attention to anything other than the most interesting thing to me, I'd guess that I do have ADHD to some extent. I'm hesitant to make that claim without ever having been inside someone else's mind before though (how incredibly incredibly cool would that be!!!) - appearances could be deceiving.

3) It's easier to model and traverse the structure of a conversation/argument when it's in writing. You could break things into nested sections (which isn't always a perfect way to model the structure, but is often satisfactory). In person, I find that it's often quite difficult for two people (let alone multiple people) to stay in sync with the structure of the conversation. The outcome of this is that people rarely veer away from extremely superficial conversations. Granted, I haven't had the chance to talk to many smart people in real life, and so I don't have much data on how deep a conversation between two smart people could get. My guess is that it could get a lot deeper than what I'm used to, but that it'd be pretty hard to make real progress on a difficult topic without outlining and diagramming things out. (Note: I don't mean "deep as in emotional", I mean "deep as in nodes in a graph")


There are also a lot of other things to say about communicating in writing vs. in person, including:

  • The value of the subtle things like nonverbal communication and pauses.
  • The value of a conversation being continuous. When it isn't, you have to download the task over and over again.
  • How much time you have to think things through before responding.
  • I sense that people are way more careful in writing, especially when there's a record of it (rather than, say, a PM).

This is a discussion post, so feel free to comment on these things too (or anything else in the ballpark).

Leaving LessWrong for a more rational life

28 Mark_Friedenbach 21 May 2015 07:24PM

You are unlikely to see me posting here again, after today. There is a saying here that politics is the mind-killer. My heretical realization lately is that philosophy, as generally practiced, can also be mind-killing.

As many of you know, I am, or was, running a twice-monthly Rationality: AI to Zombies reading group. One of the bits I desired to include in each reading group post was a collection of contrasting views. To research such views I've found myself listening during my commute to talks given by other thinkers in the field, e.g. Nick Bostrom, Anders Sandberg, and Ray Kurzweil, and people I feel are doing “ideologically aligned” work, like Aubrey de Grey, Christine Peterson, and Robert Freitas. Some of these were talks I had seen before, or generally views I had been exposed to in the past. But looking through the lens of learning and applying rationality, I came to a surprising (to me) conclusion: it was the philosophical thinkers that demonstrated the largest and most costly mistakes. On the other hand, de Grey and others who are primarily working on the scientific and/or engineering challenges of singularity and transhumanist technologies were far less likely to subject themselves to epistemic mistakes of significant consequence.

Philosophy as the anti-science...

What sort of mistakes? Most often, reasoning by analogy. To cite a specific example, one of the core underlying assumptions of the singularity interpretation of super-intelligence is that just as a chimpanzee would be unable to predict what a human intelligence would do or how we would make decisions (aside: how would we know? Were any chimps consulted?), we would be equally inept in the face of a super-intelligence. This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory, may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe—that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just as string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all.

This post is not about the singularity nature of super-intelligence—that was merely my choice of an illustrative example of a category of mistakes made too often by those with a philosophical background rather than one in the empirical sciences: reasoning by analogy instead of building and analyzing predictive models. The fundamental mistake here is that reasoning by analogy is not in itself a sufficient explanation for a natural phenomenon, because it says nothing about the context sensitivity or insensitivity of the original example, and under what conditions it may or may not hold true in a different situation.

A successful physicist or biologist or computer engineer would have approached the problem differently. A core part of being successful in these areas is knowing when it is that you have insufficient information to draw conclusions. If you don't know what you don't know, then you can't know when you might be wrong. To be an effective rationalist, it is often not important to answer “what is the calculated probability of that outcome?” The better first question is “what is the uncertainty in my calculated probability of that outcome?” If the uncertainty is too high, then the data supports no conclusions. And the way you reduce uncertainty is that you build models for the domain in question and empirically test them.
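One way to make the "uncertainty in my calculated probability" question concrete is to track a distribution over the probability rather than a point estimate. A minimal sketch (the counts are hypothetical, and the Beta-binomial model is a standard illustration, not necessarily what the author has in mind):

```python
def beta_mean_sd(successes, failures):
    """Posterior mean and standard deviation of an unknown probability,
    starting from a uniform Beta(1, 1) prior."""
    a, b = successes + 1, failures + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

print(beta_mean_sd(2, 2))      # (0.50, ~0.19): same estimate, little support
print(beta_mean_sd(200, 200))  # (0.50, ~0.02): same estimate, far firmer
```

Both agents would report "about 50%", but only the second has data that supports a conclusion, which is the distinction the paragraph above is drawing.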

The lens that sees its own flaws...

Coming back to LessWrong and the sequences. In the preface to Rationality, Eliezer Yudkowsky says his biggest regret is that he did not make the material in the sequences more practical. The problem is in fact deeper than that. The art of rationality is the art of truth seeking, and empiricism is part and parcel of truth seeking. There's lip service done to empiricism throughout, but in all the “applied” sequences relating to quantum physics and artificial intelligence it appears to be forgotten. We get instead definitive conclusions drawn from thought experiments only. It is perhaps not surprising that these sequences seem the most controversial.

I have for a long time been concerned that those sequences in particular promote some ungrounded conclusions. I had thought that while annoying this was perhaps a one-off mistake that was fixable. Recently I have realized that the underlying cause runs much deeper: what is taught by the sequences is a form of flawed truth-seeking (thought experiments favored over real world experiments) which inevitably results in errors, and the errors I take issue with in the sequences are merely examples of this phenomenon.

And these errors have consequences. Every single day, 100,000 people die of preventable causes, and every day we continue to risk extinction of the human race at unacceptably high odds. There is work that could be done now to alleviate both of these issues. But within the LessWrong community there is actually outright hostility to work that has a reasonable chance of alleviating suffering (e.g. artificial general intelligence applied to molecular manufacturing and life-science research) due to concerns arrived at by flawed reasoning.

I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good. One should work to develop one's own rationality, but I now fear that the approach taken by the LessWrong community as a continuation of the sequences may result in more harm than good. The anti-humanitarian behaviors I observe in this community are not the result of initial conditions but the process itself.

What next?

How do we fix this? I don't know. On a personal level, I am no longer sure engagement with such a community is a net benefit. I expect this to be my last post to LessWrong. It may happen that I check back in from time to time, but for the most part I intend to try not to. I wish you all the best.

A note about effective altruism…

One shining light of goodness in this community is the focus on effective altruism—doing the most good to the most people as measured by some objective means. This is a noble goal, and the correct goal for a rationalist who wants to contribute to charity. Unfortunately it too has been poisoned by incorrect modes of thought.

Existential risk reduction, the argument goes, trumps all forms of charitable work because reducing the chance of extinction by even a small amount has far more expected utility than would accomplishing all other charitable works combined. The problem lies in the likelihood of extinction, and the actions selected in reducing existential risk. There is so much uncertainty regarding what we know, and so much uncertainty regarding what we don't know, that it is impossible to determine with any accuracy the expected risk of, say, unfriendly artificial intelligence creating perpetual suboptimal outcomes, or what effect charitable work in the area (e.g. MIRI) is having to reduce that risk, if any.

This is best explored by an example of existential risk done right. Asteroid and cometary impacts are perhaps the category of external (not-human-caused) existential risk which we know the most about, and have done the most to mitigate. When it was recognized that impactors were a risk to be taken seriously, we recognized what we did not know about the phenomenon: what were the orbits and masses of Earth-crossing asteroids? We built telescopes to find out. What is the material composition of these objects? We built space probes and collected meteorite samples to find out. How damaging would an impact be for various material properties, speeds, and incidence angles? We built high-speed projectile test ranges to find out. What could be done to change the course of an asteroid found to be on a collision course? We have executed at least one impact probe and will monitor the effect that had on the comet's orbit, and have on the drawing board probes that will use gravitational mechanisms to move their target. In short, we identified what it is that we don't know and sought to resolve those uncertainties.

How then might one approach an existential risk like unfriendly artificial intelligence? By identifying what it is we don't know about the phenomenon, and seeking to experimentally resolve that uncertainty. What relevant facts do we not know about (unfriendly) artificial intelligence? Well, much of our uncertainty about the actions of an unfriendly AI could be resolved if we were to know more about how such agents construct their thought models, and relatedly what languages are used to construct their goal systems. We could also stand to benefit from knowing more practical information (experimental data) about in what ways AI boxing works and in what ways it does not, and how much that depends on the structure of the AI itself. Thankfully there is an institution that is doing that kind of work: the Future of Life Institute (not MIRI).

Where should I send my charitable donations?

Aubrey de Grey's SENS Research Foundation.

100% of my charitable donations are going to SENS. Why they do not get more play in the effective altruism community is beyond me.

If you feel you want to spread your money around, here are some non-profits which I have vetted for doing reliable, evidence-based work on singularity technologies and existential risk:

  • Robert Freitas and Ralph Merkle's Institute for Molecular Manufacturing does research on molecular nanotechnology. They are the only group that works on the long-term Drexlerian vision of molecular machines, and they publish their research online.
  • Future of Life Institute is the only existential-risk AI organization which is actually doing meaningful evidence-based research into artificial intelligence.
  • B612 Foundation is a non-profit seeking to launch a spacecraft with the capability to detect, to the extent possible, ALL Earth-crossing asteroids.

I wish I could recommend a skepticism, empiricism, and rationality promoting institute. Unfortunately I am not aware of an organization which does not suffer from the flaws I identified above.

Addendum regarding unfinished business

I will no longer be running the Rationality: From AI to Zombies reading group, as I am no longer in good conscience able or willing to host it, or participate in this site, even from my typically contrarian point of view. Nevertheless, I am enough of a libertarian that I feel it is not my role to put up roadblocks to others who wish to delve into the material as it is presented. So if someone wants to take over the role of organizing these reading groups, I would be happy to hand over the reins to that person. If you think that person should be you, please leave a reply in another thread, not here.

EDIT: Obviously I'll stick around long enough to answer questions below :)

Visions and Mirages: The Sunk Cost Dilemma

-5 OrphanWilde 20 May 2015 08:56PM

Summary

How should a rational agent handle the Sunk Cost Dilemma?

Introduction

You have a goal, and set out to achieve it.  Step by step, iteration by iteration, you make steady progress towards completion - but never actually get any closer.  You're deliberately not engaging in the sunk cost fallacy - at no point does the perceived cost of completion get higher.  But at each step, you discover another step you didn't originally anticipate, and had no priors for anticipating.

You're rational.  You know you shouldn't count sunk costs in the total cost of the project.  But you're now into twice as much effort as you would have originally invested, and have done everything you originally thought you'd need to do, but have just as much work ahead of you as when you started.

Worse, each additional step is novel; the additional five steps you discovered after completing step 6 didn't add anything to predict the additional twelve steps you added after completing step 19.  And after step 35, when you discovered another step, you updated your priors with your incorrect original estimate - and the project is still worth completing.  Over and over.  All you can conclude is that your original priors were unreliable.  Each update to your priors, however, doesn't change the fact that the remaining cost is always worth paying to complete the project.

You are starting to feel like you are caught in a penny auction for your time.

When do you give up your original goal as a mirage?  At what point do you give up entirely?

Solutions

The trivial option is to just keep going.  Sometimes this is the only viable strategy; if your goal is mandatory, and there are no alternative solutions to consider.  There's no guarantee you'll finish in any finite amount of time, however.

One option is to precommit; set a specific level of effort you're willing to engage in before stopping progress, and possibly starting over from scratch if relevant.  When bugfixing someone else's code on a deadline, my personal policy is to set aside enough time at the end of the deadline to write the code from scratch and debug that (the code I write is not nearly as buggy as that which I'm usually working on).  Commitment of this sort can work in situations in which there are alternative solutions or when the goal is disposable.
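To make the dilemma concrete, here is a toy model with entirely made-up numbers: a "just keep going" agent that correctly ignores sunk costs, against a precommitted agent with a hard effort budget, in a world where each round of work reveals another plausible-looking remaining cost:

```python
def naive_agent(value, revealed_costs):
    """Continue whenever the estimated remaining cost is below the project's
    value, ignoring sunk costs entirely."""
    spent = 0
    for remaining in revealed_costs:
        if remaining >= value:
            break             # only stops if the estimate ever exceeds the value
        spent += remaining    # pays the estimate... and a new estimate appears
    return spent

def precommitted_agent(value, revealed_costs, budget):
    """Same rule, plus a hard cap on total effort fixed in advance."""
    spent = 0
    for remaining in revealed_costs:
        if remaining >= value or spent >= budget:
            break
        spent += remaining
    return spent

# An adversary keeps revealing "just 40 more units" on a 100-unit project.
endless = [40] * 10
print(naive_agent(100, endless))              # 400: four times the project's value
print(precommitted_agent(100, endless, 120))  # 120: losses capped in advance
```

Each individual decision the naive agent makes is locally correct, which is exactly the trap described above; only a policy that counts total (sunk) expenditure escapes it.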

Another option is to discount sunk costs, but include them; updating your priors is one way of doing this, but isn't guaranteed to successfully navigate you through the dilemma.

Unfortunately, there isn't a general solution.  If there were, IT would be a very different industry.

Summary

The Sunk Cost Fallacy is best described as a frequently-faulty heuristic.  There are game-theoretic ways of extracting value from those who follow a strict policy of avoiding engaging in the Sunk Cost Fallacy which happen all the time in IT - frequent requirement changes to fixed-cost projects are a good example (which can go both ways, actually, depending on how the contract and requirements are structured).  It is best to always have an exit policy prepared.

Related Less Wrong Post Links

http://lesswrong.com/lw/at/sunk_cost_fallacy/ - A description of the Sunk Cost Fallacy

http://lesswrong.com/lw/9si/is_sunk_cost_fallacy_a_fallacy/ - Arguments that the Sunk Cost Fallacy may be misrepresented

http://lesswrong.com/lw/9jy/sunk_costs_fallacy_fallacy/ - The Sunk Cost Fallacy can be easily used to rationalize giving up

ETA: Post Mortem

Since somebody has figured out the game now, an explanation: everybody who spent time writing a comment insisting you -could- get the calculations correct, and that the imaginary calculations were simply incorrect? I mugged you. The problem is in doing the calculations -instead of- trying to figure out what was actually going on. You forgot there was another agent in the system with different objectives from your own. Here, I mugged you for a few seconds or maybe minutes of your time; in real life, it would be hours, weeks, months, or your money, as you keep assuming that it's your own mistake.

Maybe it is a buggy open-source library that has a bug-free proprietary version you pay for - they get you in the door, then charge you money once it's more expensive to back out than to continue. Maybe it's somebody who silently and continually moves work to your side of the fence on a collaborative project, where again it's more expensive to back out than to continue. Not counting all your costs opens you up to exploitative behaviors which add costs at the back-end.

In this case I was able to mug you in part because you didn't like the hypothetical, and fought it.  Fighting the hypothetical will always reveal something about yourself - in this case, fighting the hypothetical revealed that you were exploitable.

In real life I'd be able to mug you because you'd assume someone had fallen prone to the Planning Fallacy, as you assumed must have happened in the hypothetical.  In the case of the hypothetical, an evil god - me - was deliberately manipulating events so that the project would never be completed (Notice what role the -author- of that hypothetical played in that hypothetical, and what role -you- played?).  In real life, you don't need evil gods - just other people who see you as an exploitable resource, and will keep mugging you until you catch on to what they're doing.

Brainstorming new senses

25 lululu 20 May 2015 07:53PM

What new senses would you like to have available to you?

Often when new technology first becomes widely available, the initial limits are in the collective imagination, not in the technology itself (case in point: the internet). New sensory channels have a huge potential because the brain can process senses much faster and more intuitively than most conscious thought processes.

There are a lot of recent "proof of concept" inventions that show that it is possible to create new sensory channels for humans, with and without surgery. The most well known and simple example is an implanted magnet, which alerts you to magnetic fields (the trade-off being that you could never have an MRI). Cochlear implants are the most widely used human-created sensory channels (they send electrical signals directly to the nervous system, bypassing the ear entirely), but CIs are designed to emulate a sensory channel most people already have brain space allocated to. VEST is another example. Similar to CIs, VEST (versatile extra-sensory transducer) has 24 information channels and uses audio compression to encode sound. Unlike CIs, it is not implanted in the skull; instead, information is relayed through vibrating motors on the torso. After a few hours of training, deaf volunteers are capable of word recognition using the vibrations alone, and do so without conscious processing. Much like hearing, the users are unable to describe exactly what components make a spoken word intelligible; they just understand the sensory information intuitively. Another recent invention being tested (with success) is BrainPort glasses, which send electrical signals through the tongue (one of the most sensitive organs on the body). Blind people can begin processing visual information with this device within 15 minutes, and it is unique in that it is not implanted but worn. The sensory information feels like pop rocks at first, before the brain is able to resolve it into sight. Neil Harbisson (who is colorblind) has custom glasses which use sound tones to relay color information. Belts that vibrate when facing north give people a sense of north. Bottlenose can be built at home and gives a very primitive sense of echolocation. As expected, these all work better if people start young, as children.

What are the craziest and coolest new senses you would like to see made available using this new technology? I think VEST, at least, is available from Kickstarter, and one of the inventors suggested that it could be programmed to transmit any kind of data. My initial ideas on hearing about this possibility are just senses that some unusual people already have, or expansions on current senses. I think the real game changers are going to be totally new senses unrelated to our current sensory processing. Translating data into sensory information gives us access to intuition and processing speed otherwise unavailable.

My initial weak ideas:

  • mass spectrometer (uses reflected lasers to determine the exact atomic makeup of anything and everything)
  • proximity meter (but I think you would begin to feel like you had a physical aura or field of influence)
  • WIFI or cell signal
  • perfect pitch and perfect north, both super easy and only needing one channel of information (a smartwatch app?)
  • infrared or echolocation
  • GPS (this would involve some serious problem solving to figure out what data we should encode given limited channels; I think it could be done with 4 or 8 channels each associated with a cardinal direction - one possible encoding is sketched below)
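For the GPS bullet, here is one hypothetical encoding (my own sketch, not anything proposed by the VEST team): map the bearing toward a waypoint onto four channels, one per cardinal direction, with intensity falling off as the motor's direction diverges from the bearing.

```python
import math

def bearing_to_channels(bearing_deg):
    """Map a 0-360 degree bearing onto intensities for four motors (N/E/S/W).
    Intensity peaks when the motor's direction matches the bearing and fades
    to zero at 90 degrees away."""
    channels = {}
    for name, center in (("N", 0), ("E", 90), ("S", 180), ("W", 270)):
        diff = abs((bearing_deg - center + 180) % 360 - 180)  # wrapped to [0, 180]
        channels[name] = max(0.0, math.cos(math.radians(diff)))
    return channels

print(bearing_to_channels(45))  # heading NE: N and E both ~0.71, S and W silent
```

The cosine falloff makes intermediate bearings feel like smooth blends of adjacent motors rather than discrete jumps, which seems closer to how existing devices like the north-sensing belts behave.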

Someone working with VEST suggested:

  • compress global twitter sentiments into 24 channels. Will you begin to have an intuitive sense of global events?
  • encode stockmarket data. Will you become an intuitive super-investor?
  • encode local weather data (a much more advanced version of "I can feel it's going to rain in my bad knee)

Some resources for more information:

More?

What degree of cousins are you and I? Estimates of Consanguinity to promote feelings of kinship and empathy

1 chaosmage 20 May 2015 05:10PM

Epistemic status: Wild guesswork based on half-understood studies from way outside my field. More food for thought than trustworthy information.

tl/dr: Estimates of familial relatedness between people should help promote empathy, so here's how to make them - and might this be useful for Effective Altruism?

The why

I don't know how it is for you, but for me, knowing I'm related to someone makes a specific emotional difference. Scenario: I'm at a big family-and-friends get-together, I meet a guy, we get along. (For clarity, let's assume no sexual tension.) And then we're told we're third cousins via some weird aunt. From the moment I'm told, I feel different towards him. Firm, forthcoming, obliging. Some kind of basic kinship emotion, I guess, noticeable when it shifts on these rare occasions but basically going on, deep down in System 1, every time that emailing a remote uncle feels different from emailing a similarly remote associate.

Meanwhile, my System 2 has heard that all humans are at least 50th degree cousins and likes to point out everyone I've ever had sex with was a cousin of some degree. That similarly remote associate where I don't have that kinship feeling - he's a relative too, just a more distant one. And when I notice that, I get a bit of that kinship feeling too...

With me so far? Here's my thesis: the two human feelings of kinship and empathy are closely connected, and to make one of them more salient is to increase the salience of the other.

I don't think this has been tested properly. A. J. Jacobs, who is running a huge family reunion event in New York this summer, said "some ambitious psychology professor needs to conduct a study about whether we deliver lower electrical shocks to people if we know we’re related" and I think he's exactly right.

Has anybody here not heard of circles of empathy? They're a concept invented by the very cool 19th century rationalist William Edward Hartpole Lecky in his "History of European Morals From Augustus to Charlemagne". Peter Singer summarizes it as follows:

Lecky wrote of human concern as an expanding circle which begins with the individual, then embraces the family and ‘soon the circle... includes first a class, then a nation, then a coalition of nations, then all humanity, and finally, its influence is felt in the dealings of man [sic] with the animal world’.

There's more to read about this in Peter Singer's "The Expanding Circle" or Steven Pinker's "The Better Angels of Our Nature", but what strikes me about it is contained in that single sentence: The expansion that is described tracks actual genetic relatedness, or Consanguinity. The list goes down a gradient of (expected) genetic relatedness. This makes the size of the circle of empathy seem to depend on a threshold of how related you need to be to someone in order to care about them.

(Note that Lecky published his "History of European Morals" - with this inclusion of concern about animals - in 1869, i.e. only ten years after the publication of "On the Origin of Species". There was some animal rights legislation before Darwin, but animal rights as a movement only arose after we knew animals to be our relatives.)

On the other hand, those who would promote empathy have always relied on familial vocabulary, chiefly "brother" and "sister", to refer to people who evidently weren't actual brothers or sisters. Martin Luther King, Jesus, the Buddha, Mandela, Gandhi, they all do this. So maybe it works a bit. Maybe it helps trigger that emotional kinship response and that somehow helps people get along.

Now to see how these emotional responses would arise, we could discuss reciprocal altruism and gene-centered Darwinism and whatnot, but "The Selfish Gene" is required reading anyway and I assume you've done your homework. I'd like to instead go to the second part of my thesis, the one about increasing salience.

Recognizing you're related to somebody does something. (Especially if you have an incest fetish, of course.) I propose that whatever it does increases empathy. And empathy might not be a categorically good thing, but it comes pretty close, at least until you extend it to all food groups. So maybe we could increase empathy among people by pointing out their relatedness. And maybe we can do this more vividly, more strikingly than by simply saying "we're all descended from apes, so we're all related, duh" or by boring the non-nerd majority to death with talk of human genetic clustering and fixation indexes.

So I'd like to revisit that "brothers and sisters" thing from MLK and those other guys. Maybe they shouldn't have used figurative language. Maybe a more lasting feeling of kinship can be created by literal language: By telling people how related they are. Detailed ancestry information is being collected at various Wiki-like sites, but even assuming they'll grow and become less US-centric, they don't go back very far (except around very famous people) and what came before remains guesswork. So let's do some Fermi-ish estimates.

The how

The drop dead amazing Nature article Modelling the recent common ancestry of all living humans is way too careful and scientific to put an exact number on how long ago the last common ancestor lived, unfortunately. But the mean date their simulations come up with is 1415 BC, which is approximately 120 generations ago, so let's say really remote people like the Karitiana tribe are, at most, something like 125th degree cousins of all of us. So that's a useful upper bound for the degree of cousinhood between any two arbitrary humans, such as you and me.

The lower bound could be something like 3 - if you and I were that closely related, we'd share a great-great-grandparent and could probably ascertain rather than guess that. With fairly extensive genealogy, the lower bound might go up to around 5 - which is the level where you need to look at 64 ancestors for each of us who lived in the middle of the 19th century and failed to use Facebook. We'd find it hard to ascertain whether your great-great-great-great-grandmother Mary was identical to mine.

There are a lot of special cases where the lower bound can be higher. If both people involved know their families more than 3 generations back were deep-rooted peasant folks from two distinct populations, the history books might tell them how many centuries further back are very unlikely to contain a common ancestor. (This will of course be much rarer among descendants of immigrants, like Americans, than it is for citizens of older or more rural countries.) If they're of different ethnicities, castes or classes that wouldn't normally date each other 80 years ago, the lower bound should probably go up a few more generations. If both people involved are Icelanders, they can just look up their last common ancestor in the comprehensive Icelandic family tree. But let's assume you and I don't have any of these special cases, and we're stuck with a lower bound of 3. Now between that and 125, how do we narrow it down?

Turns out the authors of that gorgeous Nature paper don't hand out access to their simulations to random dudes who just email them. So let's see how far we get the hard way.

In a completely random mating model (where people do not tend to mate with people who happen to live near them, i.e. happen to be descendants of the same people), your number of ancestors doubles with every generation you go back, in a sort of ancestor tree that grows backwards. We're looking for the point where the two ancestor trees first meet. If we assume generations have homogeneous lengths (which implies further simplifying assumptions, like moms and dads being the same age) and further assume only people from within the same generation have kids with each other, cousins of the Nth degree have a common ancestor N+1 generations ago, and each has 2^(N+1) ancestors belonging to that generation.

This means that for you and me to be, say, 15th degree cousins, our two sets of 2^(15+1) = 65536 ancestors have to have one person in common, some 480 years ago, assuming 30 years as mean parenthood age. Of course we each probably have fewer than 65536 unique ancestors due to... um... "reticulations".

But empirically, it seems that "a pair of modern Europeans living in neighboring populations share around 2–12 genetic common ancestors from the last 1,500 years" and even individuals from opposite ends of Europe will normally have common ancestors if you search back 3000 years (source). That isn't what you get from the simplistic model above - the numbers of ancestors it calculates exceed the world population less than 32 generations (about 800 years) ago. The empirical genetic data from this paper would indicate that it is likely the median first common ancestor between me and anybody in central Europe is somewhere like 1200 years (or 40 generations) ago and any two people anywhere in Europe would probably be at most 100th degree cousins.
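A quick sanity check on that simplistic model (a minimal sketch; the 300 million figure for the medieval world population is my own rough assumption):

```python
# Find the generation at which ancestor "slots" in the naive doubling model
# exceed a rough medieval world population.
medieval_population = 300_000_000

g = 1
while 2 ** g <= medieval_population:
    g += 1
print(f"2^{g} = {2 ** g:,} slots at {g} generations (~{g * 30} years) back")
# -> overflows within ~29 generations, so real ancestor trees must overlap
#    heavily ("reticulations"), which is why the empirical numbers differ.
```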

Around 600 years ago is a good time to look at, because that's shortly before intercontinental travel started to intricately connect all regions of the world, including genetically. If most of your 600-years-ago ancestors lived outside Europe, you and I might still be less than 25th-degree cousins - maybe you have some ancestor who left for Europe 300 years ago, leaving siblings behind (your ancestors) and having kids in Europe (mine). Or vice versa. But that kind of thing is unlikely, and since we're doing rough estimates I suggest we round that probability down to zero.

In genetic studies, no other continent is anywhere near as well-studied as Europe, so I guess we'll just have to roll with it and assume that other places are about the same as this paper found, and that the nice exponential drop-off with geographic distance that's the case in Europe is also the case elsewhere. America and Australia, as continents of immigrants, continue to be special cases. But for two people with families from, say, West Africa, I'd be comfortable assuming that if they're from roughly the same large region (say around the Bight of Benin) they're probably something like 40th degree cousins, and if not, they're still something like 100th degree cousins at least.

It gets only slightly more complicated if the set of ancestors you know - say your four grandparents - are a mix of descendants from different regions or continents. Just add the number of generations between you and them to your expected degree of cousinhood to everybody from that region or continent.

Needless to say these are all wild guesses. I'm basically hoping someone more qualified than me will see this and be horrified enough to go do the job properly.

Now I'm not an American, but statistically you probably are, and you might be more interested in knowing how closely you're related to other Americans - your boss, your sexual partners, or Mel Gibson. The bad news is that as a member of a nation of relatively recent immigrants, and particularly if your ancestors didn't all come from different continents, you have a harder time estimating most recent common ancestors with people than most other people on Earth. The good news, however, is that the data collected at the large ancestry sites ancestry.com, FamilySearch.org, Geni.com and WikiTree.com are all growing fastest in the US-centric part of their "world trees".

For cousinhood between people whose ancestors seem to have lived on entirely separate continents as far as anyone knows, I think we can only fall back on our upper bound of 125 degrees of cousinhood. Things get fuzzy so far back; the world population was much smaller, and the population of those who have descendants living today is smaller still. Shared ancestry within any particular generation remains unlikely, but over the centuries and millennia, between trade (particularly in slaves), the various empires and the mass rapes of warfare, genes did get mixed around. Again, see that spectacular Nature paper if you still haven't.

Side note: The most recent common ancestor of two arbitrarily chosen people on different continents is likely to be someone who had kids on different continents. So it is probably a very rich person, a sailor or a soldier, i.e. a male. In general, the number of unique males in anybody's ancestor tree will likely be much smaller than the number of unique females. I expect the difference will be sharper in most recent common ancestors of humans from different continents, because women have shorter fertility windows inside which to travel intercontinentally and don't seem to have moved nearly as much as men except as slaves.

The point of all this is simple. Now you can look at somebody and figure she's not only your cousin, you even have a guess as to what degree of cousin she is. I like to do that when I'm angry with people, because for me, it makes a distinct emotional difference. Maybe try it and see if it works for you too.

Relation to the care allocation problem

I suspect this cousinhood thing could be a fairly principled solution to the problem of how to allocate caring between humans and animals, which Yvain/Scott laid out in a recent SSC post. Why not go by actual (known or estimated) blood relations, and privilege closer relatives over more distant ones?

Our last common ancestor with chimps lived something like 5 to 6 million years ago, so our ancestor trees merge about 250,000 (human) generations ago, making chimps something like quarter-million-degree cousins of all of us. Generations get a lot shorter further back, so our last common ancestor with cattle and dogs, about 92 million years ago, may be 30 million generations ago. Birds would be much more distant; our last common ancestor with them lived around 310 million years ago, and so forth. (Richard Dawkins' The Ancestor's Tale has much more on this.) For me, this maps rather nicely onto my intuitive prejudices as to how much I should care about which creatures. It fails to capture that I care for plants far more than I care for bacteria, but EA has nothing to improve on in that department.
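The back-of-envelope arithmetic behind those generation counts (the generation lengths here are rough guesses of mine; real ones shrink for the small, short-lived ancestors further back in time):

```python
print(5_500_000 // 22)   # ~250,000 human-length generations since the chimp split
print(92_000_000 // 3)   # ~30 million short generations since the cattle/dog split
```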

If EA has to have impartiality in the sense that your neighbor can't be more important to you than a tribesman in Mongolia, this isn't EA. Quoth Yvain:

allowing starving Third World people into the circle of concern totally pushes out most First World charities like art museums and school music programs and holiday food drives. This is a scary discovery and most people shy away from it. Effective altruists are the people who are selected for not having shied away from it.

So anybody trying to grow EA might want to make that step easier. Maybe a "closeness multiplier" on units of caring works better than a series of unprincipled exceptions, and still gets across the idea that units of caring are to be distributed between everybody (or everybody's QALYs), if unevenly. And then to become more impartial would be to have that multiplier approach 1.

And if that were the case, my personal preference for how to design that multiplier would be that it shouldn't rely on arbitrary constructs like citizenships. Maybe if EAs want to find a principled solution to the care allocation problem, consanguinity should be one of the options.

[Link] A Darwinian Response to Sam Harris’s Moral Landscape Challenge

1 TheSurvivalMachine 20 May 2015 01:44PM

I noticed that there has been some earlier discussion about Sam Harris’s Moral Landscape Challenge here at LW. As a writer on the Swedish politico-philosophical blog The Inverted Fable of Reality, I would like to share a response to the challenge, written by our main contributor, which I believe is interesting to read even if you are not familiar with The Moral Landscape or its content. See this link for the response and a short explanation of the challenge.

The response takes a different approach from most responses to the challenge. It is divided into four parts: it starts by asking which ethic is most compatible with science and reality, and then tries to answer that question.

What Would You Do If You Only Had Six Months To Live?

9 Sable 20 May 2015 12:52AM

Recently, I've been pondering situations in which a person realizes, with (let's say) around 99% confidence, that they are going to die within a set period of time.

The reason for this could be a kind of cancer without any effective treatment, an injury of some kind, or a communicable disease or virus (such as Ebola). More generally, the simple fact that, until Harry Potter-Evans-Verres makes the Philosopher's Stone available to us muggles, we're all going to die eventually makes this kind of consideration valuable.

Let's say that you felt ill, and decided to visit the doctor.  After the appropriate tests by the appropriate medical professionals, an old man with a kind face tells you that you have brain cancer.  It is inoperable (or the operation has less than a 1% success rate) and you are given six months to live.  This kindly old doctor adds that he is very sorry, and gives you a prescription for something to deal with the symptoms (at least for a while).

Furthermore, you understand something of probability, and so while you might hope for a miracle, you know better than to count on one.  Which means that even if there exists a .0001% chance you'll live for another 50 years, you have to act as though you're only going to live another six months.

What should you do?

The first answer I thought of was, "go skydiving," which is a cheeky shorthand for trying to enjoy your own life as much as you can until you die.  Upon reflection, however, that seems like an awfully hedonistic answer, doesn't it?  Given this philosophy, you should gorge yourself on donuts, spend your life's savings on expensive cars and prostitutes, and die with a smile on your face.

Something doesn't seem quite right about this approach.  For one, it completely ignores things like trying to take care of the people close to you that you're leaving behind, but even if you're a friendless orphan it doesn't make sense to live like that.  Dopamine is not happiness, and feeling alive isn't necessarily what life is about.  I took a university course centered around Aristotle's Nichomachean Ethics, and one of the examples we used to distinguish a "happy" life from a "well-spent" life was that of the math professor who spends her days counting blades of grass.  While counting those blades of grass might make her happiest, she is still wasting her life and potential.  Likewise, the person who spends their short remaining months in self-indulgent indolence is wasting a chance to do something - what, I'm not quite sure, but still something worthwhile.

The second answer I thought of seems to be the reasonable one - spend your six months preparing yourself and your loved ones for your inevitable demise.  There are things to get in order, funeral arrangements to make, a will to update, and then there's making sure your dependents are taken care of financially.  You never thought dying involved so much paperwork!  Also, you might consider making peace with whatever beliefs you have about the world (religious or not), and trying to accept the end so you can enjoy what time you have left.

This seems to be the technically correct answer to me - the kind of answer that is consistent with a responsible, considerate individual faced with such a situation.  However, much like the Ten Commandments, the kind of morality that this approach shows seems to be a bare-minimum morality.  The kind of morality expressed by "Thou Shalt Not Kill," rather than the kind of over-and-above morality expressed by "Thou Shalt Ensure No One Shall Ever Die Again, Ever", which seems to be popular on LessWrong and in the Effective Altruism community.  Or at the very least, seems to be expressed by Mr. Yudkowsky.

So I started wondering - what exactly would someone who judges morality by expected utility and who subscribes to an over-and-above approach do with the knowledge that they were going to die?

There's an old George Carlin joke about death:

But you can entertain, and the only reason I suggest you can have something to do with the way you die is a little known...and less understood portion of death called..."The Two Minute Warning." Obviously, many of you do not know about it, but just as in football, two minutes before you die, there is an audible warning: "Two minutes, get your **** together" and the only reason we don't know about it is 'cause the only people who hear it...die! And they don't have a chance to explain, you know. I don't think we'd listen anyway.

But there is a two minute warning and I say use those two minutes. Entertain. Uplift. Do something. Give a two minute speech. Everyone has a two minute speech in them. Something you know, something you love. Your vacation, man...two minutes. Really do it well. Lots of feeling, lots of spirit and build- wax eloquent for the first time. Reach a peak. With about five seconds left, tell them, "If this is not the truth, may God strike me dead!' THOOM! From then on, you command much more attention.

As usual with Mr. Carlin's humor, there is a very interesting idea hidden in the humor.  Here, the idea is this: There is power in knowing when you will die.  Note that this isn't just having nothing left to lose - because people who have nothing left to lose often still have their lives.

My third idea, attempting to synthesize all of this, has to do with self-immolation.  The idea of setting yourself on fire as an act of political protest.  Please note that I am not recommending that anyone do this (cough, any lawyers listening, cough).

It's just that martyrdom is so much more palatable a concept when you know you're going to die anyway.  Instead of waiting for the cancer to kill you, why shouldn't you sell your life for something more valuable?  I'm not saying don't make arrangements for your death, because you should, but if you can use your death to galvanize people to action, shouldn't you?  In Christopher Nolan's Batman Begins, the deaths of Thomas and Martha Wayne were the catalyst that caused Gotham to rejuvenate itself from the brink of economic collapse.  If your death could serve a similar purpose, and you are committed to making the world a better place...

And maybe you don't have to actually commit suicide by criminal (or cop, or fire, etc...) but the risk-reward calculation for any extremely ethical but extremely dangerous activity has changed.  You could volunteer to fight Ebola in Africa, knowing that if you catch it, you'll only be dying a few months ahead of schedule.  You could try to videotape the atrocities committed by some extremist group and post it on the internet.  And so on.

In summary, it seems to me that people don't tend to think about dying as an act, as something you do, instead of as something that happens to you.  It's a lot like breathing: generally involuntary, but you still have a say in exactly when it happens.  I'm not saying that everyone should martyr themselves for whichever cause they believe in.  But if you happen to be told that you're already dying...from the standpoint of expected utility, becoming a martyr makes a lot more sense.  Which isn't exactly intuitive, but it's what I've come up with.

Now pretend that the kindly old doctor has shuffled into the room, blinking as he shuffles a few papers.  "I'm very sorry," he says, "But you've only got about 70 years to live..."

Log-normal Lamentations

11 Thrasymachus 19 May 2015 09:12PM

[Morose. Also very roughly drafted.]

Normally, things are distributed normally. Human talents may turn out to be one of these things. Some people are lucky enough to find themselves on the right side of these distributions – smarter than average, better at school, more conscientious, whatever. To them go many spoils – probably more so now than at any time before, thanks to the information economy.

There’s a common story told about a hotshot student at school whose ego crashes to earth when they go to university and find themselves among a group all as special as they thought they were. The reality might be worse: many of the groups the smart or studious segregate into (physics professors, Harvard undergraduates, doctors) have threshold (or near-threshold) effects: only those with straight A’s, only those with IQs > X, etc. need apply. This introduces a positive skew to the population: most members (and the median) are below the group’s average, brought up by a long tail of the (even more) exceptional. Instead of comforting ourselves by looking at the entire population to which we compare favorably, most of us will look around our peer group, find ourselves in the middle, and have to look a long way up to the best. 1

[Figure: a normal distribution]
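
(A minimal sketch of this selection effect in R, with an invented cutoff and population:)

set.seed(1)
talent <- rnorm(1e6)           # a whole population's "talent": mean 0, sd 1
field <- talent[talent > 2]    # a field that only admits those 2 SD above the mean
mean(field)                    # ~2.37
median(field)                  # ~2.28 -- below the field's own average
# positive skew: most members sit below the average, pulled up by a long right tail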

Yet part of growing up is recognizing there will inevitably be people better than you are – the more able may be able to buy their egos time, but no more. But that needn’t be so bad: in several fields (such as medicine) it can be genuinely hard to judge ‘betterness’, and so harder to find exemplars to illuminate your relative mediocrity. Often there are a variety of dimensions to being ‘better’ at something: although I don’t need to try too hard to find doctors who are better at some aspect of medicine than I (more knowledgeable, kinder, more skilled in communication etc.) it is mercifully rare to find doctors who are better than me in all respects. And often the tails are thin: if you’re around 1 standard deviation above the mean, people many times further from the average than you are will still be extraordinarily rare, even if you had a good yardstick by which to compare them to yourself.

Look at our thick-tailed works, ye average, and despair! 2

One nice thing about the EA community is that they tend to be an exceptionally able bunch: I remember being in an ‘intern house’ that housed the guy who came top in philosophy at Cambridge, the guy who came top in philosophy at Yale, and the guy who came top in philosophy at Princeton – and although that isn’t a representative sample, we seem to be drawn disproportionately not only from those who went to elite universities, but those who did extremely well at elite universities. 3 This sets the bar very high.

Many of the ‘high impact’ activities these high achieving people go into (or aspire to go into) are more extreme than normal(ly distributed): log-normal commonly, but it may often be Pareto. The distribution of income or outcomes from entrepreneurial ventures (and therefore upper-bounds on what can be ‘earned to give’), the distribution of papers or citations in academia, the impact of direct projects, and (more tenuously) degree of connectivity or importance in social networks or movements would all be examples: a few superstars and ‘big winners’, but orders of magnitude smaller returns for the rest.
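
(A toy simulation of such a heavy-tailed field, with arbitrary log-normal parameters rather than real data:)

set.seed(1)
n <- 1e5
outcomes <- rlnorm(n, meanlog = 0, sdlog = 2)   # toy "impact" per person
top1 <- sort(outcomes, decreasing = TRUE)[1:(n/100)]
sum(top1) / sum(outcomes)   # ~0.37: the top 1% capture over a third of the total
median(outcomes)            # ~1: the median outcome, orders of magnitude below
max(outcomes)               # the single biggest winner, in the thousands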

Insofar as I have an ‘EA career path’, mine is earning to give: if I were trying to feel good about the good I was doing, my first port of call would be my donations. In sum, I’ve given quite a lot to charity – ~£15,000 and counting – which I’m proud of. Yet I’m no banker (or algo-trader) – those who are really good (or lucky, or both) can end up out of university with higher starting salaries than my peak expected salary, and so can give away more than ten times more than I will be able to. I know several of these people, and the running tally of each of their donations is often around ten times my own. If they or others become even more successful in finance, or very rich starting a company, there might be several more orders of magnitude between their giving and mine. My contributions may be little more than a rounding error to their work.

A shattered visage

Earning to give is kinder to the relatively minor players than other ‘fields’ of EA activity, as even though Bob’s or Ellie’s donations are far larger, they do not overdetermine my own: that their donations dewormed 1000x children does not make the 1x I dewormed any less valuable. It is unclear whether this applies to other ‘fields’: Suppose I became a researcher working on a malaria vaccine, but this vaccine is discovered by Sally the super scientist and her research group across the world. Suppose also that Sally’s discovery was independent of my own work. Although it might have been ex ante extremely valuable for me to work on malaria, its value is vitiated when Sally makes her breakthrough, in the same way a lottery ticket loses value after the draw.

So there are a few ways an Effective Altruist mindset can depress our egos:

  1. It is generally a very able and high achieving group of people, setting the ‘average’ pretty high.
  2. ‘Effective Altruist’ fields tend to be heavy-tailed, so that being merely ‘average’ (for EAs!) in something like earning to give means having a much smaller impact when compared to one of the (relatively common) superstars.
  3. (Our keenness for quantification makes us particularly inclined towards and able to make these sorts of comparative judgements, ditto the penchant for taking things to be commensurate).
  4. Many of these fields have ‘lottery-like’ characteristics where ex ante and ex post value diverge greatly. ‘Taking a shot’ at being an academic or entrepreneur or politician or leading journalist may be a good bet ex ante for an EA because the upside is so high even if their chances of success remain low (albeit better than the standard reference class). But if the median outcome is failure, the majority who will fail might find the fact it was a good idea ex ante of scant consolation – rewards (and most of the world generally) run ex post facto.
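
(To put toy numbers on this last point - the probability and payoff below are invented:)

p <- 0.01                  # chance of ending up the superstar
V <- 1e6                   # impact if you do (arbitrary units)
p * V                      # ex ante expected impact: 10,000
set.seed(1)
outcomes <- rbinom(1e5, 1, p) * V
mean(outcomes)             # ~10,000: matches the ex ante expectation
median(outcomes)           # 0: the median outcome, ex post, is failure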

What remains besides

I haven’t found a ready ‘solution’ for these problems, and I’d guess there isn’t one to be found. We should be sceptical of ideological panaceas that can do no wrong and everything right, and EA is no exception: we should expect it to have some costs, and perhaps this is one of them. If so, better to accept it rather than defend the implausibly defensible.

In the same way I could console myself, on confronting a generally better doctor: “Sure, they are better at A, and B, and C, … and Y, but I’m better at Z!”, one could do the same with regard to the axes of one’s ‘EA work’. “Sure, Ellie the entrepreneur has given hundreds of times more money to charity, but what’s she like at self-flagellating blog posts, huh?” There’s an incentive to diversify as (combinatorially) it will be less frequent to find someone who strictly dominates you, and although we want to compare across diverse fields, doing so remains difficult. Pablo Stafforini has asked elsewhere whether EAs should be ‘specialising’ more instead of spreading their energies over disparate fields: perhaps this makes that less surprising. 4

Insofar as people’s self-esteem is tied up with their work as EAs (and, hey, shouldn’t it be, in part?), there is perhaps a balance to be struck between soberly and frankly discussing the outcomes and merits of our actions, and being gentle to avoid hurting our peers by talking down their work. Yes, we would all want to know if what we were doing was near useless (or even net negative), but this should be broken with care. 5

‘Suck it up’ may be the best strategy. These problems become more acute the more we care about our ‘status’ in the EA community; the pleasure we derive from not only doing good, but doing more good than our peers; and our desire to be seen as successful. Good though it is for these desires to be sublimated to better ends (far preferable all else equal that rivals choose charitable donations rather than Veblen goods to be the arena of their competition), it would be even better to guard against these desires in the first place. Primarily, worry about how to do the most good. 6

Notes:

  1. As further bad news, there may be progression of ‘tiers’ which are progressively more selective, somewhat akin to stacked band-pass filters: even if you were the best maths student at your school, then the best at university, you may still find yourself plonked around median in a positive-skewed population of maths professors – and if you were an exceptional maths professor, you might find yourself plonked around median in the population of Fields Medalists. And so on (especially – see infra – if the underlying distribution is something scale-free).
  2. I wonder how much this post is a monument to the grasping vaingloriousness of my character…
  3. Pace: academic performance is not the only (nor the best) measure of ability. But it is a measure, and a fairly germane one for the fairly young population ‘in’ EA.
  4. Although there are other more benign possibilities, given diminishing marginal returns and the lack of people available. As a further aside, I’m wary of arguments/discussions that note bias or self-serving explanations that lie parallel to an opposing point of view (“We should expect people to be more opposed to my controversial idea than they should be due to status quo and social desirability biases”, etc.) First because there are generally so many candidate biases available they end up pointing in most directions; second because it is unclear whether knowing about or noting biases makes one less biased; and third because generally more progress can be made on object level disagreement than on trying to evaluate the strength and relevance of particular biases.
  5. Another thing I am wary of is Crocker’s rules: the idea that you unilaterally declare: ‘don’t worry about being polite with me, just tell it to me straight! I won’t be offended’. Naturally, one should try and separate one’s sense of offense from whatever information was there – it would be a shame to reject a correct diagnosis of our problems because of how it was said. Yet that is very different from trying to eschew this ‘social formatting’ altogether: people (myself included) generally find it easier to respond well when people are polite, and I suspect this even applies to those eager to make Crocker’s Rules-esque declarations. We might (especially if we’re involved in the ‘rationality’ movement) want to overcome petty irrationalities like incorrectly updating on feedback because of an affront to our status or self esteem. Yet although petty, they are surprisingly difficult to budge (if I cloned you 1000 times and ‘told it straight’ to half, yet made an effort to be polite with the other half, do you think one group would update better?) and part of acknowledging our biases should be an acknowledgement that it is sometimes better to placate them rather than overcome them.
  6. cf. Max Ehrmann, who put it well:

    … If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself.

    Enjoy your achievements as well as your plans. Keep interested in your own career, however humble…

The Best Popular Books on Every Subject

13 iarwain1 19 May 2015 01:02AM

I enjoy reading popular-level books on a wide variety of subjects, and I love getting new book recommendations. In the spirit of lukeprog's The Best Textbooks on Every Subject, can we put together a list of the best popular books on every subject?

Here's what I mean by popular-level books:

  • Written very well and clearly, preferably even entertaining.
  • Does not require the reader to write anything (e.g., practice problems) or do anything beyond just reading and thinking, except perhaps on very rare occasions.
  • Cannot be "heavy" reading that requires the reader to proceed slowly and carefully and/or do lots of heavy thinking.
  • Can be understood by anyone with a decent high school education (not including calculus). However, sometimes this requirement can be circumvented, if the following additional criteria are met:
    • There must be other books on this list that cover all the prerequisite information.
    • When you suggest the book, list any prerequisites.
    • There shouldn't be more than 2 or 3 prerequisites.
Textbooks are actually ok, as long as they meet all the above criteria.

I'm going to start off by also requiring the following, as per lukeprog. But if people prefer I might relax these:
  1. Post the title of your favorite book on a given subject.
  2. You must have read at least two other books on that same subject.
  3. You must briefly name the other books you've read on the subject and explain why you think your chosen book is superior to them.
ETA: If you really liked a book but didn't read alternatives, an easy way to fulfill these extra requirements is to look at reviews at Amazon, Goodreads, or similar. Usually you can find reviews that compare the book to one or more alternatives.

Finally, the purpose of this list is to try to be as comprehensive as possible. Copying books that have already been recommended on other lists is therefore to be encouraged, even if you yourself haven't read those books.

ETA 2: It seems everybody has their own ideas about what should be the criteria for this list. So how about everybody just add in books using whatever criteria they would prefer for a list of "The Best Popular Books on Every Subject".

We Should Introduce Ourselves Differently

47 NancyLebovitz 18 May 2015 08:48PM

I told an intelligent, well-educated friend about Less Wrong, so she googled, and got "Less Wrong is an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking." and gave up immediately because she'd never heard of the biases.

While hers might not be the best possible attitude, I can't see that we win anything by driving people away with obscure language.

Possible improved introduction: "Less Wrong is a community for people who would like to think more clearly in order to improve their own and other people's lives, and to make major disasters less likely."

The ganch gamble

7 Romashka 18 May 2015 06:43PM

[Translated from Yu. V. Pukhnatchov, Yu. P. Popov. *Mathematics without formulae*. - Moscow. - 'Stoletie'. - 1995. - pp. 404-405. All mistakes are my own.]

The East is famous for her legends... They say that once upon a time, in a certain town, there lived two well-known carvers of ganch (alabaster that hasn't quite set yet). And their mastery was so great, and their ornaments were so delightful, that the people simply could not decide which one was more skillful.

And so a contest was devised. A room of a house just built, which was to be decorated with carvings, was partitioned into two halves by a [nontransparent] curtain. The masters went in, each into his own place, and set to work.

And when they finished and the curtain was removed, the spectators' awe knew no bounds...

continue reading »

Strategies and tools for getting through a break up

24 lululu 18 May 2015 06:01PM

Background:

I was very recently (3 weeks now) in a relationship that lasted for 5.5 years. My partner had been fantastic through all those years and we were suffering no conflict, no fights, no strain or tension. My partner also was prone to depression, and is/was going through an episode of depression. I am usually a major source of support at these times. Six months ago we opened our relationship. I wasn't dating anyone (mostly due to busy-ness), and my partner was, though not seriously. I felt him pulling away somewhat, which I (correctly) attributed mostly to depression and which nonetheless caused me some occasional moments of jealousy. But I was overall extremely happy with this relationship, very committed, and still very much in love as well. It was quite a surprise when my partner broke up with me one Wednesday evening. 

After we had a good cry together, the next morning I woke up and immediately started researching what the literature said about breaking up. My goals were threefold:

 

  1. Stop feeling so sad in the immediate moment
  2. "Get over" my partner
  3. Internalize any gains I had made over the course of our relationship or any lessons I had learned from the break up

 

I made most of my gains in the first few days, by day 3 I was 60% over it. Two weeks later I was 99.5% over the relationship, with a few hold-over habits and tendencies (like feeling responsible for improving his emotional state) which are currently too strong but which will serve me well in our continuing friendship. My ex, on the other hand (no doubt partially due to the depression) is fine most of the time but unpredictably becomes extremely sad for hours on end. Originally this was guilt at having hurt me but now it is mostly nostalgia+isolation based. I hope to continue being close friends and I've been doing my best to support him emotionally, at the distance of a friend. At the same time, I've started semi-seriously dating a friend who has had a crush on me for some time, and not in a rebound way. Below are the states of mind and strategies that allowed me to get over it, fast and with good personal growth. 

Note: mileage may vary. I have low neuroticism and a slightly higher than average base level of happiness. You might not get over the relationship in 2 weeks, but your getting-over-it will certainly be sped up from its default speed.

 

Strategies (in order of importance)

1. Decide you don't want to get back in the relationship. Decide that it is over and given the opportunity, you will not get back with this person. If you were the breaker-upper, you can skip this step.

Until you can do this, it is unlikely that you will get over it. It's hard to ignore an impulse that you agree with wholeheartedly. If you're always hoping for an opportunity or an argument or a situation that will bring you back together, most of your mental energy will go towards formulating those arguments, planning for that situation, imagining that opportunity. Some of the below strategies can still be used, but spend some serious time on this first one. It's the foundation of everything else. There are some facts that can help you convince the logical part of your brain that this is the correct attitude.

  • People in on-and-off relationships are less satisfied, feel more anxiety about their relationship status, and continue to cycle on-and-off even after couples add additional constraints like cohabitation or marriage
  • People in tumultuous relationships are much less happy than singles
  • Wanting to stay in a relationship is reinforced by many biases (status quo bias, ambiguity effect, choice supportive bias, loss aversion, mere-exposure effect, ostrich effect). For someone to break through all those biases and end things, they must be extremely unhappy. If your continuing relationship makes someone you love extremely unhappy, it is a disservice to capitalize on those biases in a moment of weakness and return to the relationship.
  • Being in a relationship with someone who isn't excited about and pleased by you is settling for an inferior quality of relationship. The amazing number of date-able people in the world means settling for this is not an optimal decision. Contrast this to a tribal situation where replacing a lost mate was difficult or impossible. All these feelings of wanting to get back together evolved in a situation of scarcity, but we live in a world of plenty. 
  • Intermittent rewards are the most powerful, so an on-again-off-again relationship has the power to make you commit to things you would never commit to given a new relationship. The more hot-and-cold your partner is, the more rewarding the relationship seems and the less likely you are to be happy in the long term. Only you can end that tantalizing possibility of intermittent rewards by resolving not to partake if the opportunity arises. 
  • Even if some extenuating circumstance could explain away their intention to break up (depression, bipolar, long-distance, etc.), it is belittling to your ex-partner to try to invalidate their stated feelings. Do not fall into the trap of feeling that you know more about a person's inner state than they do. Take it at face value and act accordingly. Even if this is only a temporary state of mind for them, they will likely be in the same state of mind again at some point.
More arguments depend on your situation. Like leftover french fries, very few relationships are as good when you try to revive them; it's better just to get new french fries.


 

2. Talk to other people about the good things that came of your break-up.  (This can also help you arrive at #1, not wanting to get back together)

I speculate that benefits from this come from three places. First, talking about good things makes you notice good things, and talking in a positive attitude makes you feel positive. Second, it re-emphasizes to your brain that losing your significant other does not mean losing your social support network. Third, it acts as a mild commitment mechanism - it would be a loss of face to go on about how great you're doing outside the relationship and later have to explain you jumped back in at the first opportunity.

You do not need to be purely positive. If you are feeling sadness, it sometimes helps to talk about this. But don't dwell only on the sadness when you talk. When I was talking to my very close friends about all aspects of my feelings, I still tried to say two positive things for every negative thing. For example: "It was a surprise, which was jarring and unpleasant and upended my life plans in these ways. But being a surprise, I didn't have time to dread and dwell on it beforehand. And breaking up sooner is preferable to a long decline in happiness for both parties, so it's better to break up as soon as it becomes clear to either party that the path is headed downhill, even if it is surprising to the other party."

Talk about the positives as often as possible without alienating people. The people you talk to do not need to be serious close friends. I spent a collective hour and a half talking to two OKCupid dates about how many good things came from the break up. (Both dates had been scheduled before actually breaking up, both people had met me once prior, and both dates went surprisingly well due to sympathy, escalating self-disclosure, and positive tone. I signaled that I am an emotionally healthy person dealing well with an understandably difficult situation.)

If you feel that you don't have any candidates for good listeners, either because the break up was due to some mistake or infidelity of yours, or because you are socially isolated/anxious, writing is an effective alternative to talking. Study participants recovered quicker when they spent 15 minutes writing about the positive aspects of their break up; participants with three 15-minute sessions did better still. And it can benefit anyone to keep a running list of positives to bring up in conversation.

 

3. Create a social support system

Identify who in your social network can still be relied on as a confidant and/or a neutral listener. You would be surprised at who still cares about you. In my breakup, my primary confidant was my ex's cousin, who also happens to be my housemate and close friend. His mom and best friend, both in other states, also made the effort to inquire about my state of mind. Most of the time, even people who you consider your partner's friends still feel enough allegiance to you and enough sympathy to be good listeners and through listening they can become your friends.

If you don't currently have a support system, make one! OKCupid is a great resource for meeting friends outside of just dating, and people are way way more likely to want to meet you if you message them with a "just looking for friends" type message. People you aren't currently close to but who you know and like can become better friends if you are willing to reveal personal/vulnerable stories. Escalating self-disclosure + symmetrical vulnerability = feelings of friendship. Break ups are a great time for this to happen because you've got a big vulnerability, and one which almost everyone has experienced. Everyone has stories to share and advice to give on the topic of breaking up.

 

4. Intentionally practice differentiation

One of the most painful parts of a break up is that so much of your sense-of-self is tied into your relationship. You will be basically rebuilding your sense of self. Depending on the length and the committed-ness of the relationship, you may be rebuilding it from the ground up. Think of this as an opportunity. You can rebuild it in any way you desire. All the things you used to like before your relationship, all the interests and hobbies you once cared about, those can be reincorporated into your new, differentiated sense of self. You can do all the things you once wished you did.

Spend at least 5 minutes thinking about what your best self looks like. What kind of person do you wish to be? This is a great opportunity to make some resolutions. Because you have a fresh start, and because these resolutions are about self-identification, they are much more likely to stick. Just be sure to frame them in relation to your sense-of-self: not 'I will exercise,' instead 'I'm a fit active person, the kind of person who exercises'; not 'I want to improve my Spanish fluency' but 'I'm a Spanish-speaking polyglot, the kind of person who is making a big effort to become fluent.'

Language is also a good tool to practice differentiation. Try not to use the words "we," "us," or "our," even in your head. From now on, it is "s/he and I," "me and him/her," or "mine and his/hers." Practice using the word "ex" a lot. Memories are re-formulated and overwritten each time we revisit them, so in your memories make sure to think of you two as separate independent people and not as a unit.

 

5. Make use of the following mental frameworks to re-frame your thinking:

Over the relationship vs. over the person

You do not have to stop having romantic, tender, or lustful feelings about your ex to get over the relationship. Those types of feelings are not easily controlled, but you can have those same feelings for good friends or crushes without it destroying your ability to have a meaningful platonic relationship; why should this be different?

Being over the relationship means: 

 

  • Not feeling as though you are missing out on being part of a relationship.
  • Not dwelling/ruminating/obsessing about your ex-partner (includes positive, negative, and neutral thoughts alike: "they're so great," "I hate them and hope they die," and "I wonder what they are up to").
  • Not wishing to be back with your ex-partner.
  • Not making plans that include consideration of your ex-partner because these considerations are no longer important (this includes considerations like "this will make him/her feel sorry I'm gone," or "this will show him/her that I'm totally over it")
  • Being able to interact with people without your ex-partner at your side and not feel weird about it, especially things you used to do together (eg. a shared hobby or at a party)
  • In very lucky peaceful-breakup situations, being able to interact with your ex-partner and maybe even their current romantic interests without it being too horribly weird and unpleasant.

 

On the other hand, being over a person means experiencing no pull towards that person, romantic, emotional, or sexual. If your break up was messy, you can be over the person without being over the relationship. This is often when people turn to messy and unsatisfying rebound relationships. It is far far more important to be over the relationship, and some of us (me included) will just have to make peace with never being over the person, with the help of knowing that having a crush on someone does not necessarily have the power to make you miserable or destroy your friendship. 

Obsessive thinking and cravings

If you used a brain scanner to look at a person who has been recently broken up with, and then you used the same brain scanner to look at someone who recently sobered up from an addictive drug, their brain activity would be very similar. So similar, in fact, that some neurologists speculate that addiction hijacks the circuits for romantic obsession (there is a very plausible evolutionary reason for romantic obsession to exist in early human tribal societies. Addiction, less so). 

In cases of addiction/craving, you can't just force your mind to stop thinking thoughts you don't like. But you can change your relationship with those thoughts. Recognize when they happen. Identify them as a craving rather than a true need. Recognize that, when satisfied, cravings temporarily diminish and then grow stronger (you've rewarded your brain for that behavior). These are thoughts without substance. The impulse they drive you towards will increase, rather than decrease, unpleasant feelings. 

When I first broke up, I had a couple very unpleasant hours of rumination, thinking uncontrollably about the same topics over and over despite those topics being painful. At some point I realized that continuing to merely think about the break up was also addictive. My craving circuits just picked the one set of thoughts I couldn't argue against so that my brain could go on obsessively dwelling without me being able to pull a logic override. These thoughts SEEM like goal oriented thinking, they FEEL productive, but they are a wolf in sheep's clothing.

In my specific case, my brain was concern trolling me. Concern trolling on the internet is when someone expresses sympathy and concern while actually having ulterior motives (eg on a body-positive website, fat shaming with: "I'm so glad you're happy but I'm concerned that people will think less of you because of your weight"). In my case, I was worrying about my ex's depression and his state of mind, which are very hard thoughts to quash. Empathy and caring are good, right? And he really was going through a hard time. Maybe I should call and check up on him.... My brain was concern trolling me. 

Depending on how your relationship ended, your brain could be trolling in other ways. Flaming seems to be a popular set of unstoppable thoughts. If you can't argue with the thought that the jerk is a horrible person, then THAT is the easiest way for your brain's addictive circuits to happily go on obsessing about this break up. Nostalgia is also a popular option. If the memories were good, then it's hard to argue with those thoughts. If you're a well trained rationalist, you might notice that you are feeling confused and then burn up many brain cycles trying to resolve your confusion by making sense of the break up, even though it may not be something that makes rational sense. Your addictive circuits can even hijack good rationalist habits. Other common ruminations are problem solving, simulating possible futures, regret, and counter-factual thinking.

As I said, you can't force these parts of your brain to just shut up. That's not how craving works. But you can take away their power by recognizing that all your ruminating is just these circuits hijacking your normal thought process. Say to yourself "I'm feeling an urge to call and yell at him/her, but so what. It's just a meaningless craving."

What you lose

There is a great sense of loss that comes with the end of a relationship. For some people, it is a similar feeling to actually being in mourning. Revisiting memories becomes painful, things you used to do together are suddenly tinged with sadness. 

I found it helpful to think of my relationship as a book. A book with some really powerful life-changing passages in the early chapters, a good rising action, great characters. A book which made me a better person by reading it. But a book with a stupid deus ex machina ending that totally invalidated the foreshadowing in the best passages. Finishing the book can be frustrating and saddening, but the first chapters of the book still exist. Knowing that the ending sucks isn't going to stop the first chapters from being awesome and entertaining and powerful. And I could revisit those first chapters any time I liked. I could just read my favorite parts without needing to read the whole stupid ending.

You don't lose your memories. You don't lose your personal growth. Any gains you made while you were with someone, anything new that they introduced you to, or helped you to improve on, or nagged at you till you had a new better habit, you get to keep all of those. That show you used to watch together, it is still there and you still get to watch it and care about it without him/her. The bar you used to visit together is still there too. All those photos are still great pictures of both of you in interesting places. Depending on the situation of the break up, your mutual friends are still around. Even your ex still exists and is still the same person you liked before, and breaking up doesn't mean you'll never see them again unless that's what you guys want/need. 

The only thing you definitely lose at the end of a relationship is the future of that relationship. You are losing something that hasn't happened yet, something which never existed. The only thing you are losing is what you imagined someday having. It's something similar to the endowment effect: you assumed this future was yours so you assigned it a lot of value. But it never was yours, you've lost something which doesn't exist. It's still a painful experience, but realizing all of this helped me a lot. 

Which ideas from LW would you most like to see spread?

13 NancyLebovitz 18 May 2015 02:12PM

My favorite is that people get credit for updating based on evidence.

The more common reaction is for people to get criticized (by themselves and others) for not having known the truth sooner. 

Open Thread, May 18 - May 24, 2015

3 Gondolinian 18 May 2015 12:01AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

New LW Meetup: Oslo

4 FrankAdamek 15 May 2015 05:23PM

This summary was posted to LW Main on May 8th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

continue reading »

Subsuming Purpose, Part II: Solving the Solution

4 OrphanWilde 14 May 2015 07:25PM

Summary: It's easy to get caught up in solving the wrong problems, solving the problems with a particular solution instead of solving the actual problem.  You should pay very careful attention to what you are doing and why.

I'll relate a seemingly purposeless story about a video game to illustrate:

I was playing Romance of the Three Kingdoms some years ago, and was trying to build the perfect city.  (The one city I ruled, actually.)  Enemies kept attacking, and the need to recruit troops was slowing my population growth (not to mention deliberate sabotage by my enemies), so eventually I came to the conclusion that I would have to conquer the map in order to finish the job.  So I conquered the map.  And then the game ending was shown, after which, finally, I could return to improving cities.

The game ending, however, startled me out of continuing to play: My now emperor was asked by his people to improve the condition of things (as things were apparently terrible), and his response was that he needed to conquer the rest of Asia first, to ensure their security.

My initial response was outrage at how the game portrayed events, but I couldn't find a fault in "his" response; it was exactly what I had been doing.  Given the rest of Asia, indeed the rest of the world, that would be exactly what I would have done had the game continued past that point, given that threats to the peace I had established still existed.  I had already conquered enemies who had never directly threatened me, on the supposition that they would, and because they held tactically advantageous positions.

It was an excellent game which managed to point out that I have failed in my original purpose in playing the game.  My purpose was subsumed by itself, or more particularly, a subgoal.  I didn't set out to conquer the map.  I lost the game.  I achieved the game's victory conditions, yes, but failed my own.  The ending, the exact description of exactly how I had failed and how my reasoning led to a conclusion I would have dismissed as absurd when I began, was so memorable it still sticks in my mind, years later.

My original purpose was subsumed.  By what, exactly, however?

By the realities of the game I was playing, I could say, if I were to rationalize my behavior: I wanted to improve all the cities I owned, but at no point until I had conquered the entire map could I afford to.  At each point in the game, there was always one city that couldn't be reliably improved.  The AI didn't share my goals; responding to force with force, to sabotage with sabotage, offered no penalties to the AI or its purposes, only to mine.  But nevertheless, I had still abandoned my original goals.  The realities of the game didn't subsume my purpose, which was still achievable within its constraints.

The specific reasons my means subsumed my ends may be illustrative: I inappropriately generalized.  I reasoned as if my territory were an atomic unit.  The risks incurred at my borders were treated as being incurred across the whole of my territory.  I devoted my resources - in particular my time - into solving a problem which afflicted an ever-decreasing percentage of that territory.  But even realizing that I was incorrectly generalizing wouldn't have stopped me; I'd have reasoned that the edge cities would still be under the same threat, and I couldn't actually finish my task until I finished my current task first.

Maybe, once my imaginary video game emperor had finally finished conquering the world, he'd have finally turned to the task of improving things.  Personally, I imagine he tripped and died falling down a flight of stairs shortly after conquering imaginary-China, and all of his work was undone in the chaos that ensued, because it seems the more poetic end to me.

A game taught me a major flaw in my goal-oriented reasoning.

I don't know the name for this error, if it has a name; internally, I call it incidental problem fixation, getting caught up in solving the sub-problems that arise in trying to solve the original problem.  Since playing, I've been very careful, each time a new challenge comes up in the course of solving an overall issue, to re-evaluate my priorities, and to consider alternatives to my chosen strategy.  I still have something of an issue with this; I can't count the number of times I've spent a full workday on a "correct" solution to a technical issue (say, a misbehaving security library) that should have taken an hour.  But when I notice that I'm doing this, I'll step away, and stop working on the "correct" solution, and return to solving the problem I'm actually trying to solve, instead of getting caught up in all the incidental problems that arose in the attempt to implement the original solution.

ETA: Link to part 1: http://lesswrong.com/lw/e12/subsuming_purpose_part_1/

How to come to a rational belief about whether someone has a crush on you

-3 necate 14 May 2015 12:10PM

If you have a crush on someone, you usually want to find out if they have one on you too. In my opinion, outright asking them is often not a good solution, because if they don't have a crush on you yet, their knowing that you have one decreases the chance of it ever happening. This belief is based on what I read about the psychology of love. However, I don't really want to discuss the option of outright asking them in this thread, so I have not elaborated further on how I came to this belief.

The alternative to asking them is trying to interpret signals that they might give you. However, to know how many signals you need before you should believe that they are in love with you, you would need the prior. I have not been able to find anything about the prior probability of someone being in love with you. Therefore my idea is to do a survey in order to find out how likely it is that a person you know has a crush on you. The plan is to ask the person taking the survey how many people they know well enough to possibly have a crush on, and how many of those people they actually have a crush on.

I have created a survey for this and would be really happy if you would participate.

The next step would be to discuss how much certain signals a person can give you raise the probability of them having a crush on you. That part is quite difficult. I think probably the best way would be to check how your friends react to certain situations and what body language they show you, and then, if you find out someone has a crush on you, to look up what they did differently from people who are merely your friends. I am currently not in a good position to do this experiment, but if someone wants to try, or has results about this to share, please do so. However, I think this part is less important than finding the prior, because most people have at least a general idea about what certain signals mean from personal experience, while I at least have no idea at all what the prior might be.
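
(For what it's worth, here is a minimal sketch of this kind of update in odds form, in R; the prior and likelihood ratios below are invented placeholders, not survey results:)

prior <- 0.05                     # placeholder: "1 in 20 people I know has a crush on me"
prior_odds <- prior / (1 - prior)
# invented likelihood ratios, P(signal | crush) / P(signal | no crush):
lr <- c(eye_contact = 2.0, initiates_conversation = 1.5, remembers_details = 1.8)
posterior_odds <- prior_odds * prod(lr)
posterior_odds / (1 + posterior_odds)   # ~0.22 with these made-up numbers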

Less Wrong Podcast Queries

3 Cariyaga 14 May 2015 11:52AM

I have a friend for whom I wish to purchase the Less Wrong podcasts that are available on Castify, as they have difficulty reading articles of any length (due to a combination of reading comprehension issues and being easily distracted from it). I've a few questions before I start doing so, however.

First, regarding the quality of the audioblogs themselves: are they good quality for listening to? Moreover, is the material they cover comprehensible? I must confess, I've never been much of an audio learner myself, so I can't say for certain whether Less Wrong would translate well to such a format, but for those of you for whom that is not an issue, have you found the Less Wrong casts sufficiently understandable for a first-time listener? If not, is there any advice I might offer them in their listening?

Further, while my reading of Less Wrong was somewhat random, I'd like to know in which order I should provide those articles that are available on Castify. I may have to perform my own reading of some of the more necessary articles, if they are not present early in the casts (Knowing about Biases Can Hurt People, for instance, I feel should be one of the first posts they hear).

Thanks for any help you can offer.

 

LW survey: Effective Altruists and donations

17 gwern 14 May 2015 12:44AM

(Markdown source)

“Portrait of EAs I know”, su3su2u1:

But I note from googling for surveys that the median charitable donation for an EA in the Less Wrong survey was 0.

Yvain:

Two years ago I got a paying residency, and since then I’ve been donating 10% of my salary, which works out to about $5,000 a year. In two years I’ll graduate residency, start making doctor money, and then I hope to be able to donate maybe eventually as much as $25,000 - $50,000 per year. But if you’d caught me five years ago, I would have been one of those people who wrote a lot about it and was very excited about it but put down $0 in donations on the survey.

Data preparation:

set.seed(2015-05-13)
survey2013 <- read.csv("http://www.gwern.net/docs/lwsurvey/2013.csv", header=TRUE)
survey2013$EffectiveAltruism2 <- NA
s2013 <- subset(survey2013, select=c(Charity,Effective.Altruism,EffectiveAltruism2,Work.Status,
Profession,Degree,Age,Income))
colnames(s2013) <- c("Charity","EffectiveAltruism","EffectiveAltruism2","WorkStatus","Profession",
"Degree","Age","Income")
s2013$Year <- 2013
survey2014 <- read.csv("http://www.gwern.net/docs/lwsurvey/2014.csv", header=TRUE)
s2014 <- subset(survey2014, PreviousSurveys!="Yes", select=c(Charity,EffectiveAltruism,EffectiveAltruism2,
WorkStatus,Profession,Degree,Age,Income))
s2014$Year <- 2014
survey <- rbind(s2013, s2014)
# replace empty fields with NAs:
survey[survey==""] <- NA; survey[survey==" "] <- NA
# convert money amounts from string to number:
survey$Charity <- as.numeric(as.character(survey$Charity))
survey$Income <- as.numeric(as.character(survey$Income))
# both Charity & Income are skewed, like most monetary amounts, so log transform as well:
survey$CharityLog <- log1p(survey$Charity)
survey$IncomeLog <- log1p(survey$Income)
# age:
survey$Age <- as.integer(as.character(survey$Age))
# prodigy or no, I disbelieve any LW readers are <10yo (bad data? malicious responses?):
survey$Age <- ifelse(survey$Age >= 10, survey$Age, NA)
# convert Yes/No to boolean TRUE/FALSE:
survey$EffectiveAltruism <- (survey$EffectiveAltruism == "Yes")
survey$EffectiveAltruism2 <- (survey$EffectiveAltruism2 == "Yes")
summary(survey)
## Charity EffectiveAltruism EffectiveAltruism2 WorkStatus
## Min. : 0.000 Mode :logical Mode :logical Student :905
## 1st Qu.: 0.000 FALSE:1202 FALSE:450 For-profit work :736
## Median : 50.000 TRUE :564 TRUE :45 Self-employed :154
## Mean : 1070.931 NA's :487 NA's :1758 Unemployed :149
## 3rd Qu.: 400.000 Academics (on the teaching side):104
## Max. :110000.000 (Other) :179
## NA's :654 NA's : 26
## Profession Degree Age
## Computers (practical: IT programming etc.) :478 Bachelor's :774 Min. :13.00000
## Other :222 High school:597 1st Qu.:21.00000
## Computers (practical: IT, programming, etc.):201 Master's :419 Median :25.00000
## Mathematics :185 None :125 Mean :27.32494
## Engineering :170 Ph D. :125 3rd Qu.:31.00000
## (Other) :947 (Other) :189 Max. :72.00000
## NA's : 50 NA's : 24 NA's :28
## Income Year CharityLog IncomeLog
## Min. : 0.00 2013:1547 Min. : 0.000000 Min. : 0.000000
## 1st Qu.: 10000.00 2014: 706 1st Qu.: 0.000000 1st Qu.: 9.210440
## Median : 33000.00 Median : 3.931826 Median :10.404293
## Mean : 75355.69 Mean : 3.591102 Mean : 9.196442
## 3rd Qu.: 80000.00 3rd Qu.: 5.993961 3rd Qu.:11.289794
## Max. :10000000.00 Max. :11.608245 Max. :16.118096
## NA's :993 NA's :654 NA's :993
# lavaan doesn't like categorical variables and doesn't automatically expand out into dummies like lm/glm,
# so have to create the dummies myself:
survey$Degree <- gsub("2","two",survey$Degree)
survey$Degree <- gsub("'","",survey$Degree)
survey$Degree <- gsub("/","",survey$Degree)
survey$WorkStatus <- gsub("-","", gsub("\\(","",gsub("\\)","",survey$WorkStatus)))
library(qdapTools)
survey <- cbind(survey, mtabulate(strsplit(gsub(" ", "", as.character(survey$Degree)), ",")),
mtabulate(strsplit(gsub(" ", "", as.character(survey$WorkStatus)), ",")))
write.csv(survey, file="2013-2014-lw-ea.csv", row.names=FALSE)

Analysis:

survey <- read.csv("http://www.gwern.net/docs/lwsurvey/2013-2014-lw-ea.csv")
# treat year as factor for fixed effect:
survey$Year <- as.factor(survey$Year)
median(survey[survey$EffectiveAltruism,]$Charity, na.rm=TRUE)
## [1] 100
median(survey[!survey$EffectiveAltruism,]$Charity, na.rm=TRUE)
## [1] 42.5
# t-tests are inappropriate due to non-normal distribution of donations:
wilcox.test(Charity ~ EffectiveAltruism, conf.int=TRUE, data=survey)
## Wilcoxon rank sum test with continuity correction
##
## data: Charity by EffectiveAltruism
## W = 214215, p-value = 4.811186e-08
## alternative hypothesis: true location shift is not equal to 0
## 95% confidence interval:
## -4.999992987e+01 -1.275881408e-05
## sample estimates:
## difference in location
## -19.99996543
library(ggplot2)
qplot(Age, CharityLog, color=EffectiveAltruism, data=survey) + geom_point(size=I(3))
## https://i.imgur.com/wd5blg8.png
qplot(Age, CharityLog, color=EffectiveAltruism,
data=na.omit(subset(survey, select=c(Age, CharityLog, EffectiveAltruism)))) +
 geom_point(size=I(3)) + stat_smooth()
## https://i.imgur.com/UGqf8wn.png
# you might think that we can't treat Age linearly because this looks like a quadratic or
# logarithm, but when I fitted some curves, charity donations did not seem to flatten out
# appropriately, and the GAM/loess wiggly-but-increasing line seems like a better summary.
# Try looking at the asymptotes & quadratics split by group as follows:
#
## n1 <- nls(CharityLog ~ SSasymp(as.integer(Age), Asym, r0, lrc),
## data=survey[survey$EffectiveAltruism,], start=list(Asym=6.88, r0=-4, lrc=-3))
## n2 <- nls(CharityLog ~ SSasymp(as.integer(Age), Asym, r0, lrc),
## data=survey[!survey$EffectiveAltruism,], start=list(Asym=6.88, r0=-4, lrc=-3))
## with(survey, plot(Age, CharityLog))
## points(predict(n1, newdata=data.frame(Age=0:70)), col="blue")
## points(predict(n2, newdata=data.frame(Age=0:70)), col="red")
##
## l1 <- lm(CharityLog ~ Age + I(Age^2), data=survey[survey$EffectiveAltruism,])
## l2 <- lm(CharityLog ~ Age + I(Age^2), data=survey[!survey$EffectiveAltruism,])
## with(survey, plot(Age, CharityLog));
## points(predict(l1, newdata=data.frame(Age=0:70)), col="blue")
## points(predict(l2, newdata=data.frame(Age=0:70)), col="red")
#
# So I will treat Age as a linear additive sort of thing.

[Figure: 2013-2014 LW survey respondents: self-reported charity donation vs self-reported age, split by self-identifying as EA or not]
[Figure: likewise, but with GAM-smoothed curves for EA vs non-EA]

# for the regression, we want to combine EffectiveAltruism/EffectiveAltruism2 into a single measure, EA, so
# a latent variable in a SEM; then we use EA plus the other covariates to estimate the CharityLog.
library(lavaan)
model1 <- " # estimate EA latent variable:
 EA =~ EffectiveAltruism + EffectiveAltruism2
 CharityLog ~ EA + Age + IncomeLog + Year +
 # Degree dummies:
 None + Highschool + twoyeardegree + Bachelors + Masters + Other +
 MDJDotherprofessionaldegree + PhD. +
 # WorkStatus dummies:
 Independentlywealthy + Governmentwork + Forprofitwork +
 Selfemployed + Nonprofitwork + Academicsontheteachingside +
 Student + Homemaker + Unemployed
 "
fit1 <- sem(model = model1, missing="fiml", data = survey); summary(fit1)
## lavaan (0.5-16) converged normally after 197 iterations
##
## Number of observations 2253
##
## Number of missing patterns 22
##
## Estimator ML
## Minimum Function Test Statistic 90.659
## Degrees of freedom 40
## P-value (Chi-square) 0.000
##
## Parameter estimates:
##
## Information Observed
## Standard Errors Standard
##
## Estimate Std.err Z-value P(>|z|)
## Latent variables:
## EA =~
## EffectvAltrsm 1.000
## EffctvAltrsm2 0.355 0.123 2.878 0.004
##
## Regressions:
## CharityLog ~
## EA 1.807 0.621 2.910 0.004
## Age 0.085 0.009 9.527 0.000
## IncomeLog 0.241 0.023 10.468 0.000
## Year 0.319 0.157 2.024 0.043
## None -1.688 2.079 -0.812 0.417
## Highschool -1.923 2.059 -0.934 0.350
## twoyeardegree -1.686 2.081 -0.810 0.418
## Bachelors -1.784 2.050 -0.870 0.384
## Masters -2.007 2.060 -0.974 0.330
## Other -2.219 2.142 -1.036 0.300
## MDJDthrprfssn -1.298 2.095 -0.619 0.536
## PhD. -1.977 2.079 -0.951 0.341
## Indpndntlywlt 1.175 2.119 0.555 0.579
## Governmentwrk 1.183 1.969 0.601 0.548
## Forprofitwork 0.677 1.940 0.349 0.727
## Selfemployed 0.603 1.955 0.309 0.758
## Nonprofitwork 0.765 1.973 0.388 0.698
## Acdmcsnthtchn 1.087 1.970 0.551 0.581
## Student 0.879 1.941 0.453 0.650
## Homemaker 1.071 2.498 0.429 0.668
## Unemployed 0.606 1.956 0.310 0.757
##
## Intercepts:
## EffectvAltrsm 0.319 0.011 28.788 0.000
## EffctvAltrsm2 0.109 0.012 8.852 0.000
## CharityLog -0.284 0.737 -0.385 0.700
## EA 0.000
##
## Variances:
## EffectvAltrsm 0.050 0.056
## EffctvAltrsm2 0.064 0.008
## CharityLog 7.058 0.314
## EA 0.168 0.056
# simplify:
model2 <- " # estimate EA latent variable:
 EA =~ EffectiveAltruism + EffectiveAltruism2
 CharityLog ~ EA + Age + IncomeLog + Year
 "
fit2 <- sem(model = model2, missing="fiml", data = survey); summary(fit2)
## lavaan (0.5-16) converged normally after 55 iterations
##
## Number of observations 2253
##
## Number of missing patterns 22
##
## Estimator ML
## Minimum Function Test Statistic 70.134
## Degrees of freedom 6
## P-value (Chi-square) 0.000
##
## Parameter estimates:
##
## Information Observed
## Standard Errors Standard
##
## Estimate Std.err Z-value P(>|z|)
## Latent variables:
## EA =~
## EffectvAltrsm 1.000
## EffctvAltrsm2 0.353 0.125 2.832 0.005
##
## Regressions:
## CharityLog ~
## EA 1.770 0.619 2.858 0.004
## Age 0.085 0.009 9.513 0.000
## IncomeLog 0.241 0.023 10.550 0.000
## Year 0.329 0.156 2.114 0.035
##
## Intercepts:
## EffectvAltrsm 0.319 0.011 28.788 0.000
## EffctvAltrsm2 0.109 0.012 8.854 0.000
## CharityLog -1.331 0.317 -4.201 0.000
## EA 0.000
##
## Variances:
## EffectvAltrsm 0.049 0.057
## EffctvAltrsm2 0.064 0.008
## CharityLog 7.111 0.314
## EA 0.169 0.058
# simplify even further:
summary(lm(CharityLog ~ EffectiveAltruism + EffectiveAltruism2 + Age + IncomeLog, data=survey))
## ...Residuals:
## Min 1Q Median 3Q Max
## -7.6813410 -1.7922422 0.3325694 1.8440610 6.5913961
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -2.06062203 0.57659518 -3.57378 0.00040242
## EffectiveAltruismTRUE 1.26761425 0.37515124 3.37894 0.00081163
## EffectiveAltruism2TRUE 0.03596335 0.54563991 0.06591 0.94748766
## Age 0.09411164 0.01869218 5.03481 7.7527e-07
## IncomeLog 0.32140793 0.04598392 6.98957 1.4511e-11
##
## Residual standard error: 2.652323 on 342 degrees of freedom
## (1906 observations deleted due to missingness)
## Multiple R-squared: 0.2569577, Adjusted R-squared: 0.2482672
## F-statistic: 29.56748 on 4 and 342 DF, p-value: < 2.2204e-16

Note these increases are on a log-dollars scale.
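
One way to read the fit2 coefficients (my own gloss): since CharityLog = log1p(Charity), exponentiating a coefficient gives an approximate multiplicative effect on donations, holding the other covariates fixed (approximate because log1p only tracks log for amounts well above $1):

exp(1.770)   # ~5.9: the EA latent variable is associated with roughly 6x the donations
exp(0.085)   # ~1.09: about 9% more donated per additional year of age
exp(0.241)   # ~1.27: about 27% more donated per unit increase in log-income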

[link] Bayesian inference with probabilistic population codes

8 Gunnar_Zarncke 13 May 2015 09:11PM

Bayesian inference with probabilistic population codes by Wei Ji Ma et al 2006

Recent psychophysical experiments indicate that humans perform near-optimal Bayesian inference in a wide variety of tasks, ranging from cue integration to decision making to motor control. This implies that neurons both represent probability distributions and combine those distributions according to a close approximation to Bayes’ rule. At first sight, it would seem that the high variability in the responses of cortical neurons would make it difficult to implement such optimal statistical inference in cortical circuits. We argue that, in fact, this variability implies that populations of neurons automatically represent probability distributions over the stimulus, a type of code we call probabilistic population codes. Moreover, we demonstrate that the Poisson-like variability observed in cortex reduces a broad class of Bayesian inference to simple linear combinations of populations of neural activity. These results hold for arbitrary probability distributions over the stimulus, for tuning curves of arbitrary shape and for realistic neuronal variability.

Note that "humans perform near-optimal Bayesian inference" refers to the integration of information - not conscious symbolic reasoning. Nonetheless I think this is of interest here. 

[Link] Promoting rationality in higher education media channels

4 Gleb_Tsipursky 13 May 2015 04:51PM

Glad to share an op-ed piece I published in one of the premier higher education media channels, on how I as a professor used rationality-informed strategies to deal with mental illness in the classroom. This is part of my broader project to promote rationality to a broad audience and thus raise the sanity waterline, so good news on that front. I'd also be glad to hear your advice about other strategies for promoting rationality broadly, and about any collaboration you may be interested in doing around such public outreach.

LW should go into mainstream academia?

5 Marlon 13 May 2015 01:23PM

I feel that a lot of what's in LW (written by Eliezer or others) should be in mainstream academia. Not necessarily the most controversial views (the insistence on the MW hypothesis, cryonics, the FAI ...), but a lot of the work on overcoming biases should be there, be criticized there and be improved there. 

For example, a few debiasing methods, or a more formal explanation of LW's peculiar solution to free will (these are only examples; there is more).

I don't really get why LW's content isn't in mainstream academia, to be honest.

I get that peer review is far from perfect (though it's still the best we have, and post-publication peer review is also improving; see PubPeer), and that some would too readily dismiss LW's content. But not all: many would play by the rules and provide genuine criticisms during peer review (which would of course lead to revisions of the content), along with criticisms after publication. This is, in my opinion, something that has to happen.

LW, Eliezer, etc., can't stay at the "crank" level, not playing by the rules, publishing books but no papers. Blogs are indeed faster and reach more people, but I'm not arguing for publishing only in academia. Blogs can (and should) continue.

Tell me what you think, as I seem to have missed something with this topic.

[Link] Death with Dignity by Scott Adams

2 Gunnar_Zarncke 12 May 2015 09:34PM

Over at Scott Adams' Blog you can find a very fine example of using the 'Rationality Engine' to solve the social problem of assisted dying.

Wild Moral Dilemmas

17 sixes_and_sevens 12 May 2015 12:56PM

[CW: This post talks about personal experience of moral dilemmas. I can see how some people might be distressed by thinking about this.]

Have you ever had to decide between pushing a fat person onto some train tracks or letting five other people get hit by a train? Maybe you have a more exciting commute than I do, but for me it's just never come up.

In spite of this, I'm unusually prepared for a trolley problem, in a way I'm not prepared for, say, being offered a high-paying job at an unquantifiably-evil company. Similarly, if a friend asked me to lie to another friend about something important to them, I probably wouldn't carry out a utilitarian cost-benefit analysis. It seems that I'm happy to adopt consequentialist policy, but when it comes to personal quandaries where I have to decide for myself, I start asking myself about what sort of person this decision makes me. What's more, I'm not sure this is necessarily a bad heuristic in a social context.

It's also noteworthy (to me, at least) that I rarely experience moral dilemmas. They just don't happen all that often. I like to think I have a reasonably coherent moral framework, but do I really need one? Do I just lead a very morally-inert life? Or have abstruse thought experiments in moral philosophy equipped me with broader principles under which would-be moral dilemmas are resolved before they reach my conscious deliberation?

To make sure I'm not giving too much weight to my own experiences, I thought I'd put a few questions to a wider audience:

- What kind of moral dilemmas do you actually encounter?

- Do you have any thoughts on how much moral judgement you have to exercise in your daily life? Do you think this is a typical amount?

- Do you have any examples of pedestrian moral dilemmas to which you've applied abstract moral reasoning? How did that work out?

- Do you have any examples of personal moral dilemmas on a Trolley Problem scale that nonetheless happened?

The Username/password anonymous account is, as always, available.

Thoughts on minimizing designer baby drama

15 John_Maxwell_IV 12 May 2015 11:22AM

I previously wrote a post hypothesizing that inter-group conflict is more common when most humans belong to readily identifiable, discrete factions.

This seems relevant to the recent human gene editing advance.  Full human gene editing capability probably won't come soon, but this got me thinking anyway.  Consider the following two scenarios:

1. Designer babies become socially acceptable and widespread some time in the near future.  Because our knowledge of the human genome is still maturing, they initially aren't that much different than regular humans.  As our knowledge matures, they get better and better.  Fortunately, there's a large population of "semi-enhanced" humans from the early days of designer babies to keep the peace between the "fully enhanced" and "not at all enhanced" factions.

2. Designer babies are considered socially unacceptable in many parts of the world.  Meanwhile, the technology needed to produce them continues to advance.  At a certain point people start having them anyway.  By this point the technology has advanced to the point where designer babies clearly outclass regular babies at everything, and there's a schism between "fully enhanced" and "not at all enhanced" humans.

Of course, there's another scenario where designer babies just never become widespread.  But that seems like an unstable equilibrium given the 100+ sovereign countries in the world, each with their own set of laws, and the desire of parents everywhere to give birth to the best kids possible.

We already see tons of drama related to the current inequalities between individuals, especially inequality that's allegedly genetic in origin.  Designer babies might shape up to be the greatest internet flame war of this century.  This flame war could spill over into real-world violence.  But since one of the parties has not yet arrived at the flame war, maybe we can prepare.

One way to prepare might be differential technological development.  In particular, maybe it's possible to decrease the cost of gene editing/selection technologies while retarding advances in our knowledge of which genes contribute to intelligence.  This could allow designer baby technology to become socially acceptable and widespread before "fully enhanced" humans were possible.  Just as with emulations, a slow societal transition seems preferable to a fast one.

Other ideas (edit: speculative!): extend the benefits of designer babies to everyone for free regardless of their social class.  Push for mandatory birth control technology so unwanted and therefore unenhanced babies are no longer a thing.  (Imagine how lousy it would be to be born as an unwanted child in a world where everyone was enhanced except you.)  Require designer babies to possess genes for compassion, benevolence, and reflectiveness by law, and try to discover those genes before we discover genes for intelligence.  (Researching the genetic basis of psychopathy to prevent enhanced psychopaths also seems like a good idea.)  Regulate the modification of genes like height if game theory suggests allowing arbitrary modifications to them would be a bad idea.

I don't know very much about the details of these technologies, and I'm open to radically revising my views if I'm missing something important.  Please tell me if there's anything I got wrong in the comments.

A heuristic for predicting minor depression in others and myself, and related things

1 DeVliegendeHollander 12 May 2015 08:24AM

Summary

Look at how you or other people walk. Then go a bit meta.

Disclaimer

This post is probably not high quality enough to deserve to be top level purely on its qualitative merits. However, I think the sheer importance of the issue for human well-being makes it so. When voting, please consider the importance / potential utility of the whole discussion, not just the quality of the post.

The problem

Minor depression is not really an accurately defined, easily recognizable thing. First of all, there are people with hard, boring, or otherwise unsatisfactory lives who are unhappy about it; how can one tell this normal, justifiable unhappiness from minor depression? Especially since therapists often say that having good reasons to be depressed still counts as depression, at which point you don't really know whether to focus on fixing your mind or fixing your life. Then there are a lot of things that don't even register as direct sadness or unhappiness but are considered part of, or related to, depression, such as lethargy/low energy/low motivation, irritability/aggressiveness, eating disorders, and so on. How could you tell if you are just a bad-tempered lazy glutton or depressed? And finally, don't cultural expectations play a role, such as how Americans tend to be optimistic and expect a happy, pursue-shiny-things life, while e.g. Finns do not?

Of course there are clinical diagnosis methods, but people will ask a therapist for a diagnosis only when they already suspect something is wrong. They must think, "Jolly gee, I really should feel better than I do now; it is not really normal to feel this way; better ask a shrink!" But often it is not so. Often it is more like, "My mind is normal. It is life that sucks." So by what heuristic could you tell whether there is something wrong with you or with other people?

Basis

This is a heuristic I built mainly on observational correlations plus some psychological parallels. It has nothing to do with accepted medical science or expert opinion. My goal isn't so much to convince you that this is a good heuristic as to open an open-ended discussion: to ask whether it seems like a good one, and to invite you to propose other methods.

How I think non-depressed men walk

"Having a spring in the step." This old saying is IMHO surprisingly apt. I like this drawing  - NOT because I think depression is based on T levels, but I think this cartoonishly over-exaggerated body language is fairly expressive of the idea. For all I know this seems more of a dopamine thing, eagerness, looking forward not testosterone.

It seems to me non-depressed men push themselves forward with their rear leg, heels raised, calves engaged, almost as if jumping forward. This is the "spring" in the step; the actual spring is the rear-leg calf muscle. Often this is accompanied by a movement of the arms while walking. A slight rocking or swaying of the chest / shoulders (NOT the hips) may also be part of it, but I think it is less relevant. The general message / feel is "I'm so eager to tackle challenges! That's fun!"

Psychologically, I think all this eagerly-looking-forward-to-challenges spring in the step means a mindset where you are not afraid of the future, but not because you think it will be smooth sailing, but because you are confident in yourself to be able to tackle challenges and even enjoy doing so. This seems like a healthy mindset.

How I think depressed men walk

Dragging feet. Dragging a slouched, sack-like, tension-free upper body. Leaning forward. Head down. Shoulders pulled up, hunched to protect the neck, engaging the upper trapezius muscles. Chronic pain in the upper traps (from their constant engagement), such that having your upper traps massaged feels SO good, may be a predictive sign of it. Comes across as embarrassed, scolded-boy body language.

Another way of walking I noticed on myself, which probably counts as depressed, is the duck-walk. The movement is started by the upper body slightly "falling" forward, the center of gravity moving ahead, and then "catching" the fall by sticking a leg forward; the foot hits the ground flat, not with the front part of the foot but with the whole foot, like a duck's. Basically your heels are almost never raised and your calves are not engaged much. This would be difficult with a springy step, i.e. pushing forward with the rear leg (you would have to raise a heel for that), but possible if you fall forward and catch, fall forward and catch. Often the feet are not raised high (related verbs: to scuff, to shuffle).

How I think non-depressed women walk

Generally speaking, I use the same heuristic for women who seem like the "one of the boys" type (i.e. those who wear comfortable sports shoes, focus on career goals rather than seducing men, etc.).

But this clearly does not work with all women: that springy step, for example, is pretty much impossible in stilettos. Rather, I think non-depressed women often tend to sway the hips. It is an unconscious enjoyment of their own femininity and sexiness, not a show put on for the sake of men.

I don't really have clear ideas of how depressed women walk, all I can offer is not like the above. When both the eager spring and the sexy hip sway are missing, it may be a sign.

For people of non-binary gender and other special cases: again all I can offer is that if you are non-depressed, you probably have either the eager spring or the hip sway.

Am I putting the bar too high? False positives?

Is it possible that this heuristic is too "strict"? While I think these heuristics are generally true for people who are in excellent emotional shape (feeling confident, loving a challenge, feeling sexy, etc.), it is possible that this bar is higher than the waterline for depression: some people may not be depressed and yet fall below this line, having less confidence, less eager and happy expectation, or less self-conscious sexiness.

Essentially I think my method does not really have many false negatives, but could possibly yield false positives.

Have you seen many cases that would count as false positives?

Meta: why is minor depression so difficult to tell / diagnose accurately?

There are clinically made checklists, but they sound like a collection of unrelated things. Could the same thing really cause you to sleep too much or not enough, eat too much or not enough? Doesn't it sound like Selling Nonapples? Putting everybody who does not have perfectly normal sleeping or eating habits into one common category called depression?

For example, in the West most people see depression as "the blues", i.e. some form of sadness. But often people don't report feeling sad; they report being very lethargic and lacking energy and motivation, and that, too, is often seen as depression. Some people are just negative and bitter and don't enjoy anything, and yet they don't see it as their own sadness but more like "life is hard". I guess in both cases it is more like internalized sadness: considering being sad a normal thing and not really expecting to feel good. (This may be the case for me and for surprisingly many people in my family / relatives. A life-is-tough, survivalist ethos, not a fun ethos.)

Then you go outside the West and you find even more different things. I cannot find my source anymore, but I remember a story that in a culture like Mali's, women generally don't express their emotions and are not conscious of them, and there depression is diagnosed through physical symptoms like chest pain.

Is minor depression an apple or a nonapple? A thing, one thing, or a generic "anything but normal happiness" bin?

I think my walking heuristic does predict something, and that something is probably close enough to the idea of minor depression. But whether it is too broad a tool with many false positives, or whether it predicts only a narrowly specific kind of depression, I cannot really tell; basically I am asking you here whether it matches your experiences or not.

What are your heuristics? What would be an easy heuristic with a low false-positive rate?

P.S. Researchers found a link in the reverse direction: walking in a happy or depressed style _causes_ mood changes. The article seems to assume everybody knows what walking in a happy or depressed style means; in fact, this is what I am trying to find out here!

P.P.S. I know I suck at writing, so let me try to reformulate the main point a different way: we know people cannot be happy all the time, and often have such an unsatisfying life that they are rarely happy. How can we find the thin line between normal, common unhappiness from life dissatisfaction (a hard or boring life) and minor depression? Can walking style be used as a good predictor of exactly this thin line?

Open Thread, May 11 - May 17, 2015

3 Gondolinian 11 May 2015 12:16AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Bragging Thread May 2015

6 Morendil 10 May 2015 01:25PM

Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.

Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesome things they have done. Not "will do". Not "are working on". Have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.

So, what's the coolest thing you've done this month?

(Previous Bragging Thread)

Looking to restart Madison LW meetups, in need of regulars

5 wobster109 10 May 2015 03:20AM

Hi everyone,

We haven't been having regular meetups in Madison, WI for a while (as far as I'm aware), so I'd love to get those going again! Organizing is actually terrifying for me: what if only one person comes, and that person is disappointed? So I'm looking for regulars. All you have to do is commit to attending one or two events a month, things like nature hikes, study halls, and brunches. I'll provide food, drink, optional cats for petting, and transportation with enough advance notice. Please email me if you're interested (wobster0109@gmail.com).

Thanks a bunch, have a fun weekend!

The Mr. Hyde of Oxytocin

4 theowl 10 May 2015 12:42AM

What comes to mind when you hear the word 'oxytocin'? Is it 'love', 'cuddle hormone', 'bliss'? If so, you may be more aware of the Dr. Jekyll of oxytocin than of the Mr. Hyde. Oxytocin, like almost every biochemical molecule, is hormetic: it confers positive effects in one context, but negative effects in another. In the case of oxytocin, a person with a secure attachment style interacting with a familiar group of people that he/she likes will experience the positive effects of oxytocin. However, someone with an anxious attachment style interacting with a group of people that he/she does not yet fully feel trusting and familiar with will experience the negative effects. Why does the same molecule produce pro-social effects for one person, yet anti-social effects for another?

Oxytocin redirects more attentional resources towards noticing social stimuli. This increase in the salience of social information enhances the ability to detect expressions, recognize faces, and other social cues. The effect of increased social cognitive abilities is constrained by personality traits and situational context, resulting in either anti-social or pro-social behavior.

Oxytocin also promotes more interest in social cues by increasing affiliative motivation, a desire to get along with others. The increase in affiliative motivation results in pro-social behavior if the person already tends towards having an interest in bonding with people outside their close friend circle. However, an increase in affiliative motivation for those with anxious attachment styles results in a stronger pursuit to feel closer to only the person he/she is attached to.

A couple, Tom and Mary, have just moved to a new town and are attending their first service at a new church. Tom has a secure attachment style and isn’t prone to social anxieties. Tom is optimistic, has a positive bias, is generally content, and sees people as good, trusting, and friendly. Mary has an anxious attachment style, a negative bias, social anxiety, baseline mood neutral, and sees people as potential threats, competitors, untrustworthy, selfish, and egotistical. During the service, Tom and Mary’s oxytocin levels increase by being in a community. As a result of their different dispositions, Tom exhibits the Dr. Jekyll of oxytocin, whereas Mary exhibits the Mr. Hyde.

At the end of the service, Mary determines that she doesn’t like the church, whereas Tom thinks it is perfect. Mary felt that the people were judgmental and that they didn’t like her and Tom. Tom felt that the people were friendly, accepting, and eager for them to join.

Most social cues are ambiguous. A person's character traits are instrumental in interpreting the cues as negative or positive. Tom is more likely to interpret facial expressions as positive, whereas Mary sees them as negative. Tom interprets neutral expressions to indicate acceptance, kindness, and friendliness. Mary sees neutral expressions as judgmental and unkind. This creates a fear of rejection and a feeling of threat, and propagates a negative bias.

The increase in oxytocin leads to quicker detection and interpretation of facial expressions. Interpreting inchoate facial expressions fosters interpretations based on expectations versus what is actually intended. A person is starting to smile, but before the smile is developed, Mary believes that the person is about to laugh and ridicule her. Mary then scowls at her, turning what was going to be a smile into a negative expression. Tom interprets the inchoate expression as a smile, smiles, and turns the inchoate expression into a genuine smile.

Oxytocin amplifies one’s character traits of pro-social or anti-social tendencies. Oxytocin does increase the feelings of bonding for all, but in different ways. People with pro-social tendencies will feel closer to their communities and greater circle of friends. People with anti-social tendencies will just feel closer to their close circle of friends and people they already trust.

Cross-posted from my blog: https://evolvingwithtechnology.wordpress.com

References:

http://dept.psych.columbia.edu/~kochsner/pdf/Bartz_et_al_2011_Social_oxytocin.pdf

http://www.attachedthebook.com/about-the-book/ by Amir Levine and Rachel Heller.

Concept Safety: World-models as tools

5 Kaj_Sotala 09 May 2015 12:07PM

I'm currently reading through some relevant literature in preparation for my FLI grant proposal on the topic of concept learning and AI safety. I figured that I might as well write down the research ideas I get while doing so, so as to get some feedback and clarify my thoughts. I will be posting these in a series of "Concept Safety"-titled articles.

The AI in the quantum box

In the previous post, I discussed the example of an AI whose concept space and goals were defined in terms of classical physics, which then learned about quantum mechanics. Let's elaborate on that scenario a little more.

I wish to zoom in on a certain assumption that I've noticed in previous discussions of these kinds of examples. Although I couldn't track down an exact citation right now, I'm pretty confident that I've heard the QM scenario framed as something like "the AI previously thought in terms of classical mechanics, but then it finds out that the world actually runs on quantum mechanics". The key assumption being that quantum mechanics is in some sense more real than classical mechanics.

This kind of an assumption is a natural one to make if someone is operating on an AIXI-inspired model of AI. Although AIXI considers an infinite amount of world-models, there's a sense in which AIXI always strives to only have one world-model. It's always looking for the simplest possible Turing machine that would produce all of the observations that it has seen so far, while ignoring the computational cost of actually running that machine. AIXI, upon finding out about quantum mechanics, would attempt to update its world-model into one that only contained QM primitives and to derive all macro-scale events right from first principles.

No sane design for a real-world AI would try to do this. Instead, a real-world AI would take advantage of scale separation. This refers to the fact that physical systems can be modeled on a variety of different scales, and it is in many cases sufficient to model them in terms of concepts that are defined in terms of higher-scale phenomena. In practice, the AI would have a number of different world-models, each of them being applied in different situations and for different purposes.

Here we get back to the view of concepts as tools, which I discussed in the previous post. An AI that was doing something akin to reinforcement learning would come to learn the kinds of world-models that gave it the highest rewards, and to selectively employ different world-models based on what was the best thing to do in each situation.

As a toy example, consider an AI that can choose to run a low-resolution or a high-resolution psychological model of someone it's interacting with, in order to predict their responses and please them. Say the low-resolution model takes a second to run and is 80% accurate, while the high-resolution model takes five seconds to run and is 95% accurate. Which model gets chosen will depend on the cost matrix: the value of making a correct prediction, the cost of making a false one, and the consequence of making the other person wait an extra four seconds before each of the AI's replies; see the sketch below.
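
A minimal sketch of that comparison as an expected-utility calculation. The accuracies and runtimes come from the toy example; the payoff values and the per-second waiting cost are made-up assumptions for illustration:

# expected utility of running a psychological model for one reply
model_value <- function(accuracy, runtime_s,
                        v_correct=1, v_wrong=-3, wait_cost_per_s=0.1) {
  accuracy * v_correct + (1 - accuracy) * v_wrong - wait_cost_per_s * runtime_s
}
model_value(accuracy=0.80, runtime_s=1)  #  0.10 (low-resolution model)
model_value(accuracy=0.95, runtime_s=5)  #  0.30 (high-resolution model wins here)
# doubling the cost of keeping the person waiting flips the choice:
model_value(accuracy=0.95, runtime_s=5, wait_cost_per_s=0.2)  # -0.20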

We can now see that a world-model being the most real, i.e. making the most accurate predictions, doesn't automatically mean that it will be used. It also needs to be fast enough to run, and the predictions need to be useful for achieving something that the AI cares about.

World-models as tools

From this point of view, world-models are literally tools just like any other. Traditionally in reinforcement learning, we would define the value of a policy $\pi$ in state $s$ as the expected reward given the state $s$ and the policy $\pi$,

$$V^{\pi}(s) = \mathbb{E}[R \mid s, \pi],$$

but under the "world-models are tools" perspective, we need to also condition on the world-model $m$:

$$V^{\pi}(s, m) = \mathbb{E}[R \mid s, \pi, m].$$

We are conditioning on the world-model in several distinct ways.

First, there is the expected behavior of the world as predicted by world-model m. A world-model over the laws of social interaction would do poorly at predicting the movement of celestial objects, if it could be applied to them at all. Different predictions of behavior may also lead to differing predictions of the value of a state. This is described by the equation above.

Second, there is the expected cost of using the world-model. Using a more detailed world-model may be more computationally expensive, for instance. One way of interpreting this in a classical RL framework would be that using a specific world-model will place the agent in a different state than using some other world-model would. We might describe this by saying that, in addition to choosing its next action $a$ on each time-step, the agent also needs to choose the world-model $m$ which it will use to analyze its next observations. This will be one of the inputs for the transition function $T$ to the next state.

Third, there is the expected behavior of the agent using world-model $m$. An agent with different beliefs about the world will act differently in the future: this means that the future policy $\pi$ actually depends on the chosen world-model.

Some very interesting questions pop up at this point. Your currently selected world-model is what you use to evaluate your best choices for the next step... including the choice of what world-model to use next. So whether or not you're going to switch to a different world-model for evaluating the next step depends on whether your current world-model says that a different world-model would be better in that step.

We have not fully defined what exactly we mean by "world-models" here. Previously I gave the example of a world-model over the laws of social interaction, versus a world-model over the laws of physics. But a world-model over the laws of social interaction, say, would not have an answer to the question of which world-model to use for things it couldn't predict. So one approach would be to say that we actually have some meta-model over world-models, telling us which is the best to use in what situation.

On the other hand, it does also seem like humans often use a specific world-model and its predictions to determine whether to choose another world-model. For example, in rationalist circles you often see arguments along the lines of, "self-deception might give you extra confidence, but it introduces errors into your world-model, and in the long term those are going to be more harmful than the extra confidence is beneficial". Here you see an implicit appeal to a world-model which predicts an accumulation of false beliefs with some specific effects, as well as predicting the extra self-esteem and its effects. But this kind of analysis incorporates very specific causal claims from various (e.g. psychological) models, which are models over the world rather than just being part of some general meta-model over models. Notice also that the example analysis takes into account the way that having a specific world-model affects the state transition function: it assumes that a self-deceptive model may land us in a state where we have higher self-esteem.

It's possible to get stuck in one world-model: for example, a strongly non-reductionist model evaluating the claims of a highly reductionist one might think it obviously crazy, and vice versa. So it seems that we do need something like a meta-evaluation function. Otherwise it would be too easy to get stuck in one model which claimed that it was the best one in every possible situation, and never agreed to "give up control" in favor of another one.

One possibility for such a thing would be a relatively model-free learning mechanism, which just kept track of the rewards accumulated when using a particular model in a particular situation. It would then bias model selection towards the model that had been most successful so far; a sketch of this follows below.
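
As a hedged sketch of what such a mechanism might look like: a simple average-reward bandit over world-models, with occasional exploration so that no single model can monopolize selection (the reward distributions here are made up):

# track the average reward obtained when using each world-model,
# and usually pick the model that has been most successful so far
n_models <- 2
counts  <- rep(0, n_models)
means   <- rep(0, n_models)
epsilon <- 0.1  # small chance of trying a model other than the current favorite

choose_model <- function() {
  if (runif(1) < epsilon) sample(n_models, 1) else which.max(means)
}
update <- function(m, reward) {
  counts[m] <<- counts[m] + 1
  means[m]  <<- means[m] + (reward - means[m]) / counts[m]  # incremental mean
}

for (t in 1:1000) {
  m <- choose_model()
  reward <- rnorm(1, mean=c(0.2, 0.5)[m])  # here model 2 happens to be better
  update(m, reward)
}
means  # selection has become biased towards the more rewarding model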

Human neuroscience and meta-models

We might be able to identify something like this in humans, though this is currently very speculative on my part. Action selection is carried out in the basal ganglia: different brain systems send the basal ganglia "bids" for various actions. The basal ganglia then chooses which actions to inhibit or disinhibit (by default, everything is inhibited). The basal ganglia also implements reinforcement learning, selectively strengthening or weakening the connections associated with a particular bid and context when a chosen action leads to a higher or lower reward than was expected. It seems that in addition to choosing between motor actions, the basal ganglia also chooses between different cognitive behaviors, likely even thoughts.

If action selection and reinforcement learning are normal functions of the basal ganglia, it should be possible to interpret many of the human basal ganglia-related disorders in terms of selection malfunctions. For example, the akinesia of Parkinson's disease may be seen as a failure to inhibit tonic inhibitory output signals on any of the sensorimotor channels. Aspects of schizophrenia, attention deficit disorder and Tourette's syndrome could reflect different forms of failure to maintain sufficient inhibitory output activity in non-selected channels. Consequently, insufficiently inhibited signals in non-selected target structures could interfere with the output of selected targets (expressed as motor/verbal tics) and/or make the selection system vulnerable to interruption from distracting stimuli (schizophrenia, attention deficit disorder). The opposite situation would be where the selection of one functional channel is abnormally dominant thereby making it difficult for competing events to interrupt or cause a behavioural or attentional switch. Such circumstances could underlie addictive compulsions or obsessive compulsive disorder. (Redgrave 2007)

Although I haven't seen a paper presenting evidence for this particular claim, it seems plausible to assume that humans similarly come to employ new kinds of world-models based on the extent to which using a particular world-model in a particular situation gives them rewards. When a person is in a situation where they might think in terms of several different world-models, there will be neural bids associated with mental activities that recruit the different models. Over time, the bids associated with the most successful models will become increasingly favored. This is also compatible with what we know about e.g. happy death spirals and motivated stopping: people will tend to have the kinds of thoughts which are rewarding to them.

The physicist and the AI

In my previous post, when discussing the example of the physicist who doesn't jump out of the window when they learn about QM and find out that "location" is ill-defined:

The physicist cares about QM concepts to the extent that the said concepts are linked to things that the physicist values. Maybe the physicist finds it rewarding to develop a better understanding of QM, to gain social status by making important discoveries, and to pay their rent by understanding the concepts well enough to continue to do research. These are some of the things that the QM concepts are useful for. Likely the brain has some kind of causal model indicating that the QM concepts are relevant tools for achieving those particular rewards. At the same time, the physicist also has various other things they care about, like being healthy and hanging out with their friends. These are values that can be better furthered by modeling the world in terms of classical physics. [...]

A part of this comes from the fact that the physicist's reward function remains defined over immediate sensory experiences, as well as values which are linked to those. Even if you convince yourself that the location of food is ill-defined and you thus don't need to eat, you will still suffer the negative reward of being hungry. The physicist knows that no matter how they change their definition of the world, that won't affect their actual sensory experience and the rewards they get from that.

So to prevent the AI from leaving the box by suitably redefining reality, we have to somehow find a way for the same reasoning to apply to it. I haven't worked out a rigorous definition for this, but it needs to somehow learn to care about being in the box in classical terms, and realize that no redefinition of "location" or "space" is going to alter what happens in the classical model. Also, its rewards need to be defined over models to a sufficient extent to avoid wireheading (Hibbard 2011), so that it will think that trying to leave the box by redefining things would count as self-delusion, and not accomplish the things it really cared about. This way, the AI's concept for "being in the box" should remain firmly linked to the classical interpretation of physics, not the QM interpretation of physics, because it's acting in terms of the classical model that has always given it the most reward. 

There are several parts to this.

1. The "physicist's reward function remains defined over immediate sensory experiences". Them falling down and breaking their leg is still going to hurt, and they know that this won't be changed no matter how they try to redefine reality.

2. The physicist's value function also remains defined over immediate sensory experiences. They know that jumping out of a window and ending up with all the bones in their body being broken is going to be really inconvenient even if you disregarded the physical pain. They still cannot do the things they would like to do, and they have learned that being in such a state is non-desirable. Again, this won't be affected by how they try to define reality.

We now have a somewhat better understanding of what exactly this means. The physicist has spent their entire life living in the classical world, and obtained nearly all of their rewards by thinking in terms of the classical world. As a result, using the classical model for reasoning about life has become strongly selected for. Also, the physicist's classical world-model predicts that thinking in terms of that model is a very good thing for surviving, and that trying to switch to a QM model where location was ill-defined would be a very bad thing for the goal of surviving. On the other hand, thinking in terms of exotic world-models remains a rewarding thing for goals such as obtaining social status or making interesting discoveries, so the QM model does get more strongly reinforced in that context and for that purpose.

Getting back to the question of how to make the AI stay in the box, ideally we could mimic this process, so that the AI would initially come to care about staying in the box. Then when it learns about QM, it understands that thinking in QM terms is useful for some goals, but if it were to make itself think in purely QM terms, that would cause it to leave the box. Because it is thinking mostly in terms of a classical model, which says that leaving the box would be bad (analogous to the physicist thinking mostly in terms of the classical model which says that jumping out of the window would be bad), it wants to make sure that it will continue to think in terms of the classical model when it's reasoning about its location.

The File Drawer Effect and Conformity Bias (Election Edition)

30 Salemicus 08 May 2015 04:51PM

As many of you may be aware, the UK general election took place yesterday, resulting in a surprising victory for the Conservative Party. The pre-election opinion polls predicted that the Conservatives and Labour would be roughly equal in terms of votes cast, with perhaps a small Conservative advantage leading to a hung parliament; instead the Conservatives got 36.9% of the vote to Labour's 30.4%, and won the election outright.

There has already been a lot of discussion about why the polls were wrong, from methodological problems to incorrect adjustments. But perhaps more interesting is the possibility that the polls were right! For example, Survation did a poll on the evening before the election, which predicted the correct result (Conservatives 37%, Labour 31%). However, that poll was never published because the results seemed "out of line." Survation didn't want to look silly by breaking with the herd, so they just kept quiet about their results. Naturally this makes me wonder about the existence of other unpublished polls with similar readings.

This seems to be a case of two well-known problems colliding with devastating effect. Conformity bias caused Survation to ignore the data and go with what they "knew" to be the case (for which they have now paid dearly). And then the file drawer effect meant that the generally available data was skewed, misleading third parties. The scientific thing to do is to publish all data, including "outliers", both so that information can change over time rather than stay anchored, and to avoid artificially compressing the variance (see the simulation sketch below). Interestingly, the exit poll, which had a methodology agreed beforehand and was committed in advance to publication, was basically right.
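
A quick R simulation of the variance-compression effect, under purely illustrative assumptions: twenty unbiased polls of a true 6.5-point Conservative lead, with any result that looks "out of line" with the average going unpublished:

set.seed(1)
polls <- rnorm(20, mean=6.5, sd=3)  # unbiased but noisy individual polls
# each pollster sits on any result more than 2 points from the herd average:
published <- polls[abs(polls - mean(polls)) <= 2]
sd(polls)      # the honest spread across all polls
sd(published)  # the artificially compressed spread among published polls
# and when the herd consensus is itself off, as in 2015, the same filter
# also drags the published average toward the wrong consensus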

This is now the third time in living memory that opinion polls have been embarrassingly wrong about the UK general election. Each time this has led to big changes in the polling industry. I would suggest that one important scientific improvement would be for polling companies to announce the methodology of a poll, and any adjustments to be made, before the poll takes place, and to commit to publishing all polls they carry out. Once this became the norm, data from any polling company that didn't follow this practice would rightly be seen as unreliable by comparison.
